D'OH: Decoder-Only random Hypernetworks for Implicit Neural Representations
CoRR (2024)
Abstract
Deep implicit functions have been found to be an effective tool for
efficiently encoding all manner of natural signals. Their attractiveness stems
from their ability to compactly represent signals with little to no offline
training data. Instead, they leverage the implicit bias of deep networks to
decouple hidden redundancies within the signal. In this paper, we explore the
hypothesis that additional compression can be achieved by leveraging the
redundancies that exist between layers. We propose to use a novel run-time
decoder-only hypernetwork - that uses no offline training data - to better
model this cross-layer parameter redundancy. Previous applications of
hypernetworks with deep implicit functions have applied feed-forward
encoder/decoder frameworks that rely on large offline datasets and do not
generalize beyond the signals they were trained on. We instead present a
strategy for the initialization of run-time deep implicit functions for
single-instance signals through a Decoder-Only randomly projected Hypernetwork
(D'OH). By directly changing the dimension of a latent code to approximate a
target implicit neural architecture, we provide a natural way to vary the
memory footprint of neural representations without the costly need for neural
architecture search on a space of alternative low-rate structures.
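
To make the mechanism concrete, below is a minimal sketch of the idea the abstract describes: a single trainable latent code is decoded into the weights of a target coordinate MLP through fixed random projection matrices, so the latent dimension alone sets the memory footprint. The class name `DOHNetwork`, the Gaussian projection scaling, the SIREN-style sine activation, and the training loop are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DOHNetwork(nn.Module):
    """Decoder-only random hypernetwork for a coordinate MLP (sketch).

    A single trainable latent code is decoded into each layer's weight
    matrix through a fixed (never-trained) Gaussian random projection,
    so the latent dimension alone sets the memory footprint.
    """

    def __init__(self, layer_shapes, latent_dim, omega0=30.0):
        super().__init__()
        self.layer_shapes = layer_shapes  # [(out_features, in_features), ...]
        self.omega0 = omega0              # sine frequency (SIREN-style)
        self.latent = nn.Parameter(torch.randn(latent_dim) / latent_dim ** 0.5)
        for i, (out_f, in_f) in enumerate(layer_shapes):
            # Fixed random decoder for layer i; the scaling is an assumption
            # chosen to keep decoded weights at a SIREN-like magnitude.
            B = torch.randn(out_f * in_f, latent_dim) / (in_f * latent_dim) ** 0.5
            self.register_buffer(f"proj_{i}", B)
        # Biases stay as small trainable vectors (negligible storage cost).
        self.biases = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_f)) for out_f, _ in layer_shapes]
        )

    def forward(self, x):
        n_layers = len(self.layer_shapes)
        for i, (out_f, in_f) in enumerate(self.layer_shapes):
            B = getattr(self, f"proj_{i}")           # fixed projection
            W = (B @ self.latent).view(out_f, in_f)  # decoded layer weights
            x = F.linear(x, W, self.biases[i])
            if i < n_layers - 1:
                x = torch.sin(self.omega0 * x)       # sine nonlinearity
        return x

# Fit a single signal (2D coordinates -> RGB). Only the latent code and
# the biases receive gradients; the projections are frozen buffers.
model = DOHNetwork([(64, 2), (64, 64), (3, 64)], latent_dim=256)
coords = torch.rand(1024, 2) * 2 - 1
target = torch.rand(1024, 3)                         # stand-in signal
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(model(coords), target)
    loss.backward()
    opt.step()
```

Because the projections are fixed, they can in principle be regenerated from a shared random seed, so only the latent code and the small bias vectors would need to be stored to reconstruct the signal; varying `latent_dim` then trades memory footprint against reconstruction quality without any search over alternative architectures.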