Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation
arXiv (2023)
Abstract
Implicit Neural Representation (INR) is an innovative approach for
representing complex shapes or objects without explicitly defining their
geometry or surface structure. Instead, INR represents objects as continuous
functions. Previous research has demonstrated the effectiveness of using neural
networks as INR for image compression, showcasing comparable performance to
traditional methods such as JPEG. However, INR holds potential for various
applications beyond image compression. This paper introduces Rapid-INR, a novel
approach that utilizes INR for encoding and compressing images, thereby
accelerating neural network training in computer vision tasks. Our methodology
involves storing the whole dataset directly in INR format on a GPU, mitigating
the significant data communication overhead between the CPU and GPU during
training. Additionally, the decoding process from INR to RGB format is highly
parallelized and executed on-the-fly. To further enhance compression, we
propose iterative and dynamic pruning, as well as layer-wise quantization,
building upon previous work. We evaluate our framework on the image
classification task, utilizing the ResNet-18 backbone network and three
commonly used datasets with varying image sizes. Rapid-INR reduces memory
consumption to only about 5% of the original dataset size in RGB format and
achieves a maximum 6× speedup over the PyTorch training pipeline, as well
as a maximum 1.2× speedup over the DALI training pipeline, with only a
marginal decrease in accuracy. Importantly, Rapid-INR can be readily applied to
other computer vision tasks and backbone networks with reasonable engineering
effort. Our code is publicly available at
https://github.com/sharc-lab/Rapid-INR.
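
As a concrete illustration of the core idea, here is a minimal hypothetical
sketch (not the authors' implementation; see the repository above for that)
of fitting a small coordinate-MLP INR to one image and decoding it back to
RGB in a single batched forward pass on the GPU, mirroring the on-the-fly,
CPU-free decoding described in the abstract. The network width, depth,
positional-encoding size, and training schedule are illustrative
assumptions.

```python
# Minimal hypothetical sketch (PyTorch); the class and helper names are
# illustrative, not the names used in the Rapid-INR repository.
import torch
import torch.nn as nn

class ImageINR(nn.Module):
    """MLP mapping a 2-D pixel coordinate in [-1, 1]^2 to an RGB value."""
    def __init__(self, hidden=64, depth=3, num_freqs=10):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 4 * num_freqs            # sin and cos per frequency per axis
        layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers += [nn.Linear(hidden, 3), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def encode(self, xy):
        # Fourier positional encoding so the MLP can represent fine detail.
        freqs = 2.0 ** torch.arange(self.num_freqs, device=xy.device,
                                    dtype=xy.dtype) * torch.pi
        angles = xy.unsqueeze(-1) * freqs  # (N, 2, F)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)

    def forward(self, xy):
        return self.net(self.encode(xy))

def pixel_grid(h, w, device):
    """All pixel-center coordinates of an h x w image, shape (h*w, 2)."""
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)

def fit_inr(image, steps=2000, lr=1e-3):
    """Encode one image (3, H, W) in [0, 1], already on the GPU, as an INR."""
    h, w = image.shape[1], image.shape[2]
    coords = pixel_grid(h, w, image.device)
    target = image.permute(1, 2, 0).reshape(-1, 3)
    model = ImageINR().to(image.device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(coords), target).backward()
        opt.step()
    return model

@torch.no_grad()
def decode(model, h, w, device):
    """On-the-fly decode: one batched forward pass, no CPU round trip."""
    rgb = model(pixel_grid(h, w, device))
    return rgb.reshape(h, w, 3).permute(2, 0, 1)  # back to (3, H, W)
```

At dataset scale, the idea described in the abstract is that each image is
stored only as such a weight vector resident in GPU memory, and a training
step reconstructs a mini-batch of RGB images by evaluating the corresponding
INRs in parallel, so no image data crosses the CPU-GPU link during training.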
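
The compression techniques named in the abstract can be sketched in the same
hypothetical style. The snippet below applies one-shot global magnitude
pruning (a simplified stand-in for the paper's iterative and dynamic
pruning) and per-layer 8-bit min/max quantization (one simple form of
layer-wise quantization) to a fitted INR. The 50% sparsity target, the uint8
scheme, and the helper names prune_inr / quantize_layerwise / dequantize_into
are assumptions, not the paper's settings.

```python
# Hypothetical sketch, not the paper's code: one-shot global magnitude
# pruning plus per-layer (layer-wise) 8-bit min/max quantization of the
# weights of a fitted INR such as ImageINR above.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_inr(model: nn.Module, amount: float = 0.5):
    """Zero out the smallest-magnitude weights across all linear layers."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, nn.Linear)]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)
    for module, name in params:  # bake the pruning masks into the tensors
        prune.remove(module, name)
    return model

@torch.no_grad()
def quantize_layerwise(model: nn.Module):
    """Affine uint8 quantization with a separate (scale, offset) per layer."""
    packed = {}
    for name, p in model.named_parameters():
        lo, hi = p.min(), p.max()
        scale = (hi - lo).clamp(min=1e-8) / 255.0
        packed[name] = (((p - lo) / scale).round().to(torch.uint8), scale, lo)
    return packed

@torch.no_grad()
def dequantize_into(packed, model: nn.Module):
    """Restore float weights in place: w ≈ q * scale + offset."""
    for name, p in model.named_parameters():
        q, scale, lo = packed[name]
        p.copy_(q.float() * scale + lo)
    return model
```

A real deployment would presumably keep the uint8 tensors resident on the
GPU and store the pruned weights in a sparse layout, dequantizing per layer
just before decoding; the sketch only shows where each technique slots in.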