LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models
arXiv, 2024
Abstract
In the era of AIGC, the demand for low-budget and even on-device applications
of diffusion models has emerged. For compressing the Stable Diffusion models
(SDMs), several approaches have been proposed, most of which leverage
handcrafted layer removal to obtain smaller U-Nets, along with knowledge
distillation to recover network performance. However, such handcrafted layer
removal is inefficient and lacks scalability and generalization, and the
feature distillation employed in the retraining phase suffers from an
imbalance issue: a few numerically large feature-loss terms dominate the
others throughout retraining. To this end, we propose layer pruning and
normalized distillation for compressing diffusion models (LAPTOP-Diff). We
1) introduce a layer pruning method to compress the SDM's U-Net automatically
and propose an effective one-shot pruning criterion whose one-shot performance
is guaranteed by its good additivity property, surpassing other layer pruning
and handcrafted layer removal methods, and 2) propose normalized feature
distillation for retraining, which alleviates the imbalance issue. Using the
proposed LAPTOP-Diff, we compressed the U-Nets of SDXL and SDM-v1.5 to the
most advanced performance, achieving a minimal 4.0% decline in PickScore at a
pruning ratio of 50%, whereas the minimal PickScore decline of comparative
methods is 8.2%.
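The one-shot criterion rests on additivity: the quality drop from removing a set of layers is approximately the sum of the drops measured for each layer in isolation, so a good layer subset can be chosen in a single pass without re-evaluating combinations. A minimal sketch of such a selection, assuming per-layer drop scores and parameter counts have already been measured (the function name, the score-per-parameter ordering, and the greedy loop are illustrative assumptions, not the paper's exact procedure):

```python
def one_shot_layer_selection(layer_scores, layer_params, target_ratio):
    """Pick layers to prune under the additivity assumption.

    layer_scores: per-layer output-quality drop when that layer alone is removed
    layer_params: per-layer parameter count
    target_ratio: fraction of total parameters to prune (e.g. 0.5)
    Returns the sorted indices of the layers selected for removal.
    """
    total = sum(layer_params)
    # Additivity lets us rank layers once: prefer layers that cost the
    # least quality drop per parameter removed.
    order = sorted(range(len(layer_scores)),
                   key=lambda i: layer_scores[i] / layer_params[i])
    removed, pruned = [], 0
    for i in order:
        if pruned / total >= target_ratio:
            break
        removed.append(i)
        pruned += layer_params[i]
    return sorted(removed)
```

For example, with drop scores `[0.1, 0.5, 0.2, 0.05]` and equal parameter counts, a 50% target removes the two cheapest layers (indices 3 and 0) in one shot.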
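The imbalance issue arises because feature maps at different U-Net layers differ in numerical scale, so an unnormalized sum of per-layer MSE terms is dominated by the largest ones. One way to see the fix is to put every term on a comparable scale before summing; the sketch below normalizes each term by the magnitude of the corresponding teacher feature (an illustrative normalization, not necessarily the paper's exact formulation, on plain Python lists for self-containment):

```python
def normalized_distillation_loss(student_feats, teacher_feats, eps=1e-8):
    """Sum of per-layer feature MSE terms, each divided by the mean squared
    magnitude of its teacher feature, so no single numerically large layer
    dominates the total. Features are flat lists of floats here for brevity.
    """
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        mse = sum((a - b) ** 2 for a, b in zip(fs, ft)) / len(fs)
        scale = sum(b ** 2 for b in ft) / len(ft) + eps
        loss += mse / scale  # each term is now O(1) regardless of feature scale
    return loss
```

With a teacher feature of magnitude ~1 and another of magnitude ~100, the unnormalized MSE sum would be dominated by the second term by a factor of ~10^4, while the normalized version weights both layers equally.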