DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines
arXiv (2024)
Abstract
Diffusion models have emerged as dominant performers for image generation. To
support training large diffusion models, this paper studies pipeline-parallel
training of diffusion models and proposes DiffusionPipe, a synchronous pipeline
training system that introduces an innovative pipeline bubble filling technique
catering to the structural characteristics of diffusion models. State-of-the-art
diffusion models typically include trainable parts (the backbone) and non-trainable
parts (e.g., frozen input encoders). We first unify optimal stage partitioning
and pipeline scheduling of single and multiple backbones in representative
diffusion models with a dynamic programming approach. We then propose filling
the computation of non-trainable model parts into idle periods of the pipeline
training of the backbones with an efficient greedy algorithm, thus achieving high
training throughput. Extensive experiments show that DiffusionPipe achieves
up to 1.41x speedup over pipeline-parallel methods and 1.28x speedup over data-parallel
training on popular diffusion models.
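The bubble-filling idea can be illustrated with a minimal sketch: greedily pack non-trainable computation (e.g., frozen-encoder forward passes) into idle periods ("bubbles") of the backbone's pipeline schedule. The function name, the first-fit-decreasing heuristic, and the duration-based model below are illustrative assumptions, not the paper's exact algorithm.

```python
def fill_bubbles(bubbles, tasks):
    """Greedily assign non-trainable tasks to pipeline bubbles.

    bubbles: list of idle durations (one per bubble in the schedule)
    tasks:   list of non-trainable computation durations
    Returns (assignment, leftover), where assignment[i] lists the task
    durations placed into bubble i, and leftover holds tasks that fit
    in no bubble (they would run outside the pipeline's idle time).
    """
    remaining = list(bubbles)              # idle time left in each bubble
    assignment = [[] for _ in bubbles]
    leftover = []
    # Largest tasks first: a simple first-fit-decreasing heuristic
    # (a stand-in for the paper's greedy algorithm).
    for t in sorted(tasks, reverse=True):
        for i, r in enumerate(remaining):
            if t <= r:                     # task fits in this bubble
                assignment[i].append(t)
                remaining[i] -= t
                break
        else:
            leftover.append(t)             # no bubble can absorb it
    return assignment, leftover
```

For example, with bubbles of 5 and 3 time units and tasks of 4, 2, and 2 units, the sketch places the 4-unit task in the first bubble and one 2-unit task in the second, leaving one task unfilled.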