Hierarchical Patch-wise Diffusion Models for High-Resolution Video Generation
CVPR 2024
Abstract
Diffusion models have demonstrated remarkable performance in image and video
synthesis. However, scaling them to high-resolution inputs is challenging and
requires restructuring the diffusion pipeline into multiple independent
components, limiting scalability and complicating downstream applications. To
address this, we present patch diffusion models (PDMs), a diffusion paradigm
that models the distribution of patches rather than whole inputs. This makes
training very efficient and unlocks end-to-end optimization on
high-resolution videos. We improve PDMs in two principled ways. First, to
enforce consistency between patches, we develop deep context fusion – an
architectural technique that propagates the context information from low-scale
to high-scale patches in a hierarchical manner. Second, to accelerate training
and inference, we propose adaptive computation, which allocates more network
capacity and computation towards coarse image details. The resulting model sets
a new state-of-the-art FVD score of 66.32 and Inception Score of 87.68 in
class-conditional video generation on UCF-101 256^2, surpassing recent
methods by more than 100%. Then, we show that our model can be quickly
fine-tuned from a base 36 × 64 low-resolution generator for high-resolution 64 ×
288 × 512 text-to-video synthesis. To the best of our knowledge, our
model is the first diffusion-based architecture which is trained on such high
resolutions entirely end-to-end. Project webpage:
https://snap-research.github.io/hpdm.
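The deep context fusion idea described above can be illustrated with a toy sketch: low-scale context features are upsampled to the high-scale grid, the region that spatially aligns with the patch is cropped, and the two are fused channel-wise. The function name, the nearest-neighbour upsampling, and channel concatenation as the fusion operator are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def deep_context_fusion(patch_feats, low_res_feats, patch_y, patch_x, scale):
    """Toy sketch of fusing low-scale context into a high-scale patch.

    patch_feats:   (C, H, W) features of one high-scale patch
    low_res_feats: (C, h, w) features of the full low-scale stage
    patch_y, patch_x: top-left corner of the patch in high-res coordinates
    scale: spatial upsampling factor between the two stages
    """
    # Nearest-neighbour upsample the low-scale features to high-res resolution.
    up = low_res_feats.repeat(scale, axis=1).repeat(scale, axis=2)
    # Crop the context region that spatially aligns with this patch.
    _, H, W = patch_feats.shape
    ctx = up[:, patch_y:patch_y + H, patch_x:patch_x + W]
    # Concatenate along channels; a learned layer would mix these in practice.
    return np.concatenate([patch_feats, ctx], axis=0)
```

In a hierarchy, the output of each stage becomes the context for the next, so coarse global structure propagates down to every local patch.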