ED-T2V: An Efficient Training Framework for Diffusion-based Text-to-Video Generation.

IJCNN (2023)

Abstract
Diffusion models have achieved remarkable performance on image generation. However, it is difficult to reproduce this success on video generation because of the expensive training cost. In fact, pretrained image generation models have already acquired visual generation capabilities and can be utilized for video generation. Thus, we propose an Efficient training framework for Diffusion-based Text-to-Video generation (ED-T2V), which is built on a pretrained text-to-image generation model. To model temporal dynamic information, we propose temporal transformer blocks with novel identity attention and temporal cross-attention. ED-T2V has the following advantages: 1) most of the parameters of the pretrained model are frozen to inherit its generation capabilities and reduce the training cost; 2) identity attention requires the currently generated frame to attend to all positions of its previous frame, providing an efficient way to keep the main content consistent across frames while enabling movement generation; 3) temporal cross-attention constructs associations between the textual description and multiple video tokens along the time dimension, which models video movement better than traditional cross-attention. With the aforementioned benefits, ED-T2V not only significantly reduces the training cost of video diffusion models but also achieves excellent generation fidelity and controllability.
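
The two attention mechanisms described in the abstract can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' implementation: the module names (IdentityAttention, TemporalCrossAttention), the tensor layout (batch, frames, tokens, dim), and the use of nn.MultiheadAttention are placeholders chosen for clarity.

```python
# Hedged sketch of identity attention and temporal cross-attention,
# based only on the abstract's description; all names and shapes are assumptions.
import torch
import torch.nn as nn


class IdentityAttention(nn.Module):
    """Each frame's tokens attend to all token positions of the previous frame."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim) -- per-frame spatial tokens
        b, f, n, d = x.shape
        # Frame t queries frame t-1 as key/value; frame 0 attends to itself.
        prev = torch.cat([x[:, :1], x[:, :-1]], dim=1)
        q = x.reshape(b * f, n, d)
        kv = prev.reshape(b * f, n, d)
        out, _ = self.attn(q, kv, kv)
        return out.reshape(b, f, n, d)


class TemporalCrossAttention(nn.Module):
    """Video tokens grouped along the time axis attend to the text tokens."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim), text: (batch, text_len, dim)
        b, f, n, d = x.shape
        # Group tokens at the same spatial position across frames, so the
        # query sequence runs over time and can relate motion to the text.
        q = x.permute(0, 2, 1, 3).reshape(b * n, f, d)
        kv = text.repeat_interleave(n, dim=0)
        out, _ = self.attn(q, kv, kv)
        return out.reshape(b, n, f, d).permute(0, 2, 1, 3)


if __name__ == "__main__":
    video = torch.randn(2, 8, 16, 64)   # (batch, frames, tokens, dim)
    text = torch.randn(2, 10, 64)       # (batch, text_len, dim)
    y = IdentityAttention(64)(video)
    z = TemporalCrossAttention(64)(video, text)
    print(y.shape, z.shape)             # both: (2, 8, 16, 64)
```

In this reading, identity attention keeps content consistent by always conditioning a frame on its predecessor, while temporal cross-attention lets each spatial position's trajectory over time be guided by the text; how the original work parameterizes or fuses these blocks is not specified in the abstract.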
Key words
diffusion-based text-to-video generation, ED-T2V, efficient training framework, identity attention, movement generation, pretrained text-to-image generation model, temporal cross-attention, temporal dynamic information, temporal transformer blocks, textual description, video diffusion models, video movement, video tokens, visual generation capabilities