JOKR: Joint Keypoint Representation for Unsupervised Video Retargeting

COMPUTER GRAPHICS FORUM (2022)

Abstract
In unsupervised video retargeting, content is transferred from one video to another while preserving the original appearance and style, without any additional annotations. While deep neural networks have driven substantial progress on this task, current methods struggle when the source and target videos differ in shape, such as in limb lengths or other body proportions. In this work, we consider the task for objects of different shapes and appearances that share similar skeleton connectivity and depict similar motion. We introduce JOKR, a JOint Keypoint Representation that captures the geometry common to both videos while being disentangled from their unique styles. Our model first extracts unsupervised keypoints from the given videos. From this representation, two decoders, one per input sequence, reconstruct geometry and appearance. By applying an affine-invariant domain confusion term to the keypoint bottleneck, we force the unsupervised keypoint representations of the two videos to be indistinguishable. This encourages the disentanglement of motion from appearance, mapping similar poses from both domains to the same representation, and yields a sequence with the appearance and style of one video but the content of the other. We demonstrate the applicability of our method on challenging video pairs, in comparison to state-of-the-art methods. Furthermore, we show that this geometry-driven representation enables intuitive control, such as temporal coherence and manual pose editing. Videos can be viewed in the supplemental HTML.
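The two ideas at the core of the abstract, an affine-invariant view of the keypoints and a domain confusion objective, can be illustrated with a minimal sketch. This is not the paper's implementation: `whiten`, `affine_invariant_gram`, and `confusion_loss` are hypothetical names, the actual method learns keypoints and uses a neural discriminator, and this toy version only shows why removing the affine component makes two keypoint sets comparable across domains.

```python
import numpy as np

def whiten(kp):
    """Map keypoints (N, 2) to a canonical frame: any affine transform
    of the input yields the same result up to an orthogonal transform."""
    centered = kp - kp.mean(axis=0)               # remove translation
    cov = centered.T @ centered / len(kp)
    w, V = np.linalg.eigh(cov)                    # cov = V diag(w) V^T
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return centered @ inv_sqrt                    # remove the linear part

def affine_invariant_gram(kp):
    """Gram matrix of whitened keypoints: exactly affine-invariant,
    so poses can be matched regardless of body proportions."""
    z = whiten(kp)
    return z @ z.T

def confusion_loss(logits):
    """Toy domain confusion term: push a domain discriminator's
    outputs toward 0.5, i.e. make the two keypoint distributions
    indistinguishable."""
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(0.5 * np.log(p) + 0.5 * np.log(1.0 - p))
```

For example, stretching or shearing one skeleton (an affine change of limb proportions) leaves `affine_invariant_gram` unchanged, which is the property the domain confusion term exploits over the keypoint bottleneck.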
Keywords
video retargeting,video generation