On the Necessity of Disentangled Representations for Downstream Tasks

ICLR 2023
Abstract
A disentangled representation encodes the generative factors of data in a separable and compact pattern, and it is therefore widely believed that this representation format benefits downstream tasks. In this paper, we challenge the necessity of disentangled representations for downstream applications. Specifically, we show that dimension-wise disentangled representations are not necessary for downstream tasks that use neural networks taking learned representations as input. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Moreover, our study reveals that the informativeness of a representation best accounts for downstream performance. The positive correlation between informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works.
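To make the evaluation setup concrete, below is a minimal sketch (not code from the paper) of the kind of downstream task the abstract describes: a small neural network trained on top of frozen, pre-learned representations, so that downstream performance reflects only what the representation encodes. The representation dimension, hidden width, and classification task are illustrative assumptions.

```python
# Minimal sketch of a downstream probe over frozen representations.
# All sizes here (rep_dim=10, hidden=64, n_classes=2) are assumptions
# for illustration, not details taken from the paper.
import torch
import torch.nn as nn


class DownstreamProbe(nn.Module):
    """A small MLP trained on frozen representations z = encoder(x)."""

    def __init__(self, rep_dim: int = 10, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, rep_dim) representations from a frozen encoder.
        # Only the probe is trained, so its accuracy measures how much
        # task-relevant information the representation already carries,
        # regardless of whether its dimensions are disentangled.
        return self.net(z)


# Usage sketch: detach() freezes the representation so no gradients
# flow back into the (hypothetical) encoder.
# z = encoder(x).detach()
# logits = DownstreamProbe(rep_dim=z.shape[1])(z)
```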
Keywords
representation disentanglement, representation learning, downstream task