On Adversarial Mixup Resynthesis
NeurIPS 2019
Abstract
In this paper, we explore new approaches to combining information encoded within the learned representations of auto-encoders. We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
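The two mixing functions the abstract mentions, interpolation of hidden states and masked combination of latent codes, can be sketched as follows. This is a minimal illustration in NumPy, with function names of my own choosing; the full method additionally decodes the mixed code and trains it against an adversarial discriminator, which is omitted here.

```python
import numpy as np

def mixup_interpolate(h1, h2, alpha):
    """Linearly interpolate two latent codes: alpha*h1 + (1-alpha)*h2."""
    return alpha * h1 + (1.0 - alpha) * h2

def mixup_mask(h1, h2, p=0.5, rng=None):
    """Combine two latent codes with a random binary mask:
    each coordinate is taken from h1 with probability p, else from h2."""
    rng = np.random.default_rng() if rng is None else rng
    m = rng.random(h1.shape) < p
    return np.where(m, h1, h2)

# Example: mix two 4-dimensional latent vectors.
h1 = np.ones(4)
h2 = np.zeros(4)
print(mixup_interpolate(h1, h2, 0.3))  # -> [0.3 0.3 0.3 0.3]
print(mixup_mask(h1, h2, p=0.5))       # each entry is 0.0 or 1.0
```

In the paper's setting, the mixed code would be passed through the decoder, and the resulting image is what the discriminator judges as real versus synthesised.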
Keywords
supervised learning, semi-supervised learning