
Autoregressive Co-Training for Learning Discrete Speech Representation.

Conference of the International Speech Communication Association (INTERSPEECH), 2022

Abstract
While several self-supervised approaches for learning discrete speech representation have been proposed, it is unclear how these seemingly similar approaches relate to each other. In this paper, we consider a generative model with discrete latent variables that learns a discrete representation for speech. The objective of learning the generative model is formulated as information-theoretic co-training. Besides the wide generality, the objective can be optimized with several approaches, subsuming HuBERT-like training and vector quantization for learning discrete representation. Empirically, we find that the proposed approach learns discrete representation that is highly correlated with phonetic units, more correlated than HuBERT-like training and vector quantization.
Key words
Self-supervised learning, co-training, discrete representation learning
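The abstract contrasts the proposed co-training objective with vector quantization as a way to obtain discrete speech units. As an illustrative aside only (not the paper's method, and with hypothetical array sizes and a random toy codebook), the Python sketch below shows the basic nearest-codeword vector quantization step that such baselines rely on: each continuous frame feature is mapped to the index of its closest codeword.

import numpy as np

def vector_quantize(features, codebook):
    # Squared Euclidean distance from every frame to every codeword: shape (T, K).
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dists.argmin(axis=1)      # one discrete unit per frame, shape (T,)
    quantized = codebook[codes]       # frames replaced by their nearest codeword, shape (T, D)
    return codes, quantized

# Toy usage with assumed sizes: 100 frames, 39-dim features, 50 discrete units.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 39))
codebook = rng.normal(size=(50, 39))
codes, quantized = vector_quantize(features, codebook)
print(codes[:10])

In practice the codebook would be learned jointly with the feature extractor; this sketch only illustrates the discretization step that the paper's objective is said to subsume.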