Identifiable and interpretable nonparametric factor analysis
arXiv (Cornell University), 2023
Abstract
Factor models have been widely used to summarize the variability of
high-dimensional data through a set of factors with much lower dimensionality.
Gaussian linear factor models have been particularly popular due to their
interpretability and ease of computation. However, in practice, data often
violate the multivariate Gaussian assumption. To characterize higher-order
dependence and nonlinearity, models that include factors as predictors in
flexible multivariate regression are popular: Gaussian process latent variable
models (GP-LVMs) place Gaussian process priors on the regression function,
while variational autoencoders (VAEs) use deep neural networks. Unfortunately,
such approaches lack identifiability and
interpretability and tend to produce brittle and non-reproducible results. To
address these problems, we propose the NIFTY framework, which simplifies the
nonparametric factor model while maintaining flexibility: it parsimoniously
transforms uniform latent variables through one-dimensional nonlinear mappings
and then applies a linear generative model. The induced multivariate
distribution falls into a flexible class while maintaining simple computation
and interpretation. We prove that this model is identifiable and empirically
study NIFTY using simulated data, observing good performance in density
estimation and data visualization. We then apply NIFTY to bird song data in an
environmental monitoring application.
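The generative structure described in the abstract can be sketched in a few lines of NumPy. This is an illustrative simulation only, not the authors' implementation: the dimensions, the particular one-dimensional nonlinear maps, and the noise scale below are all hypothetical choices (in NIFTY these maps and the loadings are learned from data).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 10, 2  # samples, observed dimension, latent dimension (illustrative)

# Step 1: uniform latent variables
u = rng.uniform(size=(n, k))

# Step 2: coordinate-wise one-dimensional nonlinear transforms
# (hypothetical choices standing in for the learned maps)
f = [lambda t: np.sin(2 * np.pi * t), lambda t: t ** 2]
eta = np.column_stack([f[j](u[:, j]) for j in range(k)])

# Step 3: linear Gaussian generative step, x = Lambda @ eta + noise
Lambda = rng.normal(size=(p, k))
x = eta @ Lambda.T + 0.1 * rng.normal(size=(n, p))
```

The key simplification relative to GP-LVMs and VAEs is that all nonlinearity is confined to the one-dimensional maps in step 2, while the mixing across coordinates in step 3 stays linear, which is what keeps the model interpretable and, as the paper proves, identifiable.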