
On Penalty Parameter Selection For Estimating Network Models

Multivariate Behavioral Research (2021)

Abstract
Network models are gaining popularity as a way to estimate direct effects among psychological variables and to investigate the structure of constructs. A key feature of network estimation is determining which edges are likely to be non-zero. In psychology, this is commonly achieved through the graphical lasso, a regularization method that estimates a precision matrix of Gaussian variables using an ℓ1-penalty to push small values to zero. A tuning parameter, lambda, controls the sparsity of the network. There are many methods to select lambda, and they can lead to vastly different graphs. The most common approach in psychological network applications is to minimize the extended Bayesian information criterion, but the consistency of this method for model selection has primarily been examined in high-dimensional settings (i.e., n < p) that are uncommon in psychology. Further, there is some evidence that alternative selection methods may have superior performance. Here, using simulation, we compare four different methods for selecting lambda: the stability approach to regularization selection (StARS), K-fold cross-validation, the rotation information criterion (RIC), and the extended Bayesian information criterion (EBIC). Our results demonstrate that penalty parameter selection should be made based on data characteristics and the inferential goal (e.g., to increase sensitivity versus to avoid false positives). We end with recommendations for selecting the penalty parameter when using the graphical lasso.
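To make the procedure concrete, the following is a minimal sketch of one of the four selection methods the abstract compares, K-fold cross-validation for the graphical lasso penalty, using scikit-learn's `GraphicalLassoCV`. The data here are simulated independent Gaussians purely for illustration; the dimensions (n = 500, p = 10) are assumptions chosen to mimic a typical low-dimensional psychological dataset, not values from the paper.

```python
# Sketch: selecting the graphical lasso penalty (lambda, called "alpha"
# in scikit-learn) by 5-fold cross-validation, then reading off the
# estimated network edges from the precision matrix.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n, p = 500, 10                      # n > p: the low-dimensional setting common in psychology
X = rng.standard_normal((n, p))     # illustrative data; real use would load item scores

model = GraphicalLassoCV(cv=5).fit(X)   # grid-searches lambda via 5-fold CV
precision = model.precision_            # estimated (sparse) precision matrix

# Non-zero off-diagonal entries of the precision matrix correspond to
# edges in the Gaussian graphical model.
edges = np.abs(precision[np.triu_indices(p, k=1)]) > 1e-8
print("selected lambda:", model.alpha_)
print("number of edges:", edges.sum())
```

The other three selection methods (EBIC, StARS, RIC) are not implemented in scikit-learn; in psychological applications EBIC selection is typically done via R packages such as qgraph or bootnet.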
Key words
Network analysis, partial correlation networks, regularization, simulation study, penalty selection