The universe is worth 64³ pixels: convolution neural network and vision transformers for cosmology

JOURNAL OF COSMOLOGY AND ASTROPARTICLE PHYSICS (2023)

Abstract
We present a novel approach for estimating the cosmological parameters Ω_m, σ_8, w_0, and one derived parameter, S_8, from 3D lightcone data of dark matter halos in redshift space covering a sky area of 40° × 40° and a redshift range of 0.3 < z < 0.8, binned into 64³ voxels. Using two deep learning algorithms, a Convolutional Neural Network (CNN) and a Vision Transformer (ViT), we compare their performance with the standard two-point correlation function (2pcf). Our results indicate that the CNN yields the best performance, while the ViT also demonstrates significant potential in predicting cosmological parameters. By combining the outcomes of the Vision Transformer, Convolutional Neural Network, and 2pcf, we achieve a substantial reduction in error compared to the 2pcf alone. To better understand the inner workings of the machine learning algorithms, we employ the Grad-CAM method to investigate the sources of essential information in heatmaps of the CNN and ViT. Our findings suggest that the algorithms focus on different parts of the density field and different redshifts depending on which parameter they are predicting. This proof-of-concept work paves the way for incorporating deep learning methods to estimate cosmological parameters from large-scale structure, potentially leading to tighter constraints and an improved understanding of the Universe.
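The data flow the abstract describes, a voxelized density field passed through a convolutional stage and a readout that regresses cosmological parameters, can be sketched as below. This is a minimal illustrative sketch with assumed architecture, grid size, and random weights, not the authors' implementation; `conv3d` and all variable names are hypothetical.

```python
# Illustrative sketch only: a small 3D field -> convolution + ReLU ->
# global average pool -> linear readout of three parameters, mimicking
# the (Omega_m, sigma_8, w_0) regression described in the abstract.
import numpy as np

def conv3d(field, kernel):
    """Valid (no-padding) 3D convolution of a cubic field with a cubic kernel."""
    k = kernel.shape[0]
    n = field.shape[0] - k + 1
    out = np.empty((n, n, n))
    for i in range(n):
        for j in range(n):
            for l in range(n):
                out[i, j, l] = np.sum(field[i:i + k, j:j + k, l:l + k] * kernel)
    return out

rng = np.random.default_rng(0)
density = rng.standard_normal((16, 16, 16))   # small stand-in for the 64^3 voxel grid
kernel = rng.standard_normal((3, 3, 3))       # one untrained convolution filter

features = np.maximum(conv3d(density, kernel), 0.0)  # convolution + ReLU
pooled = features.mean()                             # global average pooling
w = rng.standard_normal(3)                           # untrained readout weights
b = np.zeros(3)
params = w * pooled + b                              # 3 outputs: (Omega_m, sigma_8, w_0)
print(params.shape)
```

In practice the network would have many such filters and trained weights, and the ViT variant would instead split the volume into patches and apply self-attention, but the input/output shapes are the same.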
Keywords
cosmological parameters from LSS, machine learning, dark energy experiments, galaxy clustering