Split-Transformer Impute (STI): A Transformer Framework for Genotype Imputation

bioRxiv (2024)

Abstract
Despite recent advances in sequencing technologies, genome-scale datasets continue to have missing bases and genomic segments. Such incomplete datasets can undermine downstream analyses, such as disease risk prediction and association studies. Consequently, imputation of missing information is a common pre-processing step, and many methodologies have been developed for it. However, imputing genotypes in certain genomic regions and for certain variants, including large structural variants, remains a challenging problem. Here, we present a transformer-based deep learning framework, called Split-Transformer Impute (STI), for accurate genome-scale genotype imputation. Empowered by the attention-based transformer architecture, STI can be trained automatically on any collection of genomes using self-supervision. STI handles multi-allelic genotypes naturally, unlike other models that require special treatment. STI models automatically learned genome-wide patterns of linkage disequilibrium (LD), evidenced by much higher imputation accuracy in high-LD regions. In addition, STI models trained with sporadic masking for self-supervision performed well at imputing systematically missing information. Our imputation results on the human 1000 Genomes Project show that STI achieves high imputation accuracy, comparable to state-of-the-art genotype imputation methods, with the additional capability of imputing multi-allelic structural variants and other types of genetic variants. Moreover, when applied to a collection of yeast genomes, STI showed excellent performance without any special presuppositions about patterns in the underlying data, pointing to easy adaptability of STI for imputing missing genotypes in any species.

Competing Interest Statement: The authors have declared no competing interest.
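To make the abstract's training strategy concrete, the sketch below illustrates the general idea of sporadic-masking self-supervision with a transformer encoder applied to genotype tokens. This is not the authors' STI implementation; the model size, number of genotype classes, mask rate, and all names (GenotypeImputer, masked_loss, MASK_TOKEN) are illustrative assumptions, shown here only to clarify how randomly masked positions can be reconstructed from attention over the surrounding variants.

```python
# Minimal sketch (assumed, not the authors' code): a transformer encoder
# trained to reconstruct sporadically masked genotype tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4           # illustrative genotype categories (multi-allelic handled as extra classes)
MASK_TOKEN = NUM_CLASSES  # extra token id marking masked positions
SEQ_LEN, D_MODEL = 512, 64

class GenotypeImputer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES + 1, D_MODEL)      # +1 for the MASK token
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, D_MODEL))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, NUM_CLASSES)               # per-position genotype prediction

    def forward(self, tokens):
        h = self.embed(tokens) + self.pos                         # (batch, length, d_model)
        return self.head(self.encoder(h))                         # (batch, length, classes)

def masked_loss(model, genotypes, mask_rate=0.15):
    """Sporadic masking: hide random positions and reconstruct them."""
    mask = torch.rand(genotypes.shape) < mask_rate
    inputs = genotypes.masked_fill(mask, MASK_TOKEN)
    logits = model(inputs)
    return F.cross_entropy(logits[mask], genotypes[mask])

# Usage: one training step on a stand-in batch of genotype windows.
model = GenotypeImputer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randint(0, NUM_CLASSES, (8, SEQ_LEN))               # random placeholder data
loss = masked_loss(model, batch)
loss.backward()
opt.step()
```

Because the loss is computed only at masked positions, the encoder must infer each hidden genotype from correlated neighboring variants, which is how such a model can pick up LD-like structure during training.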