Learning immune receptor representations with protein language models

arXiv (2024)

Abstract
Protein language models (PLMs) learn contextual representations from protein sequences and are profoundly impacting various scientific disciplines spanning protein design, drug discovery, and structure prediction. One particular research area where PLMs have gained considerable attention is adaptive immune receptors, whose tremendous sequence diversity dictates the functional recognition of the adaptive immune system. The self-supervised nature underlying the training of PLMs has recently been leveraged to implement a variety of immune receptor-specific PLMs. These models have demonstrated promise in tasks such as predicting antigen specificity and structure, computationally engineering therapeutic antibodies, and diagnostics. However, challenges including insufficient training data and considerations related to model architecture, training strategies, and data and model availability must be addressed before fully unlocking the potential of PLMs in understanding, translating, and engineering immune receptors.
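The self-supervised objective mentioned above is, in most immune receptor PLMs, some form of masked language modeling: residues in a receptor sequence are hidden, and the model is trained to recover them from context. A minimal sketch of the masking step is below; the sequence, mask rate, and function names are illustrative assumptions, not taken from any specific model in the paper.

```python
import random

# The 20 standard amino acids; real PLM vocabularies also include
# special tokens (padding, start/end, mask, etc.).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MASK = "<mask>"

def mask_sequence(seq, mask_rate=0.15, rng=None):
    """Randomly replace ~mask_rate of residues with a mask token.

    Returns the masked token list and the (position, residue) pairs
    the model would be trained to predict.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    tokens = list(seq)
    targets = []
    for i, aa in enumerate(tokens):
        if rng.random() < mask_rate:
            targets.append((i, aa))
            tokens[i] = MASK
    return tokens, targets

# Hypothetical CDR3-like loop from an antibody heavy chain.
seq = "CARDYYGSGSYYFDYW"
masked, targets = mask_sequence(seq, mask_rate=0.3)
```

During training, the model receives `masked` as input and is penalized (via cross-entropy over the amino-acid vocabulary) only at the `targets` positions; because no labels beyond the sequence itself are needed, the enormous unlabeled repertoire-sequencing datasets can be used directly.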