Large Language Models are Few-Shot Health Learners

Xin Liu, Daniel McDuff, Geza Kovacs, Isaac Galatzer-Levy, Jacob Sunshine, Jiening Zhan, Ming-Zher Poh, Shun Liao, Paolo Di Achille, Shwetak Patel

CoRR (2023)

Abstract
Large language models (LLMs) can capture rich representations of concepts that are useful for real-world tasks. However, language alone is limited. While existing LLMs excel at text-based inferences, health applications require that models be grounded in numerical data (e.g., vital signs and laboratory values in clinical domains; steps and movement in the wellness domain) that is not easily or readily expressed as text in existing training corpora. We demonstrate that with only few-shot tuning, a large language model is capable of grounding various physiological and behavioral time-series data and making meaningful inferences on numerous health tasks in both clinical and wellness contexts. Using data from wearable and medical sensor recordings, we evaluate these capabilities on the tasks of cardiac signal analysis, physical activity recognition, metabolic calculation (e.g., calories burned), and estimation of stress reports and mental health screeners.
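The few-shot grounding the abstract describes can be pictured as serializing sensor readings into a text prompt with a handful of labeled exemplars. Below is a minimal sketch assuming a heart-rate-based activity-recognition task; the exemplar values, prompt wording, and helper names are illustrative assumptions, not the authors' actual method or code.

```python
# Illustrative sketch (not the paper's code): formatting wearable sensor
# readings as text so an LLM can make a few-shot health inference.
# The task, exemplar values, and prompt wording are assumptions.

def format_example(heart_rates: list[int], label: str | None = None) -> str:
    """Render a beats-per-minute series as a prompt block, optionally labeled."""
    series = " ".join(str(bpm) for bpm in heart_rates)
    answer = label if label is not None else ""
    return f"Heart rate (bpm): {series}\nActivity: {answer}"

def build_few_shot_prompt(exemplars, query) -> str:
    """Concatenate labeled exemplars followed by the unlabeled query."""
    shots = "\n\n".join(format_example(hr, lab) for hr, lab in exemplars)
    return f"{shots}\n\n{format_example(query)}"

if __name__ == "__main__":
    exemplars = [
        ([62, 64, 63, 61, 65], "resting"),
        ([142, 150, 148, 155, 151], "running"),
    ]
    prompt = build_few_shot_prompt(exemplars, [96, 101, 99, 104, 98])
    print(prompt)  # pass this string to any LLM completion endpoint
```

The same pattern extends to the paper's other tasks by swapping in different sensor streams (e.g., step counts or cardiac waveform features) and target labels.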
Keywords
large language models, learners, health, few-shot