Reward Foraging Task and Model-Based Analysis Reveal How Fruit Flies Learn Value of Available Options

PLOS ONE (2020)

Abstract
Foraging animals have to evaluate, compare and select food patches in order to increase their fitness. Understanding what drives foraging decisions requires careful manipulation of the value of alternative options while monitoring animals' choices. Value-based decision-making tasks, in combination with formal learning models, have provided both an experimental and a theoretical framework for studying foraging decisions in laboratory settings. While these approaches have been used successfully in the past to understand what drives choices in mammals, very little work has been done on fruit flies, despite the fact that fruit flies have served as a model organism for many complex behavioural paradigms. To fill this gap we developed a single-animal, trial-based decision-making task in which freely walking flies experienced optogenetic stimulation of sugar-receptor neurons. We controlled the value of the available options by manipulating the probabilities of optogenetic stimulation. We show that flies integrate the reward history of chosen options and forget the value of unchosen options. We further find that flies assign higher values to rewards experienced early in the behavioural session, consistent with formal reinforcement learning models. Finally, we show that probabilistic rewards affect the walking trajectories of flies, suggesting that accumulated value controls the navigation vector of flies in a graded fashion. These findings establish the fruit fly as a model organism for exploring the genetic and circuit basis of reward foraging decisions.
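To illustrate the kind of learning rule the abstract describes (a value update that integrates the reward history of the chosen option while the values of unchosen options decay), the sketch below implements a generic delta-rule learner with forgetting and a softmax choice rule. This is not the paper's actual model; the function names, the parameters alpha, kappa and beta, and the two-option reward probabilities are all assumptions made for illustration.

```python
import numpy as np

def update_values(values, choice, reward, alpha=0.3, kappa=0.2):
    """One trial of a simple value-learning rule with forgetting.

    Illustrative sketch only: alpha (learning rate) and kappa
    (forgetting rate) are assumed parameters, not from the paper.
    """
    values = values.copy()
    # Chosen option: integrate reward history via a delta rule
    values[choice] += alpha * (reward - values[choice])
    # Unchosen options: value decays toward zero ("forgetting")
    for opt in range(len(values)):
        if opt != choice:
            values[opt] *= (1.0 - kappa)
    return values

def choice_probabilities(values, beta=3.0):
    # Softmax mapping from learned values to choice probabilities
    exp_v = np.exp(beta * (values - values.max()))
    return exp_v / exp_v.sum()

# Example session: two options with assumed reward probabilities
rng = np.random.default_rng(0)
p_reward = np.array([0.8, 0.2])
values = np.zeros(2)
for trial in range(100):
    probs = choice_probabilities(values)
    choice = rng.choice(2, p=probs)
    reward = float(rng.random() < p_reward[choice])
    values = update_values(values, choice, reward)
print("learned values:", values)
```

In model-based analyses of this kind, such a learner is typically fit to each animal's trial-by-trial choices by maximum likelihood, and the fitted parameters are then compared across conditions; the specific fitting procedure used in the paper is not described in the abstract.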