Corrective feedback guides human perceptual decision-making by informing about the world state rather than rewarding its choice
PLOS Biology (2023)
### Abstract
Corrective feedback received on perceptual decisions is crucial for adjusting decision-making strategies to improve future choices. However, its complex interaction with other decision components, such as previous stimuli and choices, challenges a principled account of how it shapes subsequent decisions. One popular approach, based on animal behavior and extended to human perceptual decision-making, employs ‘reinforcement learning,’ a principle proven successful in reward-based decision-making. The core idea behind this approach is that decision-makers, although engaged in a perceptual task, treat corrective feedback as rewards from which they learn choice values. Here, we explore an alternative idea, which is that humans consider corrective feedback on perceptual decisions as evidence of the actual state of the world rather than as rewards for their choices. By implementing these ‘feedback-as-reward’ and ‘feedback-as-evidence’ hypotheses on a shared learning platform, we show that the latter outperforms the former in explaining how corrective feedback adjusts the decision-making strategy along with past stimuli and choices. Our work suggests that humans learn about what has happened in their environment rather than the values of their own choices through corrective feedback during perceptual decision-making.
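To make the contrast concrete, the two hypotheses can be sketched as toy update rules (a hypothetical minimal parameterization for illustration only, not the paper's actual BMBU model or its RL comparison model). Under 'feedback-as-reward', feedback updates the value of the chosen option; under 'feedback-as-evidence', feedback reveals the true stimulus category, and the decision boundary shifts accordingly. All function names, the learning rate `lr`, and the margin constant are assumptions.

```python
def feedback_as_reward(q_values, choice, correct, lr=0.1):
    """'Feedback-as-reward': treat corrective feedback as a reward
    and update the chosen option's value with an RL-style delta rule."""
    reward = 1.0 if correct else 0.0
    q = list(q_values)
    q[choice] += lr * (reward - q[choice])  # only the chosen value changes
    return q


def feedback_as_evidence(boundary, stimulus, choice, correct, lr=0.1):
    """'Feedback-as-evidence': infer the true world state from feedback
    and shift the decision boundary toward classifying it correctly.
    Decision rule assumed: choose 1 ('large') iff stimulus > boundary."""
    # Feedback plus the observer's own choice reveal the true category.
    true_label = choice if correct else 1 - choice  # 0 = 'small', 1 = 'large'
    # Nudge the boundary so this stimulus lands on the correct side:
    # above the stimulus if it was 'small', below it if 'large'.
    # (0.5 is a hypothetical margin, a crude stand-in for Bayesian updating.)
    target = stimulus + (0.5 if true_label == 0 else -0.5)
    return boundary + lr * (target - boundary)


# Usage: after a correct 'large' (choice=1) response to stimulus 0.3,
# the reward rule raises the value of choice 1, while the evidence rule
# lowers the boundary so 0.3 is more readily classified as 'large'.
q = feedback_as_reward([0.5, 0.5], choice=1, correct=True)
b = feedback_as_evidence(boundary=0.0, stimulus=0.3, choice=1, correct=True)
```

The key difference the abstract highlights is visible here: the reward rule learns properties of the observer's own choices, while the evidence rule learns a property of the environment (where the category boundary lies).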
### Competing Interest Statement
The authors have declared no competing interest.
### Abbreviations

* PDM: perceptual decision-making
* VDM: value-based decision-making
* RL: reinforcement learning
* BDT: Bayesian decision theory
* toi: trial of interest
* PSE: point of subjective equality
* BMBU: Bayesian model of boundary updating
* AICc: Akaike information criterion corrected for sample size