Incremental Learning of Goal-Directed Actions in a Dynamic Environment by a Robot Using Active Inference

Entropy (2023)

Abstract
This study investigated how a physical robot can adapt goal-directed actions in dynamically changing environments, in real time, using an active inference-based approach with incremental learning from human tutoring examples. Using our active inference-based model, while good generalization can be achieved with appropriate parameters, when faced with sudden, large changes in the environment, a human may have to intervene to correct the actions of the robot in order to reach the goal, as a caregiver might guide the hands of a child performing an unfamiliar task. In order for the robot to learn from the human tutor, we propose a new scheme that accomplishes incremental learning from these proprioceptive-exteroceptive experiences combined with mental rehearsal of past experiences. Our experimental results demonstrate that, using only a few tutoring examples, the robot using our model was able to significantly improve its performance on new tasks without catastrophic forgetting of previously learned tasks.
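To make the rehearsal idea concrete, below is a minimal, hypothetical sketch of rehearsal-based incremental learning: a small buffer of past sensorimotor experiences is mixed with new tutoring examples at each update so that fine-tuning on a new task does not overwrite previously learned behavior. All names (RehearsalBuffer, incremental_update, the toy network, and the MSE stand-in for a free-energy objective) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical rehearsal-based incremental learning sketch (not the paper's code).
import random
import torch
import torch.nn as nn


class RehearsalBuffer:
    """Keeps a bounded sample of past (sensory input, motor target) pairs."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self.data = []

    def add(self, x, y):
        if len(self.data) >= self.capacity:
            # Evict a random old item so the buffer stays bounded.
            self.data.pop(random.randrange(len(self.data)))
        self.data.append((x, y))

    def sample(self, k):
        k = min(k, len(self.data))
        return random.sample(self.data, k) if k else []


def incremental_update(model, optimizer, new_batch, buffer, rehearsal_k=8):
    """One gradient step on new tutoring data mixed with rehearsed past data."""
    mixed = list(new_batch) + buffer.sample(rehearsal_k)
    xs = torch.stack([x for x, _ in mixed])
    ys = torch.stack([y for _, y in mixed])
    optimizer.zero_grad()
    # MSE here is only a stand-in for the model's actual (free-energy) objective.
    loss = nn.functional.mse_loss(model(xs), ys)
    loss.backward()
    optimizer.step()
    # New experiences become eligible for future rehearsal.
    for x, y in new_batch:
        buffer.add(x, y)
    return loss.item()


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    buf = RehearsalBuffer()
    # A few synthetic "tutoring" pairs: 4-D sensory input -> 2-D motor target.
    tutoring = [(torch.randn(4), torch.randn(2)) for _ in range(4)]
    for step in range(10):
        print(step, incremental_update(model, opt, tutoring, buf))
```

In this sketch the only mechanism preventing catastrophic forgetting is the replay of buffered past experiences alongside the few new tutoring examples, which mirrors the abstract's description of combining new proprioceptive-exteroceptive experiences with mental rehearsal of past ones.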
Key words
incremental learning, free energy principle, active inference, goal-directed action planning