BAKU: An Efficient Transformer for Multi-Task Policy Learning
arXiv (2024)
Abstract
Training generalist agents capable of solving diverse tasks is challenging,
often requiring large datasets of expert demonstrations. This is particularly
problematic in robotics, where each data point requires physical execution of
actions in the real world. Thus, there is a pressing need for architectures
that can effectively leverage the available training data. In this work, we
present BAKU, a simple transformer architecture that enables efficient learning
of multi-task robot policies. BAKU builds upon recent advancements in offline
imitation learning and meticulously combines observation trunks, action
chunking, multi-sensory observations, and action heads to substantially improve
upon prior work. Our experiments on 129 simulated tasks across LIBERO, the
Meta-World suite, and the DeepMind Control suite exhibit an overall 18%
absolute improvement over RT-1 and MT-ACT, with a 36% improvement on the
LIBERO benchmark. On 30 real-world manipulation tasks, given an average of
just 17 demonstrations per task, BAKU achieves a 91% success rate. Videos of
the robot are best viewed at https://baku-robot.github.io/.
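The abstract names the key ingredients (an observation trunk fusing multi-sensory inputs, action chunking, and a dedicated action head) without specifying their implementation. The following NumPy sketch illustrates how these pieces typically fit together in a chunked multi-task policy; all dimensions, weight matrices, and the mean-pooling trunk are illustrative assumptions, not BAKU's actual architecture (which uses a transformer trunk).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- illustrative only, not BAKU's actual sizes.
D = 64        # shared embedding width
CHUNK = 4     # action-chunk length: the head predicts 4 future actions at once
ACT_DIM = 7   # e.g. a 7-DoF manipulator

# Per-modality encoders project each sensory stream into the shared width.
W_img = rng.standard_normal((512, D)) * 0.02   # stand-in for visual features
W_prop = rng.standard_normal((9, D)) * 0.02    # proprioception
W_task = rng.standard_normal((384, D)) * 0.02  # task embedding (e.g. language)

def trunk(img_feat, prop, task):
    """Observation trunk: fuse modality tokens into one feature vector.

    Here the fusion is simple mean-pooling; in a transformer policy these
    tokens would instead attend to one another.
    """
    tokens = np.stack([img_feat @ W_img, prop @ W_prop, task @ W_task])
    return tokens.mean(axis=0)

# Action head: maps the fused feature to a whole chunk of future actions,
# so one forward pass covers CHUNK control steps (action chunking).
W_head = rng.standard_normal((D, CHUNK * ACT_DIM)) * 0.02

def policy(img_feat, prop, task):
    z = trunk(img_feat, prop, task)
    return (z @ W_head).reshape(CHUNK, ACT_DIM)

actions = policy(rng.standard_normal(512),
                 rng.standard_normal(9),
                 rng.standard_normal(384))
print(actions.shape)  # one forward pass yields a (CHUNK, ACT_DIM) block
```

At deployment, the agent would execute the predicted chunk (or the first few steps of it) before querying the policy again, which reduces the number of forward passes and smooths the resulting behavior.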