CITR: A Coordinate-Invariant Task Representation for Robotic Manipulation

Peter So, Rafael Ignacio Cabral Muchacho, Robin Jeanne Kirschner, Abdalla Swikir, Luis Figueredo, Fares Abu-Dakka, Sami Haddadin

ICRA 2024 (2024)

Abstract
The basis for robotics skill learning is an adequate representation of manipulation tasks based on their physical properties. As manipulation tasks are inherently invariant to the choice of reference frame, an ideal task representation would also exhibit this property. Nevertheless, most robotic learning approaches use unprocessed, coordinate-dependent robot state data for learning new skills, thus inducing challenges regarding the interpretability and transferability of the learned models. In this paper, we propose a transformation from spatial measurements to a coordinate-invariant feature space, based on the pairwise inner product of the input measurements. We describe and mathematically deduce the concept, establish the task fingerprints as an intuitive image-based representation, experimentally collect task fingerprints, and demonstrate the usage of the representation for task classification. This representation motivates further research on data-efficient and transferable learning methods for online manipulation task classification and task-level perception.
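The following is a minimal sketch of the pairwise inner-product idea described in the abstract: stacking spatial measurements and taking all pairwise inner products yields a Gram matrix that is unchanged by rotations of the reference frame. The function name, array shapes, and toy data are illustrative assumptions, not the authors' implementation, and this sketch only covers rotational invariance (how the paper treats translations of the origin is not shown here).

```python
# Sketch of a coordinate-invariant feature from pairwise inner products.
# Assumed interface and data; not the paper's actual implementation.
import numpy as np

def pairwise_inner_products(measurements: np.ndarray) -> np.ndarray:
    """Map a sequence of spatial measurements (N x 3) to the N x N matrix of
    pairwise inner products (a Gram matrix). Inner products are preserved under
    rotations of the reference frame, so the result is rotation-invariant."""
    return measurements @ measurements.T

# Toy check: the feature of a trajectory equals that of the same trajectory
# expressed in a rotated frame.
rng = np.random.default_rng(0)
trajectory = rng.normal(size=(5, 3))          # five 3-D measurements
theta = np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
rotated = trajectory @ rotation.T             # same motion, different frame
assert np.allclose(pairwise_inner_products(trajectory),
                   pairwise_inner_products(rotated))
```

Visualizing such a matrix as an image is one plausible reading of the "task fingerprint" representation the abstract refers to.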
Keywords
Representation Learning, Learning Categories and Concepts, Dexterous Manipulation