ShapeLLM: Universal 3D Object Understanding for Embodied Interaction
CoRR (2024)
Abstract
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model
(LLM) designed for embodied interaction, exploring universal 3D object
understanding with 3D point clouds and language. ShapeLLM is built upon an
improved 3D encoder, ReCon++, which extends ReCon and benefits from multi-view
image distillation for enhanced geometry understanding. Using ReCon++ as the
3D point cloud input encoder for the LLM, ShapeLLM is trained on constructed
instruction-following data and tested on our newly human-curated evaluation
benchmark, 3D MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance
in 3D geometry understanding and language-unified 3D interaction tasks, such as
embodied visual grounding.
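
The abstract describes a pipeline in which a 3D point cloud encoder (ReCon++) produces tokens that are projected into an LLM's input space alongside text. The sketch below illustrates this encoder-to-LLM wiring only; the module structure, dimensions, pooling, and linear projector are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """Stand-in for ReCon++: maps a point cloud to a token sequence.
    The real encoder is transformer-based and trained with multi-view
    image distillation; this is a minimal placeholder."""
    def __init__(self, num_tokens=128, dim=768):
        super().__init__()
        self.num_tokens = num_tokens
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, points):                 # points: (B, N, 3)
        feats = self.mlp(points)               # (B, N, dim)
        B, N, D = feats.shape
        # Naive average pooling into a fixed token count (assumed here).
        feats = feats[:, : self.num_tokens * (N // self.num_tokens)]
        return feats.view(B, self.num_tokens, -1, D).mean(dim=2)  # (B, T, D)

class ShapeLLMSketch(nn.Module):
    """Hypothetical: project 3D tokens into the LLM embedding space and
    prepend them to the text embeddings, as in common multimodal LLMs."""
    def __init__(self, llm_dim=4096, enc_dim=768):
        super().__init__()
        self.encoder = PointCloudEncoder(dim=enc_dim)
        self.projector = nn.Linear(enc_dim, llm_dim)  # assumed linear projector

    def forward(self, points, text_embeds):    # text_embeds: (B, L, llm_dim)
        pc_tokens = self.projector(self.encoder(points))   # (B, T, llm_dim)
        return torch.cat([pc_tokens, text_embeds], dim=1)  # combined LLM input

points = torch.randn(2, 1024, 3)        # batch of 2 point clouds
text = torch.randn(2, 16, 4096)         # placeholder text embeddings
out = ShapeLLMSketch()(points, text)
print(out.shape)                        # torch.Size([2, 144, 4096])
```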