Efficient End-to-End Visual Document Understanding with Rationale Distillation
arXiv (2023)
Abstract
Understanding visually situated language requires interpreting complex
layouts of textual and visual elements. Pre-processing tools, such as optical
character recognition (OCR), can map document image inputs to textual tokens,
then large language models (LLMs) can reason over text. However, such methods
have high computational and engineering complexity. Can small pretrained
image-to-text models accurately understand visual documents through similar
recognition and reasoning steps instead? We propose Rationale Distillation
(RD), which incorporates the outputs of OCR tools, LLMs, and larger multimodal
models as intermediate "rationales", and trains a small student model to
predict both rationales and answers. On three visual document understanding
benchmarks representing infographics, scanned documents, and figures, our
Pix2Struct (282M parameters) student model finetuned with RD outperforms the
base model by 4-5% absolute accuracy with only 1% higher computational cost.
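
The abstract describes the core training recipe: teacher outputs (OCR text, LLM or larger multimodal model responses) become intermediate "rationales", and the student is trained to decode both the rationale and the answer. Below is a minimal sketch of such a training step, assuming a Pix2Struct-style seq2seq image-to-text model with a Hugging Face-like interface; the separator marker and all names (`rd_training_step`, `flattened_patches`, etc.) are illustrative assumptions, not the paper's actual API.

```python
# Sketch of a Rationale Distillation (RD) training step: the teacher-provided
# rationale (e.g., OCR text or an LLM output) is concatenated with the answer
# into a single decoding target, so one cross-entropy loss supervises both.
# All names here are hypothetical, chosen for illustration.

SEP = " <ans> "  # hypothetical marker separating rationale from answer

def rd_training_step(model, tokenizer, flattened_patches, rationale, answer):
    """One RD step: supervise the student on 'rationale + answer' jointly."""
    target = rationale + SEP + answer
    labels = tokenizer(target, return_tensors="pt").input_ids
    # The student learns to decode the rationale first, then the answer,
    # so no external OCR tool or LLM is needed in the inference pipeline.
    outputs = model(flattened_patches=flattened_patches, labels=labels)
    return outputs.loss
```

At inference time, the student generates the rationale and the answer in a single pass, which is what lets a small end-to-end model replace the separate recognition (OCR) and reasoning (LLM) stages.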