Integrating Explanations in Learning LTL Specifications from Demonstrations
arXiv (2024)

Abstract
This paper investigates whether recent advances in Large Language Models
(LLMs) can assist in translating human explanations into a format that can
robustly support learning Linear Temporal Logic (LTL) from demonstrations. Both
LLMs and optimization-based methods can extract LTL specifications from
demonstrations; however, they have distinct limitations. LLMs can quickly
generate solutions and incorporate human explanations, but their lack of
consistency and reliability hampers their applicability in safety-critical
domains. On the other hand, optimization-based methods do provide formal
guarantees but cannot process natural language explanations and face
scalability challenges. We present a principled approach to combining LLMs and
optimization-based methods to faithfully translate human explanations and
demonstrations into LTL specifications. We have implemented a tool called
Janaka based on our approach. Through several case studies, our experiments
demonstrate the effectiveness of combining explanations with demonstrations
when learning LTL specifications.
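To make the target concrete, the sketch below is an illustrative, minimal evaluator for a fragment of LTL over finite demonstration traces; it is not the paper's tool, and the example formula and trace are invented for illustration. A learned specification such as "eventually reach the goal, and never enter an unsafe state" can be checked against a demonstration this way.

```python
def holds(formula, trace, i=0):
    """Evaluate an LTL formula on the suffix of `trace` starting at index i.

    Atoms are strings; compound formulas are tuples:
    ('not', f), ('and', f, g), ('X', f) next, ('F', f) eventually,
    ('G', f) globally, ('U', f, g) f-until-g.
    Each trace step is the set of atomic propositions true at that step.
    """
    if isinstance(formula, str):                      # atomic proposition
        return i < len(trace) and formula in trace[i]
    op = formula[0]
    if op == 'not':
        return not holds(formula[1], trace, i)
    if op == 'and':
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == 'X':                                     # next step
        return holds(formula[1], trace, i + 1)
    if op == 'F':                                     # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':                                     # globally (on the finite trace)
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'U':                                     # f holds until g holds
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# A hypothetical demonstration and a candidate learned specification:
demo = [{'start'}, {'moving'}, {'moving'}, {'goal'}]
spec = ('and', ('F', 'goal'), ('G', ('not', 'unsafe')))
```

Here `holds(spec, demo)` is true: the demonstration eventually reaches `goal` and never satisfies `unsafe`. A learner in this setting searches for a formula consistent with all demonstrations, which an optimization-based method can certify and an LLM can only propose.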