Generating Natural Language From Logic Expressions With Structural Representation

IEEE/ACM Trans. Audio Speech Lang. Process. (2023)

Abstract
Incorporating logical reasoning into deep neural networks (DNNs) is an important challenge in machine learning. In this article, we study the problem of converting logical expressions into natural language. In particular, given a sequential logic expression, the goal is to generate its corresponding natural sentence. Since the information in a logic expression often has a hierarchical structure, a sequence-to-sequence baseline struggles to capture the full dependencies between words and hence often generates incorrect sentences. To alleviate this problem, we propose a model to convert Structural Logic Expressions into Natural Language (SLEtoNL). SLEtoNL converts sequential logic expressions into a structural representation and leverages structural encoders to capture the dependencies between nodes. Quantitative and qualitative analyses demonstrate that our proposed method outperforms the seq2seq model, which is based on the sequential representation, and outperforms strong pretrained language models (e.g., T5, BART, GPT-3) by a large margin (28.6 in BLEU3) in out-of-distribution evaluation.
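To make the core preprocessing step concrete, the sketch below shows one way a sequential (prefix-notation) logic expression might be parsed into a tree, the kind of structural representation a Tree-LSTM or GCN encoder could consume. This is an illustrative assumption, not the paper's actual code: the operator set, arity table, and `Node`/`parse_prefix` names are hypothetical.

```python
# Illustrative sketch (assumed, not from the paper): turn a sequential
# prefix-notation logic expression into a tree for a structural encoder.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

# Assumed operator arities; atoms (e.g., "p", "q") have arity 0.
ARITY = {"and": 2, "or": 2, "implies": 2, "not": 1}

def parse_prefix(tokens):
    """Recursively build a tree from a prefix token sequence."""
    def helper(i):
        node = Node(tokens[i])
        i += 1
        for _ in range(ARITY.get(node.label, 0)):
            child, i = helper(i)
            node.children.append(child)
        return node, i
    root, _ = helper(0)
    return root

# "implies(and(p, q), not(r))" in sequential prefix form:
tree = parse_prefix(["implies", "and", "p", "q", "not", "r"])
# tree.label == "implies"; its left subtree is (and p q), right is (not r)
```

A structural encoder would then compute node embeddings bottom-up over this tree (Tree-LSTM) or message-pass over its edges (GCN), rather than reading the flat token sequence left to right.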
Keywords
Natural languages, Cognition, Analytical models, Task analysis, Semantics, Computational modeling, Training, Natural language generation, logic expressions, Tree-LSTM, graph convolutional networks