Compositional Program Generation for Few-Shot Systematic Generalization
CoRR (2023)
Abstract
Compositional generalization is a key ability of humans that enables us to
learn new concepts from only a handful of examples. Neural machine learning
models, including the now ubiquitous Transformers, struggle to generalize in
this way, and typically require thousands of examples of a concept during
training in order to generalize meaningfully. This difference in ability
between humans and artificial neural architectures motivates this study on a
neuro-symbolic architecture called the Compositional Program Generator (CPG).
CPG has three key features: modularity, composition, and
abstraction, in the form of grammar rules, that enable it to
generalize both systematically to new concepts in a few-shot manner and
productively by length on various sequence-to-sequence language tasks. For each
input, CPG uses a grammar of the input language and a parser to generate a
parse in which each grammar rule is assigned its own unique semantic module, a
probabilistic copy or substitution program. Instances with the same parse are
always processed with the same composed modules, while those with different
parses may be processed with different modules. CPG learns parameters for the
modules and is able to learn the semantics for new rules and types
incrementally, without forgetting or retraining on rules it has already seen. It
achieves perfect generalization on both the SCAN and COGS benchmarks using just
14 examples for SCAN and 22 examples for COGS – state-of-the-art accuracy with
a 1000x improvement in sample efficiency.
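The abstract's description of rule-to-module composition can be illustrated with a minimal sketch (assumed, not the authors' implementation): each grammar rule is assigned its own semantic module, and an input's parse determines how module outputs are composed, so inputs with the same parse are processed by the same composed program. All names and modules below are hypothetical.

# Minimal sketch of the rule-to-module composition idea (hypothetical code).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ParseNode:
    rule: str                                   # grammar rule that produced this node
    token: str = ""                             # terminal token, if a leaf
    children: List["ParseNode"] = field(default_factory=list)

# A module maps its children's output sequences to an output sequence.
# In CPG these are learned probabilistic copy/substitution programs;
# here they are fixed functions for illustration only.
Module = Callable[[List[List[str]]], List[str]]

def evaluate(node: ParseNode, modules: Dict[str, Module]) -> List[str]:
    """Apply the module assigned to each rule, bottom-up over the parse."""
    if not node.children:                       # leaf: wrap the terminal token
        return modules[node.rule]([[node.token]])
    child_outputs = [evaluate(c, modules) for c in node.children]
    return modules[node.rule](child_outputs)    # compose child results

# Toy modules for a SCAN-like fragment: "jump twice" -> JUMP JUMP
modules: Dict[str, Module] = {
    "ACTION": lambda xs: [xs[0][0].upper()],    # substitution: word -> action symbol
    "TWICE": lambda xs: xs[0] + xs[0],          # repetition of the child's output
}

parse = ParseNode(rule="TWICE",
                  children=[ParseNode(rule="ACTION", token="jump")])
print(evaluate(parse, modules))                 # ['JUMP', 'JUMP']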
Keywords
generalization, program, generation