Evaluating the Cognitively-Related Productivity of a Universal Dependency Parser

2021 IEEE 20th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

Abstract
A key goal of cognitive computing is to correctly model human language. Recently, much has been made of the ability of deep neural networks trained on huge datasets to parse sentences with high accuracy. But do these systems truly incorporate human knowledge of language? In this paper we apply a standard linguistic methodology, transformational analysis, to determine whether this claim holds. On this view, if a deep-net parser operates properly on one kind of sentence, it should also work correctly on its transformed counterpart. Applying this test to a standard set of statement-question transformed sentence pairs, we find that a state-of-the-art neural network system does not replicate human behavior and makes numerous errors. We suggest that this kind of test is more revealing than aggregate accuracy scores of what deep neural networks can and cannot do with respect to human language.
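The probe described in the abstract can be made concrete. Below is a minimal sketch, assuming the Stanza neural UD parser as a representative state-of-the-art system; the sentence pair, the triple-based comparison, and the helper `dep_triples` are illustrative assumptions for exposition, not the paper's actual test set or evaluation code.

```python
# Sketch of a transformational-analysis probe: parse a declarative sentence
# and its yes/no-question counterpart with a neural UD parser, then compare
# the dependency structures. If the parser generalizes across the
# transformation, the core relations (e.g., "hungry" predicated of "dog")
# should survive fronting of the matrix auxiliary.
# Assumes: pip install stanza; stanza.download("en") has been run once.
import stanza

nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")

def dep_triples(sentence):
    """Return (head_text, deprel, dependent_text) triples for one sentence.

    Stanza word heads are 1-indexed; head == 0 marks the root.
    """
    words = sentence.words
    return {
        (words[w.head - 1].text.lower() if w.head > 0 else "ROOT",
         w.deprel,
         w.text.lower())
        for w in words
    }

# Illustrative statement-question pair (a classic structure-dependence case:
# the *matrix* auxiliary fronts, not the one inside the relative clause).
statement = "The dog that is in the corner is hungry."
question = "Is the dog that is in the corner hungry?"

s_triples = dep_triples(nlp(statement).sentences[0])
q_triples = dep_triples(nlp(question).sentences[0])

# Relations present in only one parse flag where the parser's analysis of
# the transformed sentence diverges from its analysis of the original.
print("Only in statement parse:", sorted(s_triples - q_triples))
print("Only in question parse: ", sorted(q_triples - s_triples))
```

Under this setup, a parser that handles the transformation correctly should differ only in trivia such as the final punctuation relation, while systematic divergence in the core relations would exemplify the kind of error the paper reports.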
Keywords
Computational Linguistics, Natural Language Processing, Universal Dependencies, Parsing, Productivity, Generalization, Structure Dependence of Rules, Transformational Analysis