Distributional learning of recursive structures.

CogSci (2021)

Abstract
Languages differ in the depth, structure, and syntactic domains of their recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, English allows free, unbounded embedding of the prenominal genitive -s, whereas the postnominal genitive of is largely restricted to one level of embedding and to a limited set of items. Therefore, while the ability for recursion is considered a crucial part of the language faculty, speakers need to learn from experience which specific structures allow free embedding and which do not. One effort to account for the mechanism underlying this learning process, the distributional learning proposal, suggests that the recursion of a structure (e.g. X1’s-X2) is licensed if the X1 position and the X2 position are productively substitutable in the input. A series of corpus studies has confirmed the availability of such distributional cues in child-directed speech. The present study further tests the distributional learning proposal with an artificial language learning experiment. We found that, as predicted, participants exposed to productive input were more likely to accept unattested strings at both one- and two-embedding levels than participants exposed to unproductive input. Our results therefore suggest that speakers can indeed use distributional information at one level of embedding to learn whether a structure is freely recursive.
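The licensing condition described in the abstract can be made concrete with a small, purely illustrative sketch (not taken from the paper): given toy "X1's X2" pairs, the code below checks whether the two slots share enough fillers to count as productively substitutable, and hence whether free embedding would be predicted. The function names, toy data, and overlap threshold are all assumptions introduced here for exposition.

```python
# Hypothetical sketch of the distributional cue described in the abstract:
# a structure like "X1's X2" is treated as freely recursive when the fillers
# observed in the X1 slot and in the X2 slot overlap productively.
# All thresholds and corpus data below are illustrative assumptions.

from typing import List, Set, Tuple


def slot_fillers(pairs: List[Tuple[str, str]]) -> Tuple[Set[str], Set[str]]:
    """Collect the words attested in the X1 and X2 positions."""
    x1 = {a for a, _ in pairs}
    x2 = {b for _, b in pairs}
    return x1, x2


def licenses_recursion(pairs: List[Tuple[str, str]], threshold: float = 0.5) -> bool:
    """Return True if the two slots are productively substitutable,
    i.e. a large enough share of fillers appears in both positions."""
    x1, x2 = slot_fillers(pairs)
    overlap = x1 & x2
    smaller = min(len(x1), len(x2))
    return smaller > 0 and len(overlap) / smaller >= threshold


# Toy input resembling the productive prenominal genitive "X1's X2"
productive = [("mom", "friend"), ("friend", "dog"), ("dog", "toy"), ("toy", "owner")]
# Toy input resembling a restricted construction with little slot overlap
restricted = [("cup", "tea"), ("piece", "cake"), ("bit", "luck")]

print(licenses_recursion(productive))  # True  -> free embedding predicted
print(licenses_recursion(restricted))  # False -> embedding predicted to stay restricted
```

On this sketch's assumptions, a learner who hears many items alternating between the two positions generalizes to unattested one- and two-level embeddings, matching the pattern reported for the productive-input condition in the experiment.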