Inclusivity-Exclusivity Inference Using Recurrent Neural Networks

Semantic Scholar (2020)

Abstract
Though “or” in formal semantics and logic is often understood as a logical operator with a well-defined truth table, the exclusivity vs. inclusivity of “or” in natural language fluctuates from sentence to sentence and depends on subtle linguistic cues. This paper explores the ability of neural networks to predict the behavior of “or” across various sentence structures. We assess the performance of a biLSTM-based sentence encoder trained on an English dataset of human inclusivity-exclusivity inference ratings, experimenting with three different pre-trained word embedding models, with and without self-attention. The best-performing model uses BERT embeddings without self-attention, exceeding expectations by predicting human inference ratings with r = 0.35.
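The abstract does not give implementation details, but the described architecture (a biLSTM sentence encoder over pre-trained embeddings, regressing onto a scalar inclusivity-exclusivity rating) can be sketched as follows. This is a hypothetical reconstruction in PyTorch, not the authors' code; the class name, hidden size, and mean-pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class OrInclusivityRegressor(nn.Module):
    """Hypothetical sketch: biLSTM over pre-trained token embeddings
    (e.g. frozen BERT outputs), mean-pooled and projected to a scalar
    inclusivity-exclusivity rating per sentence."""

    def __init__(self, embed_dim=768, hidden_dim=128):
        super().__init__()
        # Bidirectional LSTM reads the embedded sentence in both directions.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Linear head maps the pooled 2*hidden_dim state to one rating.
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, embeddings):
        # embeddings: (batch, seq_len, embed_dim)
        out, _ = self.bilstm(embeddings)
        pooled = out.mean(dim=1)              # mean-pool over tokens
        return self.head(pooled).squeeze(-1)  # one scalar per sentence

model = OrInclusivityRegressor()
x = torch.randn(4, 12, 768)  # 4 sentences, 12 tokens, BERT-sized vectors
ratings = model(x)
```

Trained with a regression loss (e.g. MSE) against the human ratings, a model like this could then be evaluated by the Pearson correlation r between its predictions and held-out human judgments, as reported in the abstract.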