
Two-Layer Context-Enhanced Representation for Better Chinese Discourse Parsing

Natural Language Processing and Chinese Computing, NLPCC 2022, Part I (2022)

Abstract
As a fundamental task in Natural Language Processing (NLP), discourse parsing has attracted increasing attention over the past decade. Previous studies focus mainly on tree construction, while for EDU representation most researchers adopt a simple flat word-level representation. Structural information within an EDU and relationships between EDUs, especially non-adjacent EDUs, are largely ignored. In this paper, we propose a two-layer enhanced representation approach to better model the context of EDUs. At the bottom layer (i.e., intra-EDU), we use a Graph Convolutional Network (GCN) to continuously update word representations along existing dependency paths from the root to the leaves. At the upper layer (i.e., inter-EDU), we use Star-Transformer to connect non-adjacent EDUs through its relay node and thus incorporate global information. Experimental results on the CDTB corpus show that the proposed two-layer context-enhanced representation contributes substantially to Chinese discourse parsing in a neural architecture.
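The bottom (intra-EDU) layer described in the abstract amounts to GCN updates of word representations over a dependency graph. A minimal NumPy sketch of one such update follows; the toy dependency edges, embedding size, normalization scheme, and ReLU activation are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN update: aggregate each word's neighbors along
    dependency edges (with self-loops), then apply a linear
    transform and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize by degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

# Toy EDU of 4 words; hypothetical dependency edges root->1, 1->2, 1->3,
# treated as undirected for message passing.
edges = [(0, 1), (1, 2), (1, 3)]
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # initial word embeddings (4 words, dim 8)
W = rng.normal(size=(8, 8))   # layer weight matrix

H1 = gcn_layer(H, A, W)       # context-enhanced word representations
print(H1.shape)               # (4, 8)
```

Stacking several such layers lets information flow along longer dependency paths from the root toward the leaves, which is the role the intra-EDU layer plays before the inter-EDU Star-Transformer aggregates EDU-level vectors.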
Key words
Discourse parsing, GCN, Star-Transformer