
A Weighted Flat Lattice Transformer-based Knowledge Extraction Architecture for Chinese Named Entity Recognition.

Hengwei Zhang, Yuejia Wu, Jian-Tao Zhou

International Conference on Computer Supported Cooperative Work in Design (2024)

Abstract
Named Entity Recognition (NER) is a core component of Knowledge Extraction (KE), which transforms data into knowledge representations. However, Chinese NER lacks clear word boundaries, which limits the effectiveness of KE. Although the flat lattice Transformer (FLAT) framework, which converts the lattice structure into a flat structure consisting of a set of spans, can effectively alleviate this problem and achieve strong results, it remains insensitive to entity importance weights and suffers from insufficient feature learning. This paper proposes a weighted flat lattice Transformer architecture for Chinese NER, named WFLAT. WFLAT first adds a weight matrix to the self-attention calculation, which yields a finer-grained partitioning of entities and improves performance, and then adopts a multi-layer Transformer encoder in which each layer uses a multi-head self-attention mechanism. Extensive experimental results on benchmarks demonstrate that our proposed KE model achieves state-of-the-art performance on the Chinese NER task.
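The abstract does not specify exactly where the weight matrix enters the attention computation. The following is a minimal sketch, assuming the entity-importance weights scale the raw attention scores element-wise before the softmax; the function name, the placement of W, and the toy dimensions are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def weighted_self_attention(Q, K, V, W):
    """Scaled dot-product attention with an extra weight matrix W.

    Q, K, V: (seq_len, d) query/key/value matrices.
    W: (seq_len, seq_len) entity-importance weights; applying them to
       the attention scores is an assumed placement, not the paper's.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # standard scaled dot-product scores
    scores = scores * W            # weight injection (assumed form)
    return softmax(scores) @ V

# Toy usage: 4 tokens, model dim 8. A uniform W reduces the sketch
# to vanilla self-attention.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
W = np.ones((4, 4))
out = weighted_self_attention(Q, K, V, W)
print(out.shape)  # (4, 8)
```

In a full model, this weighted attention would sit inside each head of a multi-head layer, stacked across the multi-layer Transformer encoder the abstract describes.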
Key words
Knowledge Graph, Knowledge Extraction, Chinese Named Entity Recognition, Flat Lattice Framework, Transformer Architecture