Code-Switched Language Identification is Harder Than You Think
CoRR (2024)
Abstract
Code-switching (CS) is a very common phenomenon in written and spoken
communication, but one that is handled poorly by many natural language
processing applications. Motivated by the application of building CS corpora,
we explore CS language identification (LID). We make the task
more realistic by scaling it to more languages and considering models with
simpler architectures for faster inference. We also reformulate the task as a
sentence-level multi-label tagging problem to make it more tractable. Having
defined the task, we investigate three reasonable models for this task and
define metrics which better reflect desired performance. We present empirical
evidence that no current approach is adequate and finally provide
recommendations for future work in this area.
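The sentence-level multi-label reformulation mentioned above can be illustrated with a toy sketch. Everything here (the label inventory, the example sentence, and the choice of exact-match versus micro-F1 as metrics) is a hypothetical illustration of the general idea, not taken from the paper:

```python
# Hypothetical sketch of sentence-level multi-label LID:
# each sentence is tagged with the SET of languages it contains,
# encoded as a binary vector over a fixed label inventory.

LANGS = ["en", "es", "hi", "tr"]  # toy inventory; a realistic setup scales to many more

def encode(labels, inventory=LANGS):
    """Binarize a set of language labels over the inventory."""
    return [1 if lang in labels else 0 for lang in inventory]

# A monolingual sentence gets one active label; a code-switched one gets several.
gold = encode({"en", "es"})   # e.g. "I want a taco por favor"
pred = encode({"en"})         # a monolingual-biased model misses the "es" part

# Exact-match accuracy is harsh on CS data: one missed label zeroes the sentence.
exact_match = int(gold == pred)

# Per-label (micro) F1 gives partial credit for the labels that were found.
tp = sum(g & p for g, p in zip(gold, pred))
fp = sum(p and not g for g, p in zip(gold, pred))
fn = sum(g and not p for g, p in zip(gold, pred))
f1 = 2 * tp / (2 * tp + fp + fn)

print(exact_match, round(f1, 2))
```

This toy contrast (exact match 0, micro-F1 0.67 for one missed label) is one way to see why metric choice matters when evaluating CS LID.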