Detection of AI-Generated Text Using Large Language Model

2024 International Conference on Emerging Systems and Intelligent Computing (ESIC)

Abstract
A large language model (LLM) is a deep-learning model trained to understand and generate text in a human-like fashion. Owing to the significant advances in LLMs, distinguishing human-written content from artificial intelligence (AI) generated content has become a challenging task. In this work, we leverage machine learning (ML) models to reliably identify whether an essay was authored by a human or by an LLM. Concerns persist that LLMs will replace human tasks, especially in education; at the same time, there is optimism about their potential as tools to enhance writing skills. A particular academic worry is that LLMs facilitate plagiarism, given their extensive training on text and code datasets. Using diverse texts and unknown generative models, we replicate typical scenarios to encourage feature learning across models. In a study involving human subjects, we demonstrate that the annotation scheme offered by GLTR (Giant Language model Test Room) raises the human detection rate of fake text from 74% to 99% without requiring any prior training. GLTR is open source and publicly deployed, and it is already finding widespread use in detecting generated outputs.
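The GLTR annotation scheme rests on a simple statistic: for every token in a passage, compute the rank of that token within a language model's predicted next-token distribution given the preceding context. Machine-generated text tends to draw heavily from the top of the distribution, while human writing uses more low-rank tokens. The sketch below illustrates this idea in Python; it assumes the Hugging Face `transformers` library and the public GPT-2 checkpoint, and the 10/100/1,000 bucket thresholds mirror GLTR's published color scheme rather than anything specific to this paper's pipeline.

```python
# Minimal GLTR-style per-token rank analysis (illustrative sketch, not the
# paper's exact detector). Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[tuple[str, int]]:
    """Return (token, rank) pairs: the rank of each observed token in the
    model's predicted distribution given the preceding context."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.size(1) - 1):
        next_id = ids[0, pos + 1].item()
        # Rank = 1 + number of vocabulary items scored above the actual token.
        rank = int((logits[0, pos] > logits[0, pos, next_id]).sum().item()) + 1
        ranks.append((tokenizer.decode([next_id]), rank))
    return ranks

def bucket_counts(text: str) -> dict[str, int]:
    """GLTR-style histogram; generated text typically skews toward 'top10'."""
    buckets = {"top10": 0, "top100": 0, "top1000": 0, "rest": 0}
    for _, rank in token_ranks(text):
        if rank <= 10:
            buckets["top10"] += 1
        elif rank <= 100:
            buckets["top100"] += 1
        elif rank <= 1000:
            buckets["top1000"] += 1
        else:
            buckets["rest"] += 1
    return buckets

print(bucket_counts("The quick brown fox jumps over the lazy dog."))
```

The resulting bucket fractions can also serve as features for a downstream ML classifier of the kind the abstract describes, e.g. a logistic regression over the share of top-10 versus beyond-top-1000 tokens.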
Key words
LLM, AI, Machine Learning, ChatGPT, text detection