How critically can an AI think? A framework for evaluating the quality of thinking of generative artificial intelligence
arXiv (2024)
Abstract
Generative AI tools such as large language models (LLMs) have created
opportunities for innovative assessment design practices. Given recent
technological developments, educators need to know the limits and capabilities
of generative AI in simulating cognitive skills. Assessing students' critical
thinking skills has been a feature of assessment since time immemorial, but
the demands of digital assessment create unique challenges for equity,
academic integrity and assessment authorship. Educators need a framework for
determining their assessments' vulnerability to generative AI in order to
inform assessment design practices. This paper presents a framework that
explores the capabilities of ChatGPT-4, the current industry-benchmark LLM.
Specifically, it introduces the Mapping of questions, AI vulnerability
testing, Grading, Evaluation (MAGE) framework, which enables educators to
methodically critique their assessments within their own disciplinary
contexts. This critique provides specific and targeted indications of each
question's vulnerability in terms of critical thinking skills, which can then
form the basis of assessment design for their tasks.