Toward LLM-Supported Automated Assessment of Critical Thinking Subskills

Relevance: 8/10 · 1 cited · 2025 paper

This paper develops and evaluates automated methods using large language models (GPT-4, GPT-4-mini, and ModernBERT) to assess critical thinking subskills in student-written argumentative essays, comparing zero-shot prompting, few-shot prompting, and supervised fine-tuning approaches against human coding. The work focuses on measuring higher-order reasoning skills including understanding/analyzing information, evaluating evidence, making inferences, and articulating arguments.

Critical thinking represents a fundamental competency in today's education landscape. Developing critical thinking skills through timely assessment and feedback is crucial; however, there has not been extensive work in the learning analytics community on defining, measuring, and supporting critical thinking. In this paper, we investigate the feasibility of measuring core "subskills" that underlie critical thinking. We ground our work in an authentic task where students operationalize critical thinking…
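
To make the three assessment strategies concrete, the sketch below shows what the zero-shot variant could look like: an LLM is prompted with a short rubric for one critical thinking subskill and asked to score an essay, which is then compared against human coding. This is a minimal, hypothetical illustration; the model name, rubric wording, and prompt are assumptions, not the paper's actual materials or prompts.

```python
# Illustrative sketch only: zero-shot rubric scoring of one critical-thinking
# subskill. Rubric text, subskill labels, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SUBSKILLS = [
    "understanding/analyzing information",
    "evaluating evidence",
    "making inferences",
    "articulating arguments",
]

def score_subskill(essay: str, subskill: str) -> str:
    """Ask the model for a rubric-style rating of one subskill (zero-shot)."""
    prompt = (
        f"Rate the student's argumentative essay on the subskill: {subskill}.\n"
        "Return an integer score from 1 (weak) to 4 (strong) and a one-sentence rationale.\n\n"
        f"Essay:\n{essay}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; swap in whichever model is under evaluation
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep scoring as deterministic as possible for comparison
    )
    return response.choices[0].message.content

# Few-shot prompting would differ only by prepending human-coded example essays
# and scores to the prompt; the supervised fine-tuning condition (e.g., ModernBERT)
# would replace this API call with a classifier trained on the human-coded labels.
```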

Tool Types

Teacher Support Tools: tools that assist teachers with lesson planning, content generation, grading, and analytics.

Tags

automated essay scoring · evaluation · computer-science