Do LLMs Make Mistakes Like Students? Exploring Natural Alignment between Language Models and Human Error Patterns

Research / Other · Relevance: 7/10 · 4 citations · 2025 paper

This paper investigates whether LLMs' generation probabilities and error patterns align with student misconception patterns in multiple-choice questions, using 3,202 MCQs with real student response data across LLaMA and Qwen models (0.5B–72B parameters). The study finds moderate correlations between LLM-assigned option probabilities and student distractor selections, and shows that LLMs tend to select the same incorrect answers that commonly mislead students.

Large Language Models (LLMs) have demonstrated remarkable capabilities in various educational tasks, yet their alignment with human learning patterns, particularly in predicting which incorrect options students are most likely to select in multiple-choice questions (MCQs), remains underexplored. Our work investigates the relationship between LLM generation likelihood and student response distributions in MCQs, with a specific focus on distractor selections. We collect a comprehensive dataset of MCQs with real student response distributions.
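The core analysis described above can be sketched in a few lines: normalize an LLM's per-option scores into a probability distribution and correlate it with the fraction of students choosing each option. This is a minimal illustration, not the paper's implementation; the logits and student frequencies below are invented for demonstration.

```python
import math

def softmax(logits):
    """Convert raw per-option scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical numbers (not from the paper): LLM logits for options A-D
# of one MCQ, and the fraction of students who picked each option.
llm_probs = softmax([2.1, 0.3, 1.5, -0.4])
student_freqs = [0.55, 0.05, 0.30, 0.10]

r = pearson(llm_probs, student_freqs)
```

A high `r` on a question where the correct answer is, say, option A would indicate that the model's second-choice mass falls on the same distractor that most students pick, which is the kind of alignment the paper measures across its 3,202 questions.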

Study Type

Research / Other

Tool Types

Teacher Support Tools Tools that assist teachers — lesson planning, content generation, grading, analytics.

Tags

educational assessment · natural language processing · computer science