Do LLMs Make Mistakes Like Students? Exploring Natural Alignment between Language Models and Human Error Patterns

Relevance: 8/10 · 4 citations · 2025 paper

This paper investigates whether LLMs' generation probabilities and error patterns align with student misconception patterns in multiple-choice questions, using real-world student response data across 3,202 MCQs. The study finds a moderate correlation between LLM generation probabilities and student distractor selections, with LLMs frequently choosing the same incorrect answers that commonly mislead students.

Large Language Models (LLMs) have demonstrated remarkable capabilities in various educational tasks, yet their alignment with human learning patterns, particularly in predicting which incorrect options students are most likely to select in multiple-choice questions (MCQs), remains underexplored. Our work investigates the relationship between LLM generation likelihood and student response distributions in MCQs, with a specific focus on distractor selections. We collect a comprehensive dataset of MCQs paired with real-world student response distributions.
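To make the comparison concrete, below is a minimal sketch of how LLM generation likelihood over answer options could be correlated with student selection rates. The model choice (gpt2), the example question, the option strings, and the student selection rates are all illustrative assumptions, not the paper's actual setup or data.

```python
# Sketch: score each MCQ option by the log-probability an LM assigns to it,
# then correlate those scores with observed student selection rates.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper evaluates other LLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of token log-probabilities the LM assigns to `option` given `question`."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    option_ids = tokenizer(option, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, option_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so shift by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    target = input_ids[:, 1:]
    start = prompt_ids.shape[1] - 1  # first option token in the shifted target
    return log_probs[0, start:, :].gather(
        1, target[0, start:].unsqueeze(1)
    ).sum().item()

# Hypothetical MCQ with made-up student selection rates per option.
question = "Q: What is 1/2 + 1/3?\nA:"
options = [" 5/6", " 2/5", " 1/6", " 2/6"]
student_rates = [0.55, 0.25, 0.12, 0.08]  # illustrative numbers only

llm_scores = [option_logprob(question, o) for o in options]
rho, p = spearmanr(llm_scores, student_rates)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Note that summed log-probabilities favor shorter options; length normalization and the exact prompting format are design choices the paper may handle differently.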

Tool Types

Teacher Support Tools: Tools that assist teachers with lesson planning, content generation, grading, and analytics.

Tags

educational assessment · natural language processing · computer-science