How Real Is AI Tutoring? Comparing Simulated and Human Dialogues in One-on-One Instruction
This paper systematically compares AI-simulated tutoring dialogues with authentic human teacher-student dialogues using IRF coding and Epistemic Network Analysis, finding that human dialogues show superior pedagogical questioning, feedback, and cognitively-guided interaction patterns, while AI dialogues exhibit structural simplification and behavioral convergence. The work directly evaluates the pedagogical quality and instructional effectiveness of LLM-generated one-on-one tutoring interactions.
Heuristic and scaffolded teacher-student dialogues are widely regarded as critical for fostering students' higher-order thinking and deep learning. However, large language models (LLMs) currently face challenges in generating pedagogically rich interactions. This study systematically investigates the structural and behavioral differences between AI-simulated and authentic human tutoring dialogues. We conducted a quantitative comparison using an Initiation-Response-Feedback (IRF) coding scheme and Epistemic Network Analysis (ENA).