Reasoning Trajectories for Socratic Debugging of Student Code: From Misconceptions to Contradictions and Updated Beliefs
This paper introduces the task of generating reasoning trajectories for Socratic debugging conversations where AI tutors guide novice programmers to identify and correct their own programming misconceptions through cognitive dissonance rather than direct correction. The work includes a manually annotated dataset of debugging problems with reasoning trajectories and evaluates LLM-generated Socratic conversations.
In Socratic debugging, instructors guide students towards identifying and fixing a bug on their own, instead of providing the bug fix directly. Most novice programmer bugs are caused by programming misconceptions, that is, false beliefs about a programming concept. In this context, Socratic debugging can be formulated as a guided Reasoning Trajectory (RT) leading to a statement about the program's behavior that contradicts the bug-causing misconception. Upon reaching this statement, the ensuing cognitive dissonance prompts the student to revise the false belief and arrive at the bug fix on their own.
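As a minimal illustrative sketch (the misconception, program, and questions below are hypothetical examples, not drawn from the paper's dataset), a reasoning trajectory can move the student from a prediction based on the false belief to an observed program behavior that contradicts it:

```python
# Hypothetical novice bug caused by the misconception that range(1, n)
# includes n as its last value.

def sum_to_n(n):
    """Intended to return 1 + 2 + ... + n, but omits n itself."""
    total = 0
    for i in range(1, n):  # bug: range(1, n) stops at n - 1
        total += i
    return total

# A possible reasoning trajectory toward a contradicting statement:
#  1. What should sum_to_n(3) return?            -> student predicts 6
#  2. Which values does i take in the loop?      -> tracing shows 1, 2
#  3. So what does the function actually return? -> 3, which contradicts
#     the prediction and the belief that range(1, n) includes n.

if __name__ == "__main__":
    print(sum_to_n(3))  # prints 3, not the expected 6
```

Each step of the trajectory is a question the tutor can pose; the final observation supplies the contradiction that triggers the belief update.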