Partnering with AI: A Pedagogical Feedback System for LLM Integration into Programming Education

Relevance: 9/10 · 2 citations · 2025 paper

This paper develops and evaluates a pedagogical framework for LLM-driven feedback generation in secondary school Python programming education, aligning automated feedback with established pedagogical principles like mastery adaptation and progress-based scaffolding. Through mixed-method evaluation with eight secondary school computer science teachers, the study assesses how well LLM-generated feedback adheres to pedagogical standards compared to human teacher feedback.

Feedback is one of the most crucial components of effective learning. With the rise of large language models (LLMs) in recent years, research in programming education has increasingly focused on automated feedback generation to help teachers provide timely support to every student. However, prior studies often overlook key pedagogical principles, such as mastery and progress adaptation, that shape effective feedback strategies. This paper introduces a novel pedagogical framework for LLM-driven feedback generation in programming education.

Tool Types

AI Tutors: 1-to-1 conversational tutoring systems.
Teacher Support Tools: tools that assist teachers with lesson planning, content generation, grading, and analytics.

Tags

secondary school, AI evaluation, computer-science