Next-Step Hint Generation for Introductory Programming Using Large Language Models

Relevance: 7/10 · Cited by 52 · 2023 paper

This paper develops and evaluates the StAP-tutor, an LLM-based system that generates next-step hints for introductory Python programming exercises, comparing different prompt engineering approaches and assessing hint quality through student experiments and expert evaluation. The work focuses on creating pedagogically effective automated feedback that provides specific guidance without revealing complete solutions.

Large Language Models possess skills such as answering questions, writing essays, or solving programming exercises. Since these models are easily accessible, researchers have investigated their capabilities and risks for programming education. This work explores how LLMs can contribute to programming education by supporting students with automated next-step hints. We investigate prompt practices that lead to effective next-step hints and use these insights to build our StAP-tutor. We evaluate this tutor through a student experiment and an expert assessment of hint quality.
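To make the approach concrete, below is a minimal sketch of how an LLM might be prompted to produce a next-step hint that guides without revealing the full solution. This is a hypothetical illustration, not the StAP-tutor's actual prompts, model, or architecture; the function name, model choice, and prompt wording are all assumptions, and the sketch uses the OpenAI Python SDK purely as one plausible backend.

```python
# Hypothetical sketch of next-step hint generation (not the paper's
# actual prompt or system). Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def next_step_hint(exercise: str, student_code: str) -> str:
    """Ask the model for a single next-step hint that points the student
    forward without writing the complete solution for them."""
    system_prompt = (
        "You are a tutor for introductory Python programming. "
        "Given an exercise and a student's current code, suggest only the "
        "single next step the student should take. Do not write the full "
        "solution and do not reveal steps beyond the next one."
    )
    user_prompt = (
        f"Exercise:\n{exercise}\n\nStudent code so far:\n{student_code}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

The key design point, per the paper's framing, is constraining the model so the hint stays specific to the student's current code while withholding the remaining solution; in practice this hinges on the prompt practices the paper investigates.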

Tool Types

AI Tutors: 1-to-1 conversational tutoring systems.

Tags

large language model, evaluation, education, computer-science