Productive Struggle & Scaffolding
The balance between helpful AI scaffolding and over-scaffolding that removes the desirable difficulty learners need to grow.
How this was produced: We searched our corpus of high-relevance papers (scored ≥7/10) for keyword matches related to this concern theme, extracted key sections from each matched paper, then used Claude to synthesize what the literature says about this risk, including evidence for and against, gaps in measurement, and recommendations.
The research literature reveals a fundamental tension in AI-assisted learning: while LLMs can provide sophisticated scaffolding and personalized support, they risk eliminating the 'desirable difficulties' essential for deep learning. Multiple studies document how AI systems can short-circuit cognitive engagement by providing answers too readily, removing the productive struggle that builds genuine understanding. This concern is particularly acute in K-12 education, where students are still developing foundational cognitive habits.

The evidence shows that AI tutors consistently struggle to calibrate support appropriately: they tend to provide either insufficient guidance (leaving students frustrated) or excessive help (creating dependency and surface learning). Notably, high-quality human tutoring interactions outperform AI on measures of curiosity and deeper engagement, though AI shows advantages in consistency and availability.

The central challenge is keeping learners within their Zone of Proximal Development while preserving opportunities for productive struggle, a balance that current LLM systems achieve inconsistently. Studies across mathematics, programming, and language learning demonstrate that AI-generated hints and feedback often lack the adaptive fading of support that characterizes expert human tutoring, with systems providing either too much structure (bottom-out hints) or too little specificity.