Concern Landscape Summary

Metacognition & Self-regulation

Whether AI tools help or hinder learners’ ability to monitor their own understanding and self-regulate.

How this was produced: We searched our corpus of high-relevance papers (scored ≥7/10) for keyword matches related to this concern theme, extracted key sections from each matched paper, then used Claude to synthesize what the literature says about this risk, including evidence for and against, gaps in measurement, and recommendations.
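To make that method concrete, below is a minimal sketch of such a pipeline, assuming the corpus is stored as a list of records with title, relevance score, and full-text fields and that the Anthropic Python SDK is used for the synthesis step; the keyword list, field names, model identifier, and prompt wording are illustrative placeholders, not the actual configuration used to produce this summary.

    # Illustrative sketch only: field names, keywords, and model id are assumptions.
    import re
    import anthropic

    KEYWORDS = ["metacognit", "self-regulat", "cognitive offloading", "SRL"]

    def matched_excerpts(corpus, min_score=7, window=500):
        """Return keyword-matching excerpts from high-relevance papers."""
        excerpts = []
        for paper in corpus:
            if paper["relevance_score"] < min_score:
                continue
            text = paper["full_text"]
            for kw in KEYWORDS:
                for m in re.finditer(re.escape(kw), text, flags=re.IGNORECASE):
                    start = max(0, m.start() - window)
                    excerpts.append((paper["title"], text[start:m.end() + window]))
        return excerpts

    def synthesize(excerpts):
        """Ask Claude to synthesize the excerpts into a concern summary."""
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        prompt = (
            "Synthesize what these excerpts say about metacognition and "
            "self-regulation risks, including evidence for and against, "
            "gaps in measurement, and recommendations:\n\n"
            + "\n\n".join(f"[{title}] {text}" for title, text in excerpts)
        )
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model identifier
            max_tokens=2000,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

In practice a pipeline like this would also need to deduplicate overlapping excerpts and chunk the prompt to respect context limits; the sketch omits those details.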


The research literature reveals a critical tension: while AI tools can theoretically support metacognitive development through scaffolding and feedback, their current implementations often undermine self-regulated learning processes. Multiple empirical studies document 'cognitive offloading' effects where students delegate thinking to AI systems, resulting in reduced reflection, diminished need for understanding, and weaker strategic thinking. Students show consistent over-reliance on AI-generated outputs even when they possess sufficient knowledge to solve problems independently, with some studies finding acceptance rates of incorrect AI suggestions as high as 52.1%. This over-reliance is particularly pronounced among students with higher technophilic traits and those who trust AI systems more, creating a paradoxical situation where the students most enthusiastic about technology may be most vulnerable to its cognitive costs.

However, carefully designed AI systems that explicitly target metacognitive processes show promise. Systems incorporating Socratic questioning, error-based learning through erroneous examples, explicit self-assessment prompts, and adaptive scaffolding within students' Zone of Proximal Development demonstrate significant improvements in metacognitive awareness and self-regulatory behaviors. The key distinction appears to be between AI tools designed to provide answers (which promote passivity) and those designed to prompt thinking (which develop agency). Effective implementations combine AI capabilities with human oversight, use structured pedagogical frameworks, and deliberately require cognitive effort from students rather than minimizing it. The evidence suggests that metacognitive outcomes depend critically on design choices: the same underlying technology can either enhance or erode self-regulated learning depending on how it is implemented and pedagogically framed.

The literature reveals a complex relationship between AI/LLM tools and students' metacognitive and self-regulatory capabilities. Some studies describe AI systems explicitly designed to support self-regulated learning (SRL) through features such as adaptive scaffolding, personalized feedback, and metacognitive prompting, while other research raises concerns about cognitive offloading and 'metacognitive laziness.' The most direct evidence comes from studies examining AI tutoring systems, intelligent learning environments, and generative AI tools in educational contexts. Key findings suggest that AI can support metacognition when intentionally designed with scaffolding for planning, monitoring, and reflection, but that passive or over-reliant use may reduce students' engagement with these critical self-regulatory processes. The concern is particularly salient with generative AI tools like ChatGPT, where students may bypass effortful metacognitive processes by accepting AI-generated solutions without critical evaluation or self-assessment. However, the evidence base remains limited, with most studies focusing on system design rather than on longitudinal impacts on metacognitive development.

Critically, the literature distinguishes between AI systems that 'do metacognition for students' and those that 'support students doing metacognition.' Systems incorporating explicit metacognitive scaffolds, such as prompts for self-explanation, reflection, goal-setting, and progress monitoring, show more promise for supporting rather than replacing metacognitive engagement. Several papers highlight the importance of maintaining 'desirable difficulties' and ensuring that students retain agency over their learning process. The risk appears highest when AI provides complete solutions or takes over regulatory functions without requiring student engagement in planning, monitoring, or evaluation. Contextual factors matter significantly: younger learners, novices in a domain, and students with weaker self-regulatory skills may be more vulnerable to over-reliance. The integration of AI into formative assessment and the design of 'AI-aware' learning activities emerge as critical areas requiring further research and careful instructional design.