Generating AI Literacy MCQs: A Multi-Agent LLM Approach
This paper presents a multi-agent LLM system that automatically generates multiple-choice questions (MCQs) for K-12 AI literacy assessments, using critique agents to ensure questions align with learning objectives, grade levels, and Bloom's Taxonomy. The system was evaluated by three experts in K-12 AI literacy instruction, who assessed 40 generated questions against a quality rubric.
Artificial intelligence (AI) is transforming society, making it crucial to prepare the next generation through AI literacy in K-12 education. However, scalable and reliable AI literacy materials and assessment resources are lacking. To address this gap, our study presents a novel approach to generating multiple-choice questions (MCQs) for AI literacy assessments. Our method utilizes large language models (LLMs) to automatically generate scalable, high-quality assessment questions. These questions were then assessed by three experts in K-12 AI literacy instruction using a quality rubric.
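The generate-then-critique architecture described above can be sketched as a simple agent loop. This is a minimal illustration, not the paper's implementation: the agent functions below are hypothetical stubs standing in for LLM calls, and the names (`generator_agent`, `critique_agent`, `reviser_agent`) are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MCQ:
    stem: str
    options: list
    answer: str
    revisions: int = 0  # how many critique-revision rounds were applied

def generator_agent(objective: str, grade: str) -> MCQ:
    # Stub: a real system would prompt an LLM with the learning objective,
    # grade level, and target Bloom's Taxonomy level.
    return MCQ(stem=f"Which statement about {objective} is true?",
               options=["A", "B", "C", "D"], answer="A")

def critique_agent(q: MCQ, objective: str, grade: str, bloom: str) -> list:
    # Stub: returns a list of issues; an empty list means the question passes.
    # Here we flag any unrevised question once, just to exercise the loop.
    return [] if q.revisions > 0 else ["distractors too similar"]

def reviser_agent(q: MCQ, issues: list) -> MCQ:
    # Stub: a real reviser would rewrite the question to address each issue.
    q.revisions += 1
    return q

def generate_with_critique(objective, grade, bloom, max_rounds=3):
    """Generate an MCQ and loop critique/revision until it passes or rounds run out."""
    q = generator_agent(objective, grade)
    for _ in range(max_rounds):
        issues = critique_agent(q, objective, grade, bloom)
        if not issues:
            return q
        q = reviser_agent(q, issues)
    return q  # best effort after max_rounds

q = generate_with_critique("supervised learning", "grade 8", "Understand")
print(q.revisions)  # → 1 (one critique-revision round before passing)
```

Capping the loop at `max_rounds` reflects a common design choice in critique-based pipelines: it bounds cost when an LLM critic and reviser cannot converge.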