Generating AI Literacy MCQs: A Multi-Agent LLM Approach

Relevance: 7/10 · 7 citations · 2024 paper

This paper presents a multi-agent LLM system for automatically generating multiple-choice questions (MCQs) to assess AI literacy in K-12 students (grades 7-9), using critique agents to ensure questions align with learning objectives, grade levels, and Bloom's Taxonomy. Three K-12 AI literacy teaching experts evaluated 40 generated questions using a pedagogical quality rubric.

Artificial intelligence (AI) is transforming society, making it crucial to prepare the next generation through AI literacy in K-12 education. However, scalable and reliable AI literacy materials and assessment resources are lacking. To address this gap, our study presents a novel approach to generating multiple-choice questions (MCQs) for AI literacy assessments. Our method utilizes large language models (LLMs) to automatically generate scalable, high-quality assessment questions. These question…
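The paper does not include an implementation, but the described pipeline (a generator agent whose draft questions are checked by critique agents against the learning objective, grade level, and Bloom's Taxonomy level) can be sketched roughly as below. The function name `call_llm`, the prompt wording, and the JSON schema are placeholder assumptions for illustration, not the authors' actual system.

```python
# Rough sketch of a generate-then-critique loop for AI literacy MCQs.
# `call_llm`, the prompts, and the JSON schema are assumptions, not the
# paper's published implementation.
import json
from dataclasses import dataclass

@dataclass
class MCQ:
    stem: str
    options: list[str]
    answer: str          # letter of the correct option, e.g. "B"
    bloom_level: str     # e.g. "Understand", "Apply"

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API; swap in a real client."""
    raise NotImplementedError("plug in an LLM provider here")

GENERATOR_PROMPT = (
    "Write one multiple-choice question on the AI literacy objective "
    "'{objective}' for grade {grade}, at Bloom's level '{bloom}'. "
    "Return JSON with keys: stem, options (4 strings), answer, bloom_level."
)

CRITIC_PROMPT = (
    "You are a K-12 AI literacy reviewer. Given this MCQ as JSON:\n{mcq}\n"
    "Check: (1) alignment with the objective '{objective}', (2) suitability "
    "for grade {grade}, (3) match to Bloom's level '{bloom}'. "
    "Reply PASS, or list concrete revisions."
)

def generate_mcq(objective: str, grade: int, bloom: str, max_rounds: int = 3) -> MCQ:
    prompt = GENERATOR_PROMPT.format(objective=objective, grade=grade, bloom=bloom)
    for _ in range(max_rounds):
        raw = call_llm(prompt)
        verdict = call_llm(CRITIC_PROMPT.format(mcq=raw, objective=objective,
                                                grade=grade, bloom=bloom))
        if verdict.strip().upper().startswith("PASS"):
            return MCQ(**json.loads(raw))
        # Feed the critic's revision notes back into the next generation round.
        prompt = f"{prompt}\nRevise per this feedback:\n{verdict}"
    return MCQ(**json.loads(raw))  # fall back to the last attempt if no PASS
```

The key design point this sketch captures is that critique is a separate agent call with its own rubric, so generation quality does not depend on the generator self-assessing its output.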

Tool Types

Teacher Support Tools: tools that assist teachers with lesson planning, content generation, grading, and analytics.

Tags

LLM evaluation · K-12 education · computer-science