Human or AI? Comparing Design Thinking Assessments by Teaching Assistants and Bots

Relevance: 7/10 (2025 paper)

This paper compares AI-generated assessment scores with Teaching Assistant (TA) scores for design thinking posters created by 13- to 14-year-old secondary students in Singapore, examining agreement across rubric dimensions such as empathy, pain points, and visual communication. The study also investigates teacher preferences for AI versus human grading and explores hybrid assessment models for scalable evaluation in design thinking education.

As design thinking gains ground in secondary and tertiary education, educators face a mounting challenge: evaluating creative artefacts that combine visual and textual elements. Traditional rubric-based assessment is laborious and time-consuming, and its reliance on Teaching Assistants (TAs) across large, multi-section cohorts makes it inconsistent. This paper presents an exploratory study investigating the reliability and perceived accuracy of AI-assisted assessment vis-à-vis human TA grading.

Tool Types

Teacher Support Tools: tools that assist teachers with lesson planning, content generation, grading, and analytics.

Tags

secondary school, AI evaluation, computer-science