LLM Safety for Children

Relevance: 7/10 4 cited 2025 paper

This paper develops a comprehensive taxonomy of content harms specific to children interacting with LLMs, creates diverse child user models based on child psychology literature, and evaluates six state-of-the-art LLMs for safety gaps through red-teaming. The work focuses on identifying safety issues unique to children (ages below 18) that standard adult-focused safety evaluations miss.

This paper analyzes the safety of Large Language Models (LLMs) in interactions with children below age of 18 years. Despite the transformative applications of LLMs in various aspects of children's lives such as education and therapy, there remains a significant gap in understanding and mitigating potential content harms specific to this demographic. The study acknowledges the diverse nature of children often overlooked by standard safety evaluations and proposes a comprehensive approach to evalu

Framework Categories

Tool Types

AI Tutors 1-to-1 conversational tutoring systems.

Tags

safety evaluation language model childrencomputer-science