LLMs and Childhood Safety: Identifying Risks and Proposing a Protection Framework for Safe Child-LLM Interaction

Research / Other | Relevance: 7/10 | 7 citations | 2025 paper

This paper conducts a systematic literature review of safety risks when children interact with LLMs, identifying concerns around harmful content, bias, developmental inappropriateness, and adversarial attacks, and proposes a protection framework with measurable evaluation targets for child-safe LLM deployment.

Large Language Models (LLMs) are increasingly embedded in child-facing contexts such as education, companionship, and creative tools, but their deployment raises safety, privacy, developmental, and security risks. We conduct a systematic literature review of child-LLM interaction risks and organize the findings into a structured map that separates (i) parent-reported concerns, (ii) empirically documented harms, and (iii) gaps between perceived and observed risk. Moving beyond descriptive listing, we com…

Study Type

Research / Other

Framework Categories

Tool Types

AI Tutors: 1-to-1 conversational tutoring systems.

Tags

large language model, evaluation, education, computer-science