LLMs and Childhood Safety: Identifying Risks and Proposing a Protection Framework for Safe Child-LLM Interaction
This paper conducts a systematic literature review of safety, privacy, developmental, and security risks associated with children's interactions with LLMs, and proposes a protection framework with measurable evaluation targets including content safety, age-appropriate readability, bias checks, and prompt-injection robustness. The framework is designed to guide developers, educators, and policymakers in assessing child-safe LLM deployments in educational and companion contexts.
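As one illustration of how a measurable evaluation target such as age-appropriate readability could be operationalized, the sketch below scores an LLM response with the standard Flesch-Kincaid grade-level formula and compares it against a target grade. The function names, the grade threshold, and the vowel-group syllable heuristic are illustrative assumptions, not part of the proposed framework.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a text.

    Standard formula:
        0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    Syllables are estimated by counting vowel groups (a rough heuristic).
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def is_age_appropriate(response: str, max_grade: float = 5.0) -> bool:
    """Flag responses whose estimated reading level exceeds the target grade."""
    return flesch_kincaid_grade(response) <= max_grade

if __name__ == "__main__":
    reply = "The sun is a big, bright star. It gives us light and warmth every day."
    print(round(flesch_kincaid_grade(reply), 1), is_age_appropriate(reply))
```

In practice, such a readability check would sit alongside the framework's other targets (content safety, bias checks, prompt-injection robustness), each with its own measurable pass/fail criterion.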
Large Language Models (LLMs) are increasingly embedded in child-facing contexts such as education, companionship, and creative tools, but their deployment raises safety, privacy, developmental, and security risks. We conduct a systematic literature review of child-LLM interaction risks and organize findings into a structured map that separates (i) parent-reported concerns, (ii) empirically documented harms, and (iii) gaps between perceived and observed risk. Moving beyond descriptive listing, we com