LLMs and Childhood Safety: Identifying Risks and Proposing a Protection Framework for Safe Child-LLM Interaction
This paper conducts a systematic literature review of safety risks when children interact with LLMs, identifying concerns around harmful content, bias, developmental inappropriateness, and adversarial attacks, and proposes a protection framework with measurable evaluation targets for child-safe LLM deployment.
Large Language Models (LLMs) are increasingly embedded in child-facing contexts such as education, companionship, and creative tools, but their deployment raises safety, privacy, developmental, and security risks. We conduct a systematic literature review of child-LLM interaction risks and organize the findings into a structured map that separates (i) parent-reported concerns, (ii) empirically documented harms, and (iii) gaps between perceived and observed risk. Moving beyond descriptive listing, we com