KidLM: Advancing Language Models for Children – Early Insights and Future Directions
This paper introduces KidLM, a language model designed specifically for children, built on a novel data collection pipeline and a new training objective (Stratified Masking). The model is evaluated, using both automated metrics and human evaluation, on its ability to understand lower grade-level text, avoid stereotypes, and capture children's unique preferences.
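The core idea behind a stratified masking objective can be sketched as follows: tokens are masked with a probability that depends on which stratum of the vocabulary they belong to, so the model is pushed to predict domain-relevant words more often. This is a minimal illustrative sketch, not the paper's exact formulation; the vocabulary set, probability values, and function names below are assumptions.

```python
import random

# Assumed child-specific vocabulary stratum (illustrative only).
KID_VOCAB = {"puppy", "playground", "recess", "storybook"}

def stratified_mask(tokens, p_kid=0.3, p_other=0.1,
                    mask_token="[MASK]", rng=None):
    """Mask each token with a stratum-dependent probability.

    Tokens in the child-specific stratum are masked at a higher rate
    (p_kid) than other tokens (p_other), biasing the pre-training
    signal toward vocabulary meaningful to children.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    masked = []
    for tok in tokens:
        p = p_kid if tok in KID_VOCAB else p_other
        masked.append(mask_token if rng.random() < p else tok)
    return masked

tokens = ["the", "puppy", "ran", "to", "the", "playground"]
print(stratified_mask(tokens))
```

The per-stratum probabilities would in practice be tuned against the domain-specific corpus rather than fixed constants.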
Recent studies highlight the potential of large language models in creating educational tools for children, yet significant challenges remain in maintaining key child-specific properties such as linguistic nuances, cognitive needs, and safety standards. In this paper, we explore foundational steps toward the development of child-specific language models, emphasizing the necessity of high-quality pre-training data. We introduce a novel user-centric data collection pipeline that involves gathering