Language Bottleneck Models for Qualitative Knowledge State Modeling

Benchmark (Published & Automated) · Relevance: 7/10 · 2 citations · 2025 paper

This paper introduces Language Bottleneck Models (LBMs) that use LLMs to generate interpretable natural language summaries of student knowledge states for predicting future performance, validated on synthetic and real-world datasets. The approach goes beyond traditional Cognitive Diagnosis and Knowledge Tracing by producing qualitative knowledge state descriptions that can capture nuanced insights like misconceptions.

Accurately assessing student knowledge is central to education. Cognitive Diagnosis (CD) models estimate student proficiency at a fixed point in time, while Knowledge Tracing (KT) methods model evolving knowledge states to predict future performance. However, existing approaches either provide quantitative concept-mastery estimates with limited expressivity (CD, probabilistic KT) or prioritize predictive accuracy at the cost of interpretability (deep learning KT). We propose Language Bottleneck Models (LBMs), which use an LLM to distill a student's interaction history into an interpretable natural-language summary of their knowledge state, from which future performance is then predicted.
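The bottleneck idea described above can be sketched as a two-stage pipeline: an encoder compresses the interaction history into a natural-language summary, and a decoder predicts future performance from that summary alone. The sketch below is a minimal illustration with hypothetical helper names; the paper uses LLMs for both stages, stubbed here with simple rules so the example is self-contained and runnable.

```python
def summarize_knowledge_state(history):
    """Encoder stage: compress a list of (concept, correct) interactions
    into a natural-language knowledge-state summary (the 'bottleneck').
    An LLM would play this role in the actual LBM approach."""
    by_concept = {}
    for concept, correct in history:
        by_concept.setdefault(concept, []).append(correct)
    parts = []
    for concept, results in sorted(by_concept.items()):
        rate = sum(results) / len(results)
        if rate >= 0.75:
            level = "has mastered"
        elif rate >= 0.5:
            level = "is developing"
        else:
            level = "struggles with"
        parts.append(f"The student {level} {concept}.")
    return " ".join(parts)

def predict_correct(summary, concept):
    """Decoder stage: predict whether the student will answer a question
    on `concept` correctly, using ONLY the summary -- no raw history
    crosses the bottleneck, which is what makes it interpretable."""
    return f"has mastered {concept}" in summary

# Toy interaction history: 1 = correct answer, 0 = incorrect.
history = [("fractions", 1), ("fractions", 1), ("fractions", 1),
           ("negative numbers", 0), ("negative numbers", 1)]
summary = summarize_knowledge_state(history)
print(summary)
print(predict_correct(summary, "fractions"))        # True
print(predict_correct(summary, "negative numbers")) # False
```

Because the predictor sees only the text summary, any information it uses for prediction must be legible in the summary itself; that constraint is what lets the summary surface qualitative insights such as misconceptions.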

Study Type

Benchmark (Published & Automated)

Tool Types

AI Tutors: 1-to-1 conversational tutoring systems.
Personalised Adaptive Learning: systems that adapt content and difficulty to individual learners.

Tags

knowledge tracing · student model · computer-science