Disentangled Knowledge Tracing for Alleviating Cognitive Bias
This paper proposes DisKT (Disentangled Knowledge Tracing), a model that addresses cognitive bias in Intelligent Tutoring Systems by separately modeling students' familiar and unfamiliar abilities through causal inference, thereby mitigating cognitive overload for underperformers and cognitive underload for overperformers. DisKT further incorporates a contradiction attention mechanism and Item Response Theory to improve prediction accuracy and interpretability, evaluated across 11 benchmarks.
In the realm of Intelligent Tutoring Systems (ITS), accurately assessing students' knowledge states through Knowledge Tracing (KT) is crucial for personalized learning. However, due to data bias, i.e., the unbalanced distribution of question groups (e.g., concepts), conventional KT models suffer from cognitive bias, which tends to result in cognitive underload for overperformers and cognitive overload for underperformers. More seriously, this bias is amplified with the exercise recommendation