Universal Automatic Short Answer Grading (ASAG) Model: A Comprehensive Approach
This paper introduces a Universal ASAG Model that combines multiple natural language processing techniques, including Sentence-BERT (SBERT), Transformer-based attention, BERT, LSTMs, and BM25-based term weighting, and achieves state-of-the-art results.
Automated Short Answer Grading (ASAG) plays a crucial role in modern e-learning systems by ensuring the efficient, accurate, and consistent assessment of student responses in online education. However, many existing ASAG models struggle to generalize across different domains and question complexities, often facing challenges such as limited training data, high computational costs, and variation in the length of student answers (SA) relative to reference answers (RA). This paper introduces a Universal ASAG Model designed to address these challenges.
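As a minimal sketch of one of the components listed above, the following illustrates how BM25-based term weighting can score a student answer (SA) against candidate reference answers (RA). This is a hedged, self-contained illustration, not the paper's actual implementation: the tokenization, parameter values (`k1`, `b`), and example answers are assumptions chosen for clarity.

```python
import math
from collections import Counter

def make_bm25_scorer(corpus_tokens, k1=1.5, b=0.75):
    """Build a BM25 scoring function over a corpus of tokenized
    reference answers. k1 and b are the standard BM25 parameters
    (values here are common defaults, not taken from the paper)."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    # Document frequency: how many reference answers contain each term.
    df = Counter(t for d in corpus_tokens for t in set(d))

    def idf(term):
        n = df.get(term, 0)
        return math.log((N - n + 0.5) / (n + 0.5) + 1)

    def score(query_tokens, doc_tokens):
        freqs = Counter(doc_tokens)
        s = 0.0
        for t in query_tokens:
            f = freqs.get(t, 0)
            # Term-frequency saturation with length normalization.
            s += idf(t) * f * (k1 + 1) / (
                f + k1 * (1 - b + b * len(doc_tokens) / avgdl)
            )
        return s

    return score

# Hypothetical reference answers for two different questions.
refs = [
    "photosynthesis converts light energy into chemical energy".split(),
    "mitosis is the process of cell division".split(),
]
score = make_bm25_scorer(refs)

# A student answer: it should match the photosynthesis RA far better.
sa = "plants convert light energy into chemical energy".split()
best = max(range(len(refs)), key=lambda i: score(sa, refs[i]))
```

In a full ASAG pipeline, a sparse lexical score like this would typically be combined with dense semantic similarity (e.g. cosine similarity of SBERT embeddings of SA and RA) so that paraphrased answers with little word overlap are still credited.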