A Review on the Use of Large Language Models as Virtual Tutors
This paper is a systematic review of large language models (LLMs) used as virtual tutors in education, examining applications including automatic question generation, student assessment, feedback provision, and personalized learning across K-12 and higher education contexts. The review analyzes how LLMs like GPT-3 and BERT are deployed for educational tasks such as generating explanations, grading answers, and providing tutoring support.
Transformer architectures, one of the most recent advances in natural language processing, are particularly effective at capturing long-range dependencies in text. They form the basis of the cutting-edge large language models (LLMs) that have attracted enormous attention across many fields and industrial sectors, among which education stands out. Accordingly, these generative artificial intelligence-based solutions have driven a change in techniques and the evolution in education.
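As a purely illustrative aside, the snippet below is a minimal, hypothetical sketch of one of the tutoring tasks surveyed here (automatic question generation) using a transformer-based LLM; the model name, prompt, and parameters are assumptions for illustration, not a method drawn from the reviewed studies.

```python
# Minimal sketch (illustrative only): prompting a transformer-based LLM to
# draft a practice question, one of the virtual-tutor tasks covered in this
# review. "gpt2" is a small stand-in model; real deployments would use a
# larger instruction-tuned LLM.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Write one multiple-choice question that tests a student's "
    "understanding of photosynthesis:\n"
)

# max_new_tokens bounds the length of the generated continuation
output = generator(prompt, max_new_tokens=60, do_sample=True)
print(output[0]["generated_text"])
```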