LCES: Zero-shot Automated Essay Scoring via Pairwise Comparisons Using Large Language Models

Relevance: 7/10 · 4 citations · 2025 paper

This paper proposes LCES, a zero-shot automated essay scoring method that uses large language models to perform pairwise comparisons between essays and aggregates these comparisons into continuous scores using RankNet, avoiding the need for prompt-specific training data.

Recent advances in large language models (LLMs) have enabled zero-shot automated essay scoring (AES), providing a promising way to reduce the cost and effort of essay scoring in comparison with manual grading. However, most existing zero-shot approaches rely on LLMs to directly generate absolute scores, which often diverge from human evaluations owing to model biases and inconsistent scoring. To address these limitations, we propose LLM-based Comparative Essay Scoring (LCES), a method that formulates AES as a pairwise comparison task: the LLM judges which of two essays is better, and the comparisons are aggregated into continuous scores using RankNet, without requiring prompt-specific training data.
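As a minimal sketch of the aggregation idea (not the authors' code), the snippet below learns one latent score per essay so that a RankNet-style logistic loss matches a set of pairwise preference labels. The comparison data here is simulated; in LCES those labels would come from an LLM asked which of two essays is better.

```python
# Sketch: turning pairwise "essay i beats essay j" judgments into continuous
# scores with a RankNet-style objective. Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_essays = 20
true_quality = rng.normal(size=n_essays)  # hidden quality, used only to simulate judgments

# Simulate pairwise judgments: label 1 means essay i was preferred over essay j.
pairs = []
for _ in range(300):
    i, j = rng.choice(n_essays, size=2, replace=False)
    p_win = 1.0 / (1.0 + np.exp(-(true_quality[i] - true_quality[j])))
    pairs.append((i, j, int(rng.random() < p_win)))

# RankNet-style aggregation: fit scores so sigmoid(s_i - s_j) matches the labels.
scores = np.zeros(n_essays)
lr = 0.02
for _ in range(500):
    grad = np.zeros(n_essays)
    for i, j, label in pairs:
        p = 1.0 / (1.0 + np.exp(-(scores[i] - scores[j])))
        g = p - label            # gradient of the binary cross-entropy w.r.t. (s_i - s_j)
        grad[i] += g
        grad[j] -= g
    scores -= lr * grad          # gradient descent step

# The learned scores are continuous and can be rescaled to any rubric range.
print(np.corrcoef(scores, true_quality)[0, 1])  # correlation with true quality should be high
```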

Framework Categories

Tool Types

Teacher Support Tools: Tools that assist teachers with lesson planning, content generation, grading, and analytics.

Tags

automated essay scoring, evaluation, computer-science