Exploring LLM Prompting Strategies for Joint Essay Scoring and Feedback Generation

Relevance: 7/10 · 71 citations · 2024 paper

This paper explores prompting strategies for large language models to jointly perform automated essay scoring and generate individualized feedback for student essays, evaluating both the scoring accuracy and helpfulness of generated feedback. The work uses the ASAP dataset of student essays and compares zero-shot and few-shot prompting approaches for providing explanatory feedback to help students improve their writing.
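
To make the compared setups concrete, here is a minimal sketch of what zero-shot and few-shot prompts for joint scoring and feedback might look like. This is an illustration only, not the authors' actual prompts: the rubric text, score range, output format, and example essays are placeholders.

```python
# Illustrative sketch of zero-shot vs. few-shot prompting for joint essay
# scoring and feedback generation. Prompt wording, rubric, and score range
# are assumptions, not the prompts used in the paper.

def build_zero_shot_prompt(essay: str, rubric: str,
                           min_score: int = 1, max_score: int = 6) -> str:
    """Single prompt asking the LLM for a holistic score plus feedback."""
    return (
        "You are an experienced writing teacher.\n"
        f"Scoring rubric:\n{rubric}\n\n"
        f"Student essay:\n{essay}\n\n"
        f"1. Assign a holistic score from {min_score} to {max_score}.\n"
        "2. Give individualized feedback the student can use to improve.\n"
        "Answer in the format:\nScore: <number>\nFeedback: <text>"
    )


def build_few_shot_prompt(essay: str, rubric: str,
                          examples: list[tuple[str, int, str]]) -> str:
    """Few-shot variant: prepend scored example essays with feedback."""
    shots = "\n\n".join(
        f"Student essay:\n{ex_essay}\nScore: {ex_score}\nFeedback: {ex_feedback}"
        for ex_essay, ex_score, ex_feedback in examples
    )
    return (
        "You are an experienced writing teacher.\n"
        f"Scoring rubric:\n{rubric}\n\n"
        f"{shots}\n\n"
        f"Student essay:\n{essay}\nScore:"
    )
```

The returned string would then be sent to an LLM, and the score and feedback parsed from its reply; the paper's evaluation compares the resulting scores against human ratings on ASAP and assesses the helpfulness of the feedback.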

Individual feedback can help students improve their essay writing skills. However, the manual effort required to provide such feedback limits individualization in practice. Automatically generated essay feedback may serve as an alternative to guide students at their own pace, convenience, and desired frequency. Large language models (LLMs) have demonstrated strong performance in generating coherent and contextually relevant text. Yet, their ability to provide helpful essay feedback is unclear.

Tool Types

Teacher Support Tools: tools that assist teachers with lesson planning, content generation, grading, and analytics.

Tags

automated essay scoring, evaluation, computer-science