Bibliographic Record - Detail View
Author(s) | Litman, Diane; Zhang, Haoran; Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine |
Title | A Fairness Evaluation of Automated Methods for Scoring Text Evidence Usage in Writing |
Source | (2021), (13 pages) |
PDF full text (1); PDF full text (2) |
Language | English |
Document type | print; online; monograph |
Keywords | Essays; Writing Evaluation; Models; Accuracy; Ethics; Comparative Analysis; Computer Software; Prediction; Computational Linguistics; Elementary School Students; Scores; Student Characteristics; Socioeconomic Status; Males; African American Students; Lunch Programs |
Abstract | Automated Essay Scoring (AES) can reliably grade essays at scale and reduce human effort in both classroom and commercial settings. There are currently three dominant supervised learning paradigms for building AES models: feature-based, neural, and hybrid. While feature-based models are more explainable, neural network models often outperform feature-based models in terms of prediction accuracy. To create models that are accurate and explainable, hybrid approaches combining neural network and feature-based models are of increasing interest. We compare these three types of AES models with respect to a different evaluation dimension, namely algorithmic fairness. We apply three definitions of AES fairness to an essay corpus scored by different types of AES systems with respect to upper elementary students' use of text evidence. Our results indicate that different AES models exhibit different types of biases, spanning students' gender, race, and socioeconomic status. We conclude with a step towards mitigating AES bias once detected. [This paper was published in: "AIED 2021," edited by I. Roll et al., Springer Nature Switzerland AG, 2021, pp. 255-67.] (As Provided). |
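The abstract describes evaluating AES models for fairness across student subgroups (gender, race, socioeconomic status). The record does not spell out the paper's three fairness definitions, so the sketch below is only an illustration of one common style of check: comparing a scoring model's error and mean predicted score per demographic group, where a large gap between groups would flag potential bias. The function name and metric choices are assumptions, not the paper's method.

```python
# Illustrative per-group fairness check for an automated essay scorer.
# Assumed setup: human_scores and model_scores are parallel lists of
# essay scores, and groups holds a demographic label per essay.
from collections import defaultdict

def group_fairness_report(human_scores, model_scores, groups):
    """Compute per-group count, mean squared error, and mean model score."""
    by_group = defaultdict(list)
    for h, m, g in zip(human_scores, model_scores, groups):
        by_group[g].append((h, m))
    report = {}
    for g, pairs in by_group.items():
        n = len(pairs)
        mse = sum((h - m) ** 2 for h, m in pairs) / n
        mean_model = sum(m for _, m in pairs) / n
        report[g] = {"n": n, "mse": mse, "mean_model_score": mean_model}
    return report

# Toy data: the model agrees with humans for group "A" but
# systematically under-scores group "B".
human = [3, 4, 2, 4, 3, 1]
model = [3, 4, 2, 2, 2, 1]
demo = ["A", "A", "A", "B", "B", "B"]
print(group_fairness_report(human, model, demo))
```

A disparity such as group B's nonzero MSE and lower mean model score, against group A's perfect agreement, is the kind of signal a fairness evaluation would surface before any mitigation step.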
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2024/01/01 |