Bibliographic Record - Detail View
Authors | Raczynski, Kevin; Cohen, Allan |
---|---|
Title | Appraising the Scoring Performance of Automated Essay Scoring Systems--Some Additional Considerations: Which Essays? Which Human Raters? Which Scores? |
Source | In: Applied Measurement in Education, 31 (2018) 3, pp. 233-240 (8 pages) |
Language | English |
Document type | print; online; journal article |
ISSN | 0895-7347 |
DOI | 10.1080/08957347.2018.1464449 |
Keywords | Essay Tests; Test Scoring Machines; Test Validity; Evaluators; Scoring Formulas; Interrater Reliability; Accuracy; Evaluation Methods; Grade 7; Measurement Techniques; Models; Expertise; Statistical Analysis |
Abstract | The literature on Automated Essay Scoring (AES) systems has provided useful validation frameworks for any assessment that includes AES scoring. Furthermore, evidence for the scoring fidelity of AES systems is accumulating. Yet questions remain when appraising the scoring performance of AES systems. These questions include: (a) which essays are used to calibrate and test AES systems; (b) which human raters provided the scores on these essays; and (c) given that multiple human raters are generally used for this purpose, which human scores should ultimately be used when there are score disagreements? This article provides commentary on the first two questions and an empirical investigation into the third question. The authors suggest that addressing these three questions strengthens the scoring component of the validity argument for any assessment that includes AES scoring. (As Provided). |
Notes | Routledge. Available from: Taylor & Francis, Ltd. 530 Walnut Street Suite 850, Philadelphia, PA 19106. Tel: 800-354-1420; Tel: 215-625-8900; Fax: 215-207-0050; Web site: http://www.tandf.co.uk/journals |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Updated | 2020/01/01 |