Bibliographic record - detail view
Authors | Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi |
Title | Application of an Automated Essay Scoring Engine to English Writing Assessment Using Many-Facet Rasch Measurement |
Source | In: Language Testing, 40 (2023) 1, pp. 61-85 (25 pages) |
PDF full text |
Additional information | ORCID (Chan, Kinnie Kin Yee) |
Language | English |
Document type | print; online; journal article |
ISSN | 0265-5322 |
DOI | 10.1177/02655322221076025 |
Keywords | Computer Assisted Testing; Essays; Scoring; Scores; Test Scoring Machines; Essay Tests; Writing Evaluation; Elementary School Students; Secondary School Students; Item Response Theory; Prompting; Difficulty Level; Evaluators; Grading |
Abstract | We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by applying two procedures novel to written-language assessment: the logistic transformation of AES raw scores into hierarchically ordered grades, and the co-calibration of all essay scoring data in a single Rasch measurement framework. A total of 3453 essays were written by 589 US students (in Grades 4, 6, 8, 10, and 12), in response to 18 National Assessment of Educational Progress (NAEP) writing prompts at three grade levels (4, 8, & 12). We randomly assigned one of two versions of the assessment, A or B, to each student. Each version comprised a narrative (N), an informative (I), and a persuasive (P) prompt. Nineteen experienced assessors graded the essays holistically using NAEP scoring guidelines, following a rotating plan in which each essay was rated by four raters. Each essay was additionally scored using the IEA. We estimated the effects of rater, prompt, student, and rubric by using a Many-Facet Rasch Measurement (MFRM) model. Finally, within a single Rasch measurement scale, we co-calibrated the students' grades from human raters and their grades from the IEA to compare them. The AES scores maintained equivalence with the human ratings and were more consistent than those from human raters. (As Provided). |
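The abstract's first novel step, the logistic transformation of AES raw scores onto the Rasch logit scale, can be sketched as below. This is an illustrative minimal sketch only: the study itself used MFRM estimation (e.g., with dedicated Rasch software), and the function name and the simple proportion-based log-odds mapping here are assumptions, not the authors' exact procedure.

```python
import math

def raw_to_logit(raw_score: float, max_score: float) -> float:
    """Map a raw essay score to the logit (log-odds) scale.

    The raw score is first expressed as a proportion p of the maximum
    attainable score, then transformed via logit(p) = ln(p / (1 - p)),
    the transformation underlying Rasch measurement. Extreme scores
    (0 or max) are nudged inward to keep the log-odds finite, a common
    practical convention.
    """
    p = raw_score / max_score
    eps = 0.5 / max_score  # half-score correction for extreme values
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))
```

For example, a score at exactly half the maximum maps to 0 logits, scores above the midpoint to positive logits, and scores below it to negative logits, yielding the hierarchically ordered interval-like scale on which human and machine grades can be co-calibrated.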
Notes | SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: https://sagepub.com |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2024/01/01 |