Bibliographic record - detail view

 
Authors: Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A.
Title: Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Source: (2023), (42 pages)
Full text available as PDF
Additional information: ORCID (Matta, Michael); ORCID (Mercer, Sterett H.); ORCID (Keller-Margulis, Milena A.)
Language: English
Document type: printed; online; monograph
Keywords: Bias; Automation; Writing Evaluation; Scoring; Writing Tests; Elementary School Students; Middle School Students; Grade 4; Grade 7; Predictive Validity; Essays
Abstract: Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to implementing unfair practices with negative consequences on student learning. The goal of this study was to investigate score bias of writeAlizer, a free and open-source automated writing evaluation program. For 421 students in grades 4 and 7 who completed a state writing exam that included composition and multiple-choice revising and editing questions, writeAlizer was used to generate automated writing quality scores for the composition section. Then, we used multiple regression models to investigate whether writeAlizer scores demonstrated differential predictions of the composition and overall scores on the state-mandated writing exam for students from different racial or ethnic groups. No evidence of bias for automated scores was observed. However, after controlling for automated scores in grade 4, we found statistically significant group differences in regression models predicting overall state test scores three years later but not the essay composition scores. We hypothesize that the multiple-choice revising and editing sections, rather than the scoring approach used for the essay portion, introduced construct-irrelevant variance and might lead to differential performance among groups. Implications for assessment development and score use are discussed. [This paper was published in "School Psychology" v38 n3 p173-181 2023.] (As Provided).
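The differential prediction analysis summarized in the abstract can be sketched as a nested regression comparison. The following is a minimal illustration, not the authors' code: the column names (auto_score, state_score, group), the input file, and the use of statsmodels are assumptions, and the study's actual models may include additional covariates.

```python
# Hedged sketch of a Cleary-style differential prediction test, similar in
# spirit to the regression approach described in the abstract.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per student with the automated writing quality
# score, the state writing exam score, and a racial/ethnic group label.
df = pd.read_csv("writing_scores.csv")  # placeholder file name

# Reduced model: a common regression line for all groups.
reduced = smf.ols("state_score ~ auto_score", data=df).fit()

# Full model: group-specific intercepts and slopes. A significant improvement
# in fit would indicate differential prediction (intercept or slope bias).
full = smf.ols("state_score ~ auto_score * C(group)", data=df).fit()

# F-test comparing the nested models.
print(anova_lm(reduced, full))
```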
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Last updated: 2024/01/01
Check document delivery options and library holdings
 

Location-independent services
Since no ISBN is available, no (further) URL could be generated.
Please open the search form of the Karlsruhe Virtual Catalog (KVK).
There you can search numerous library catalogs yourself.
