Bibliographic record - detail view
Authors | McNamara, Danielle S.; Crossley, Scott A.; Roscoe, Rod D.; Allen, Laura K.; Dai, Jianmin |
---|---|
Title | A Hierarchical Classification Approach to Automated Essay Scoring |
Source | Assessing Writing 23 (2015), pp. 35-59 (25 pages) |
Language | English |
Document type | print; online; journal article |
ISSN | 1075-2935 |
DOI | 10.1016/j.asw.2014.09.002 |
Keywords | Automation; Scoring; Essays; Persuasive Discourse; Grade 9; Grade 11; College Freshmen; Time; Accuracy; Feedback (Response); Word Frequency; Classification; Rating Scales; Multivariate Analysis; Vocabulary; Assessment; Essay Instruction; Persuasion; Persuasive Communication; Word Analysis; Frequency; Classification System |
Abstract | This study evaluates the use of a hierarchical classification approach to automated assessment of essays. Automated essay scoring (AES) generally relies on machine learning techniques that compute essay scores using a set of text variables. Unlike previous studies that rely on regression models, this study computes essay scores using a hierarchical approach, analogous to an incremental algorithm for hierarchical classification. The corpus in this study consists of 1243 argumentative (persuasive) essays written on 14 different prompts, across three different grade levels (9th grade, 11th grade, college freshman) and four different time limits for writing, or temporal conditions (untimed essays and essays written in 10, 15, and 25 minute increments). The features included in the analysis are computed using the automated tools Coh-Metrix, the Writing Assessment Tool (WAT), and Linguistic Inquiry and Word Count (LIWC). Overall, the models developed to score all the essays in the data set report 55% exact accuracy and 92% adjacent accuracy between the predicted essay scores and the human scores. The results indicate that this is a promising approach to AES that could provide more specific feedback to writers and may be relevant to other natural language computations, such as the scoring of short answers in comprehension or knowledge assessments. (As Provided). |
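The abstract reports two evaluation metrics, 55% exact accuracy and 92% adjacent accuracy. These metrics are standard in essay-scoring evaluation; the following minimal Python sketch (function and variable names are illustrative, not taken from the paper) shows how they are typically computed from predicted and human scores:

```python
def exact_accuracy(pred, human):
    """Fraction of essays whose predicted score equals the human score."""
    return sum(p == h for p, h in zip(pred, human)) / len(human)

def adjacent_accuracy(pred, human):
    """Fraction of essays whose predicted score is within one point of the human score."""
    return sum(abs(p - h) <= 1 for p, h in zip(pred, human)) / len(human)

# Hypothetical scores on a 1-6 rating scale
human = [3, 4, 2, 5, 4]
pred = [3, 5, 2, 3, 4]
print(exact_accuracy(pred, human))     # 3 of 5 predictions match exactly -> 0.6
print(adjacent_accuracy(pred, human))  # 4 of 5 are within one point -> 0.8
```

Adjacent accuracy is always at least as high as exact accuracy, which is why the paper's 92% adjacent figure exceeds its 55% exact figure.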
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2020/01/01 |