Bibliographic Record - Detail View
| Author | James, Cindy L. |
|---|---|
| Title | Electronic Scoring of Essays: Does Topic Matter? |
| Source | In: Assessing Writing, 13 (2008) 2, pp. 80-92 (13 pages) |
| Full text | PDF |
| Language | English |
| Document type | print; online; journal article |
| ISSN | 1075-2935 |
| DOI | 10.1016/j.asw.2008.05.001 |
| Keywords | Predictive Validity; Scoring; Electronic Equipment; Essays; Writing Skills; Computer Assisted Testing |
| Abstract | The scoring of student essays by computer has generated much debate and subsequent research. The majority of the research thus far has focused on validating the automated scoring tools by comparing the electronic scores to human scores of writing or other measures of writing skills, and on exploring the predictive validity of the automated scores. However, very little research has investigated possible effects of the essay prompts. This study endeavoured to do so by exploring test scores for three different prompts for the "ACCUPLACER"[R] "WritePlacer"[R] "Plus" test, which is scored by the "IntelliMetric"[R] automated scoring system. The results indicated that there was no significant difference among the prompts overall, among males, between males and females, by native language, or in comparison to scores generated by human raters. However, there was a significant difference in mean scores by topic for females. (Contains 7 tables.) |
| Notes | Elsevier. 6277 Sea Harbor Drive, Orlando, FL 32887-4800. Tel: 877-839-7126; Tel: 407-345-4020; Fax: 407-363-1354; e-mail: usjcs@elsevier.com; Web site: http://www.elsevier.com |
| Indexed by | ERIC (Education Resources Information Center), Washington, DC |
| Last updated | 2017/4/10 |