Bibliographic record - detail view
Authors | Kaya, Elif; O'Grady, Stefan; Kalender, Ilker |
---|---|
Titel | IRT-Based Classification Analysis of an English Language Reading Proficiency Subtest |
Source | In: Language Testing, 39 (2022) 4, pp. 541-566 (26 pages) |
Full-text PDF |
Additional information | ORCID (Kaya, Elif); ORCID (O'Grady, Stefan); ORCID (Kalender, Ilker) |
Language | English |
Document type | print; online; journal article |
ISSN | 0265-5322 |
DOI | 10.1177/02655322211068847 |
Keywords | Item Response Theory; Test Items; Language Tests; Classification; Content Analysis; Language Proficiency; Computer Assisted Testing; Test Format; Comparative Analysis; Cutting Scores; Construct Validity; English (Second Language); Second Language Learning; Accuracy; Reading Tests; Foreign Countries; Language of Instruction; College Entrance Examinations; Graduate Students; Undergraduate Students; College Preparation; Item Analysis; Turkey (Ankara); Test Content; Classification System; Language Skills; Test Development; Graduate Study |
Abstract | Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive testing (CAT). Using real data simulations, the current study investigated the classification performance of CAT on the reading section of an English language proficiency test and made comparisons with the paper-based version of the same test. Classification analysis was carried out to estimate classification accuracy (CA) and classification consistency (CC) by applying different locations and numbers of cutoff points. The results showed that classification was suitable when a single cutoff score was used, particularly for high- and low-ability test takers. Classification performance declined significantly when multiple cutoff points were simultaneously employed. Content analysis also raised important questions about construct coverage in CAT. The results highlight the potential for CAT to serve classification purposes and outline avenues for further research. (As Provided). |
Notes | SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: https://sagepub.com |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Updated | 2024/01/01 |
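The abstract describes estimating classification accuracy (CA) and classification consistency (CC) at one or more cutoff points via real-data simulation. As a hypothetical illustration only (this is not the authors' actual analysis, and all item and sample parameters below are invented assumptions), the single-cutoff case can be sketched under a 2PL IRT model: simulate two parallel administrations, estimate ability, and compare classifications against the true ability (CA) and across administrations (CC).

```python
import numpy as np

# Illustrative sketch, not the study's method: CA/CC for one cutoff
# under a 2PL IRT model with simulated responses. All parameter
# values (30 items, 2000 examinees, N(0,1) abilities) are assumptions.
rng = np.random.default_rng(0)

n_items, n_persons = 30, 2000
a = rng.uniform(0.8, 2.0, n_items)        # item discriminations
b = rng.normal(0.0, 1.0, n_items)         # item difficulties
theta = rng.normal(0.0, 1.0, n_persons)   # true examinee abilities

def p_correct(th, a, b):
    """2PL response probability: P = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(np.outer(th, a) - a * b)))

# EAP ability estimates via quadrature over a standard-normal prior
nodes = np.linspace(-4.0, 4.0, 81)
prior = np.exp(-0.5 * nodes**2)
prior /= prior.sum()

def eap(resp):
    pq = p_correct(nodes, a, b)                            # (nodes, items)
    ll = resp @ np.log(pq).T + (1 - resp) @ np.log(1 - pq).T
    post = np.exp(ll - ll.max(axis=1, keepdims=True)) * prior
    post /= post.sum(axis=1, keepdims=True)
    return post @ nodes                                    # posterior means

p = p_correct(theta, a, b)
resp1 = (rng.random(p.shape) < p).astype(float)  # administration 1
resp2 = (rng.random(p.shape) < p).astype(float)  # parallel administration 2

cut = 0.0                                        # single cutoff on theta
est1, est2 = eap(resp1), eap(resp2)
CA = np.mean((est1 >= cut) == (theta >= cut))    # agreement with true class
CC = np.mean((est1 >= cut) == (est2 >= cut))     # agreement across forms
print(f"CA = {CA:.3f}, CC = {CC:.3f}")
```

With multiple simultaneous cutoffs, each examinee must land in the same one of several bands rather than on the same side of a single line, which is why the abstract reports lower classification performance in that condition.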