
Literature Record - Detail View

 
Authors: Yunjiu, Luo; Wei, Wei; Zheng, Ying
Title: Artificial Intelligence-Generated and Human Expert-Designed Vocabulary Tests: A Comparative Study
Source: In: SAGE Open, 12 (2022) 1, (12 pages)
Full text: PDF
Additional information: ORCID (Yunjiu, Luo); ORCID (Wei, Wei)
Language: English
Document type: print; online; journal article
DOI: 10.1177/21582440221082130
Keywords: Chinese; Vocabulary Development; Artificial Intelligence; Undergraduate Students; Language Tests; Test Preparation; Protocol Analysis; Specialists; Test Items; Difficulty Level; Item Response Theory; Semantics; Second Language Learning; Second Language Instruction; Test Construction; Phrase Structure; Comparative Analysis; Scores; Student Attitudes; Memorization; Construct Validity; Foreign Countries; China (Beijing)
Abstract: Artificial intelligence (AI) technologies have the potential to reduce the workload for second language (L2) teachers and test developers. We propose two AI distractor-generating methods for creating Chinese vocabulary items: semantic similarity and visual similarity. Semantic similarity refers to antonyms and synonyms, while visual similarity refers to the phenomenon that two phrases share one or more characters in common. This study explores the construct validity of the two types of selected-response vocabulary tests (AI-generated items and human expert-designed items) and compares their item difficulty and item discrimination. Both quantitative and qualitative data were collected. Seventy-eight students from Beijing Language and Culture University were asked to respond to both the AI-generated and the human expert-designed items. Students' scores were analyzed using the two-parameter item response theory (2PL-IRT) model. Thirteen students were then invited to report their test-taking strategies in a think-aloud session. The students' item responses revealed that the human expert-designed items were easier but had more discriminating power than the AI-generated items. The think-aloud data indicated that the AI-generated items and the expert-designed items might assess different constructs: the former elicited test takers' bottom-up test-taking strategies, while the latter seemed more likely to trigger test takers' rote memorization ability. (As Provided).
Notes: SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: https://sagepub.com
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2024/01/01