
Bibliographic Record - Detail View

Authors: Han, Chao; Lu, Xiaolei
Title: Can Automated Machine Translation Evaluation Metrics Be Used to Assess Students' Interpretation in the Language Learning Classroom?
Source: In: Computer Assisted Language Learning, 36 (2023) 5-6, pp. 1064-1087 (24 pages)
Full text: PDF available
Additional information: ORCID (Han, Chao); ORCID (Lu, Xiaolei)
Language: English
Document type: print; online; journal article
ISSN: 0958-8221
DOI: 10.1080/09588221.2021.1968915
Keywords: Translation; Computational Linguistics; Correlation; Language Processing; Second Languages; Language Usage; Cultural Awareness; Intercultural Communication; Evaluation Methods; Computer Software; Student Evaluation; Artificial Intelligence; Scoring; Evaluators; Second Language Learning; Second Language Instruction; Bilingualism; Chinese; English (Second Language); Foreign Countries; Undergraduate Students; Majors (Students); Language Tests; China
Abstract: The use of translation and interpreting (T&I) in the language learning classroom is commonplace, serving various pedagogical and assessment purposes. Previous utilization of T&I exercises has been driven largely by their potential to enhance language learning, whereas the latest trend has begun to underscore T&I as a crucial skill to be acquired as part of transcultural competence for language learners and future language users. Despite their growing popularity and utility in the language learning classroom, assessing T&I is time-consuming, labor-intensive and cognitively taxing for human raters (e.g., language teachers), primarily because T&I assessment entails meticulous evaluation of informational equivalence between the source-language message and target-language renditions. One possible solution is to rely on automated quality metrics that were originally developed to evaluate machine translation (MT). In the current study, we investigated the viability of using four automated MT evaluation metrics, BLEU, NIST, METEOR and TER, to assess human interpretation. Essentially, we correlated the automated metric scores with the human-assigned scores (i.e., the criterion measure) from multiple assessment scenarios to examine the degree of "machine-human parity." Overall, we observed fairly strong metric-human correlations for BLEU (Pearson's r = 0.670), NIST (r = 0.673) and METEOR (r = 0.882), especially when the metric computation was conducted on the sentence level rather than the text level. We discussed these emerging findings and others in relation to the feasibility of operationalizing MT metrics to evaluate students' interpretation in the language learning classroom. (As Provided).
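The study's core procedure — scoring each rendition against a reference with an automated MT metric at the sentence level, then correlating those scores with human ratings via Pearson's r — can be sketched with a stdlib-only toy example. The simplified BLEU below (unigram/bigram precision with +1 smoothing and a brevity penalty) and all data are illustrative assumptions, not the actual metrics, implementations, or materials used in the paper; a real replication would use standard tools such as sacrebleu or METEOR.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) with +1 smoothing, times a
    brevity penalty for hypotheses shorter than the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        hyp_counts = Counter(ngrams(hyp, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(log_prec)

def pearson_r(xs, ys):
    """Pearson correlation between metric scores and human scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: reference translations, student renditions, and
# hypothetical human-assigned scores on a 1-5 scale.
refs = ["the meeting was postponed until friday",
        "she presented the quarterly results",
        "the delegation arrived late"]
hyps = ["the meeting was delayed until friday",
        "she showed quarter results",
        "delegation arrived very late"]
human = [4.5, 2.5, 3.0]

metric = [sentence_bleu(r, h) for r, h in zip(refs, hyps)]
print(round(pearson_r(metric, human), 3))
```

Correlating per-sentence scores (rather than one score per whole text) mirrors the paper's finding that sentence-level computation yielded the stronger metric-human correlations.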
Notes: Routledge. Available from: Taylor & Francis, Ltd. 530 Walnut Street Suite 850, Philadelphia, PA 19106. Tel: 800-354-1420; Tel: 215-625-8900; Fax: 215-207-0050; Web site: http://www.tandf.co.uk/journals
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2024/1/01