Bibliographic Record - Detail View
| Field | Value |
| --- | --- |
| Author(s) | Ivan D. Mardini G.; Christian G. Quintero M.; César A. Viloria N.; Winston S. Percybrooks B.; Heydy S. Robles N.; Karen Villalba R. |
| Title | A Deep-Learning-Based Grading System (ASAG) for Reading Comprehension Assessment by Using Aphorisms as Open-Answer-Questions |
| Source | In: Education and Information Technologies, 29 (2024) 4, pp. 4565-4590 |
| Full text | PDF |
| Additional information | ORCID (Ivan D. Mardini G.); resource linked as data source |
| Language | English |
| Document type | print; online; journal article |
| ISSN | 1360-2357 |
| DOI | 10.1007/s10639-023-11890-7 |
| Keywords | Research report; Reading Comprehension; Reading Tests; Learning Strategies; Grading; Test Format; Figurative Language; Questioning Techniques; Undergraduate Students; Second Language Learning; Spanish; Technology Uses in Education; Automation; Educational Technology; Reading comprehension; Reading test; Learning method; Learning techniques; Learning strategy; Grading; School grades; Test development; Questioning technique; Question technique; Second language acquisition; Spanish; Technology enhanced learning; Technology aided learning; Technology-supported learning; Instructional media |
| Abstract | Today, reading comprehension is considered an essential skill in modern life; therefore, higher education students require more specific skills to understand, interpret, and evaluate texts effectively. Short answer questions (SAQs) are one of the relevant and appropriate tools for assessing reading comprehension skills. Unlike multiple-choice questions, SAQs allow for the assessment of cognitive abilities such as attention, language, perception, and problem solving. However, the task of scoring SAQs is time-consuming and susceptible to ambiguity. Automatic Short Answer Grading (ASAG) is a new paradigm that could help solve these problems. This experimental analysis aims to implement ASAG using several approaches to sentence embedding based on deep learning, with a multilayer perceptron regression layer on top, trained with a reading comprehension dataset based on aphorisms. For experimental testing, the available dataset is composed of answers given by 199 undergraduate students in Spanish. BERT and Skip-Thought models are tested with different hyperparameters to find the best performance in terms of Pearson correlation coefficient and RMSE against human experts' grades. The results of the current study showed that the BERT model performed better than the other approaches. (As Provided). A minimal, illustrative code sketch of this kind of embedding-plus-regression pipeline follows the record below. |
| Notes | Springer. Available from: Springer Nature, One New York Plaza, Suite 4600, New York, NY 10004. Tel: 800-777-4643; Tel: 212-460-1500; Fax: 212-460-1700; e-mail: customerservice@springernature.com; Web site: https://link.springer.com/ |
| Review status | Peer reviewed |
| Indexed by | ERIC (Education Resources Information Center), Washington, DC |
| Last update | 2025/02/06 |
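
The abstract describes an ASAG pipeline built from sentence embeddings with a multilayer perceptron regression head, evaluated against human graders using the Pearson correlation coefficient and RMSE. The sketch below is a minimal, hypothetical illustration of that general approach, not the authors' implementation: the embedding model name, the toy answers and grades, and all hyperparameters are placeholder assumptions.

```python
# Hypothetical ASAG sketch: sentence embeddings of student answers feed an MLP
# regression head trained to predict human-assigned grades; evaluation uses
# Pearson r and RMSE, as in the abstract. Data and model choices are toy stand-ins.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from scipy.stats import pearsonr

# Toy stand-in data: short answers in Spanish and human grades on a 0-5 scale.
answers = [
    "El aforismo sugiere que la experiencia es la mejor maestra.",
    "Habla de animales.",
    "Significa que aprendemos mas de los errores que de los exitos.",
    "No entiendo la frase.",
] * 25  # repeat to get ~100 samples for this illustration
grades = np.array([4.5, 1.0, 5.0, 0.5] * 25)

# 1) Encode each answer into a fixed-length sentence embedding.
#    A multilingual encoder is assumed here because the answers are in Spanish.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
X = encoder.encode(answers)

# 2) Train an MLP regression head on top of the frozen embeddings.
X_train, X_test, y_train, y_test = train_test_split(
    X, grades, test_size=0.2, random_state=42
)
mlp = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=1000, random_state=42)
mlp.fit(X_train, y_train)

# 3) Compare predicted grades with the human grades: Pearson r and RMSE.
pred = mlp.predict(X_test)
r, _ = pearsonr(pred, y_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"Pearson r = {r:.3f}, RMSE = {rmse:.3f}")
```

In the study summarized above, the dimension being varied is the encoder and its hyperparameters (BERT versus Skip-Thought); in a sketch like this, that corresponds to swapping the embedding model while keeping the regression head and the Pearson/RMSE evaluation fixed.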