Bibliographic record - detail view
Authors | Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. |
---|---|
Title | Automated Summarization Evaluation (ASE) Using Natural Language Processing Tools |
Source | (2019), (14 pages) |
Language | English |
Document type | print; online; monograph |
Keywords | Automation; Writing Evaluation; Natural Language Processing; Artificial Intelligence; Accuracy; Classification; Scoring |
Abstract | Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However, these models often rely on features derived from expert ratings of student summarizations of specific source texts and are therefore not generalizable to summarizations of new texts. Further, many of the models rely on proprietary tools that are not freely or publicly available, rendering replications difficult. In this study, we introduce an automated summarization evaluation (ASE) model that depends strictly on features of the source text or the summary, allowing for a purely text-based model of quality. This model effectively classifies summaries as either low or high quality with an accuracy above 80%. Importantly, the model was developed on a large number of source texts, allowing for generalizability across texts. Further, the features used in this study are freely and publicly available, affording replication. [This paper was published in: S. Isotani et al. (Eds.), "AIED 2019" (pp. 84-95). Switzerland: Springer.] (As Provided). |
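The abstract describes a classifier that labels a summary as low or high quality using only features computable from the source text and the summary. As a minimal illustrative sketch (the feature set, weights, and threshold below are hypothetical, not the authors' actual trained model), such a purely text-based approach can be approximated with a few surface features and a linear scorer:

```python
# Minimal sketch of a purely text-based summary-quality classifier.
# The features (overlap, compression, lexical diversity) and the linear
# weights are illustrative assumptions, NOT the paper's actual model,
# which was trained on a large corpus of rated summaries.

def tokenize(text):
    """Lowercase word tokens with simple punctuation stripping."""
    words = (w.strip(".,;:!?\"'()").lower() for w in text.split())
    return [w for w in words if w]

def features(source, summary):
    src, summ = tokenize(source), tokenize(summary)
    # Share of summary word types that also appear in the source.
    overlap = len(set(src) & set(summ)) / max(len(set(summ)), 1)
    # Length ratio: summaries should be shorter than the source.
    compression = len(summ) / max(len(src), 1)
    # Type-token ratio of the summary (lexical diversity).
    ttr = len(set(summ)) / max(len(summ), 1)
    return (overlap, compression, ttr)

def classify(source, summary, weights=(2.0, -1.0, 1.0), bias=-1.0):
    """Hypothetical linear scorer; a real model would learn the weights."""
    score = bias + sum(w * f for w, f in zip(weights, features(source, summary)))
    return "high" if score > 0 else "low"
```

In the actual study, features like these would be extracted by freely available NLP tools and the classifier fit against expert quality ratings; the point of the sketch is only that every input is derived from the texts themselves, which is what makes the approach generalizable and replicable.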
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Updated | 2024/01/01 |