Bibliographic record - detail view
Authors | Tack, Anaïs; Piech, Chris |
---|---|
Title | The AI Teacher Test: Measuring the Pedagogical Ability of Blender and GPT-3 in Educational Dialogues [Conference paper] Paper presented at the International Conference on Educational Data Mining (EDM) (15th, Durham, United Kingdom, Jul 24-27, 2022). |
Source | (2022), (8 pages) |
Full-text PDF |
Language | English |
Document type | print; online; monograph |
Keywords | Artificial Intelligence; Dialogs (Language); Bayesian Statistics; Decision Making; Reliability; Models; Teacher Student Relationship; Intelligent Tutoring Systems; Test Construction; Evaluation Methods; Teaching Methods; Comparative Analysis; Computer Software; Task Analysis; Interrater Reliability; Scores; Computer Simulation |
Abstract | How can we test whether state-of-the-art generative models, such as Blender and GPT-3, are good AI teachers, capable of replying to a student in an educational dialogue? Designing an AI teacher test is challenging: although evaluation methods are much-needed, there is no off-the-shelf solution to measuring pedagogical ability. This paper reports on a first attempt at an AI teacher test. We built a solution around the insight that you can run conversational agents in parallel to human teachers in real-world dialogues, simulate how different agents would respond to a student, and compare these counterpart responses in terms of three abilities: speak like a teacher, understand a student, help a student. Our method builds on the reliability of comparative judgments in education and uses a probabilistic model and Bayesian sampling to infer estimates of pedagogical ability. We find that, even though conversational agents (Blender in particular) perform well on conversational uptake, they are quantifiably worse than real teachers on several pedagogical dimensions, especially with regard to helpfulness (Blender: Δ ability = -0.75; GPT-3: Δ ability = -0.93). [For the full proceedings, see ED623995.] (As Provided). |
Notes | International Educational Data Mining Society. e-mail: admin@educationaldatamining.org; Web site: https://educationaldatamining.org/conferences/ |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Updated | 2024/01/01 |
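The abstract's method, inferring ability estimates from comparative judgments, follows the general shape of a Bradley-Terry-style pairwise-comparison model: raters repeatedly judge which of two responses is better, and latent abilities are inferred from the judgment outcomes. The paper itself uses a probabilistic model with Bayesian sampling; the sketch below is a simplified, hypothetical illustration (not the authors' implementation) that recovers abilities by MAP gradient ascent from simulated judgments. The agent names, ability values, and fitting procedure are all illustrative assumptions; only the ability gaps mirror the Δ values quoted in the abstract.

```python
import math
import random

def fit_bradley_terry(comparisons, n_agents, lr=0.005, steps=2000):
    """MAP-estimate latent abilities from pairwise judgments.

    comparisons: list of (winner, loser) index pairs.
    Model: P(i beats j) = sigmoid(theta[i] - theta[j]); a Gaussian
    prior (mean 0, variance 10) keeps the abilities identifiable.
    """
    theta = [0.0] * n_agents
    for _ in range(steps):
        grad = [-t / 10.0 for t in theta]  # gradient of the Gaussian log-prior
        for w, l in comparisons:
            p = 1.0 / (1.0 + math.exp(-(theta[w] - theta[l])))
            grad[w] += 1.0 - p  # winner's ability pushed up
            grad[l] -= 1.0 - p  # loser's ability pushed down
        theta = [t + lr * g for t, g in zip(theta, grad)]
    return theta

# Simulated raters compare responses from a human teacher and two
# hypothetical agents; the true ability gaps below (-0.75 and -0.93)
# mirror the helpfulness deltas reported in the abstract.
random.seed(0)
true_ability = [0.93, 0.18, 0.0]  # teacher, "Blender", "GPT-3"
judgments = []
for _ in range(500):
    i, j = random.sample(range(3), 2)
    p_win = 1.0 / (1.0 + math.exp(-(true_ability[i] - true_ability[j])))
    judgments.append((i, j) if random.random() < p_win else (j, i))

theta = fit_bradley_terry(judgments, 3)
delta_blender = theta[1] - theta[0]  # should come out clearly negative
delta_gpt3 = theta[2] - theta[0]
```

Under this sketch, `delta_blender` and `delta_gpt3` play the role of the reported Δ-ability values; the paper instead draws full posteriors via Bayesian sampling, which additionally quantifies the uncertainty of each estimate.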