Bibliographic record - detail view

Authors: Zhou, Guojing; Azizsoltani, Hamoon; Ausin, Markel Sanz; Barnes, Tiffany; Chi, Min
Title: Leveraging Granularity: Hierarchical Reinforcement Learning for Pedagogical Policy Induction
Source: International Journal of Artificial Intelligence in Education, 32 (2022) 2, pp. 454-500 (47 pages)
Additional information: ORCID (Zhou, Guojing)
Language: English
Document type: print; online; journal article
ISSN: 1560-4292
DOI: 10.1007/s40593-021-00269-9
Keywords: Electronic Learning; Intelligent Tutoring Systems; Decision Making; Problem Solving; Reinforcement; Educational Policy
Abstract: In interactive e-learning environments such as Intelligent Tutoring Systems, pedagogical decisions can be made at different levels of granularity. In this work, we focus on making decisions at "two levels": whole problems vs. single steps and explore three types of granularity: "problem-level only" ("Prob-Only"), "step-level only" ("Step-Only") and "both problem and step levels" ("Both"). More specifically, for Prob-Only, our pedagogical agency decides whether the next problem should be a worked example (WE) or a problem-solving (PS). In WEs, students observe how the tutor solves a problem while in PSs students solve the problem themselves. For Step-Only, the agent decides whether to elicit the student's next solution step or to tell the step directly. Here the student and the tutor "co-construct" the solution and we refer to this type of task as collaborative problem-solving (CPS). For Both, the agency first decides whether the next problem should be a WE, a PS, or a CPS and based on the problem-level decision, the agent then makes step-level decisions on whether to elicit or tell each step. In a series of classroom studies, we compare the three types of granularity under random yet reasonable pedagogical decisions. Results showed that while Prob-Only may be less effective for High students, Step-Only may be less effective for Low ones, Both can be effective for both High and Low students. Motivated by these findings, we propose and apply an offline, off-policy Gaussian Processes based "Hierarchical Reinforcement Learning" ("HRL") framework to induce a "hierarchical pedagogical policy" that makes adaptive, effective decisions at both the problem and step levels. In an empirical classroom study, our results showed that the HRL policy is significantly more effective than a Deep Q-Network (DQN) induced step-level policy and a random "yet reasonable" step-level baseline policy. (As Provided).
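The abstract's two-level decision structure can be illustrated with a minimal sketch: a top-level policy chooses the problem type (WE, PS, or CPS), and only a CPS triggers per-step elicit/tell decisions. This is a toy illustration under assumptions, not the paper's GP-based HRL method; the function names and the random decision rule stand in for the induced policies.

```python
import random

# Hypothetical action sets taken from the abstract's terminology.
PROBLEM_ACTIONS = ["WE", "PS", "CPS"]   # worked example / problem solving / collaborative PS
STEP_ACTIONS = ["elicit", "tell"]

def problem_level_policy(state, rng):
    """Top-level decision: type of the next problem (placeholder: random)."""
    return rng.choice(PROBLEM_ACTIONS)

def step_level_policy(state, rng):
    """Low-level decision: elicit the step from the student, or tell it."""
    return rng.choice(STEP_ACTIONS)

def run_problem(state, n_steps, rng):
    """One hierarchical episode: a problem-level choice, then step-level choices.

    WE fixes every step to 'tell', PS fixes every step to 'elicit'; only CPS
    delegates each step to the step-level policy, as described in the abstract.
    """
    problem_action = problem_level_policy(state, rng)
    if problem_action == "WE":
        steps = ["tell"] * n_steps
    elif problem_action == "PS":
        steps = ["elicit"] * n_steps
    else:  # CPS
        steps = [step_level_policy(state, rng) for _ in range(n_steps)]
    return problem_action, steps
```

In the paper's framework the two placeholder policies would instead be induced offline from logged student data; the sketch only shows how the problem-level choice constrains the step-level decisions.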
Notes: Springer. Available from: Springer Nature. One New York Plaza, Suite 4600, New York, NY 10004. Tel: 800-777-4643; Tel: 212-460-1500; Fax: 212-460-1700; e-mail: customerservice@springernature.com; Web site: https://link.springer.com/
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Last update: 2024/01/01