Bibliographic Record - Detail View
Author(s) | Zhou, Guojing; Wang, Jianxun; Lynch, Collin F.; Chi, Min |
---|---|
Title | Towards Closing the Loop: Bridging Machine-Induced Pedagogical Policies to Learning Theories [Conference paper]. Paper presented at the International Conference on Educational Data Mining (EDM) (10th, Wuhan, China, Jun 25-28, 2017). |
Source | (2017), (8 pages) |
Full-text PDF |
Language | English |
Document type | print; online; monograph |
Keywords | Learning Theories; Teaching Methods; Decision Making; Intelligent Tutoring Systems; Classroom Research; Comparative Analysis; Markov Processes; Educational Policy; Grading; Evaluation Criteria; Homework; Mathematics Instruction; Data Analysis; College Students; Problem Solving; Pretests Posttests; North Carolina; School Grades |
Abstract | In this study, we applied decision trees (DT) to extract a compact set of pedagogical decision-making rules from an original "full" set of 3,702 Reinforcement Learning (RL)-induced rules, referred to as the DT-RL rules and Full-RL rules respectively. We then evaluated the effectiveness of the two rule sets against a baseline Random condition in which the tutor made random yet reasonable decisions. We explored two types of trees (weighted and unweighted) as well as two pruning strategies (pre- and post-pruning). We found that post-pruned weighted trees produced the best results, with 529 DT-RL rules. The empirical evaluation was conducted in a classroom study using an existing Intelligent Tutoring System (ITS) named Pyrenees. 153 students were randomly assigned to three conditions. The procedure was the same for all students, with domain content and required steps strictly controlled. The only substantive difference between the three conditions was the policy (Full-RL vs. DT-RL vs. Random). Our results showed that, as expected, the machine-induced policies (Full-RL and DT-RL) were significantly more effective than the random policy; more importantly, no significant difference was found between the Full-RL and DT-RL policies, even though the number of DT-RL rules was less than 15% of the number of Full-RL rules and the former group also took significantly less time than the latter. [For the full proceedings, see ED596512.] (As Provided) |
Notes | International Educational Data Mining Society. E-mail: admin@educationaldatamining.org; Web site: http://www.educationaldatamining.org |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2020/01/01 |
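The rule-extraction step the abstract describes (compressing thousands of RL-induced pedagogical decisions into a compact, pruned decision tree) can be sketched as below. This is a minimal illustration under assumed names and synthetic data, not the paper's actual implementation: scikit-learn's `sample_weight` and `ccp_alpha` parameters stand in, roughly, for the paper's "weighted" trees and "post-pruning".

```python
# Hypothetical sketch: distill a large RL-induced policy (state -> action
# pairs) into a compact decision tree, as in policy compression.
# All data here is synthetic; feature meanings are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy stand-in for the Full-RL rule set: each row is a tutor state
# (feature vector), each label the RL policy's decision for that state
# (e.g. 0 = tell the student, 1 = elicit from the student).
states = rng.random((3702, 8))
actions = (states[:, 0] + states[:, 3] > 1.0).astype(int)

# "Weighted" variant: weight each state, e.g. by how often it occurs,
# so the tree prioritizes agreement on frequent states.
weights = rng.random(3702)

# Post-pruning via minimal cost-complexity pruning (ccp_alpha > 0
# trades leaf count against training agreement).
tree = DecisionTreeClassifier(ccp_alpha=0.001, random_state=0)
tree.fit(states, actions, sample_weight=weights)

# Each leaf of the fitted tree is one compact DT-RL rule.
print("DT-RL rules (leaves):", tree.get_n_leaves())
print("agreement with Full-RL policy:", tree.score(states, actions))
```

The fitted tree typically ends with far fewer rules than the original state-action table while agreeing with the source policy on most states, which mirrors the paper's finding that a much smaller rule set can perform comparably.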