Bibliographic Record - Detail View
Authors | Li, Xiao; Xu, Hanchen; Zhang, Jinming; Chang, Hua-hua |
---|---|
Title | Deep Reinforcement Learning for Adaptive Learning Systems |
Source | In: Journal of Educational and Behavioral Statistics, 48 (2023) 2, pp. 220-243 (24 pages) |
Full-text PDF |
Additional information | ORCID (Li, Xiao) |
Language | English |
Document type | print; online; journal article |
ISSN | 1076-9986 |
DOI | 10.3102/10769986221129847 |
Keywords | Learning Processes; Models; Algorithms; Individualized Instruction; Instructional Materials; Instructional Design; Student Characteristics; Markov Processes; Decision Making; Artificial Intelligence; Intelligent Tutoring Systems; Item Response Theory; Probability; Reinforcement; Measurement Techniques; Teaching Methods; Teaching Aids; Teaching Media; Lesson Plan; Probability Theory; Positive Reinforcement |
Abstract | The adaptive learning problem concerns how to create an individualized learning plan (also referred to as a learning policy) that chooses the most appropriate learning materials based on a learner's latent traits. In this article, we study an important yet less-addressed adaptive learning problem: one that assumes continuous latent traits. Specifically, we formulate the adaptive learning problem as a Markov decision process. We assume the latent traits to be continuous with an unknown transition model and apply a model-free deep reinforcement learning algorithm, the deep Q-learning algorithm, which can effectively find the optimal learning policy from data on learners' learning processes without knowing the actual transition model of the learners' continuous latent traits. To use the available data efficiently, we also develop a transition model estimator that emulates the learner's learning process using neural networks. The transition model estimator can be used in the deep Q-learning algorithm so that it discovers the optimal learning policy for a learner more efficiently. Numerical simulation studies verify that the proposed algorithm is very efficient in finding a good learning policy; especially with the aid of a transition model estimator, it can find the optimal learning policy after training on a small number of learners. (As Provided) |
Notes | SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: https://sagepub.com |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Updated | 2024/01/01 |
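The abstract above describes formulating adaptive learning as a Markov decision process over a learner's continuous latent trait and applying model-free Q-learning without knowing the transition model. The following is a minimal toy sketch of that idea, not the authors' implementation: the material difficulties, the hidden trait dynamics, and the use of linear function approximation (instead of the paper's deep Q-network) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical set of learning materials, identified by difficulty level.
DIFFICULTIES = np.array([0.2, 0.5, 0.8])

def step(theta, action):
    """Hidden transition model (unknown to the agent): the latent trait
    theta in [0, 1] grows most when material difficulty matches theta."""
    gain = 0.15 * np.exp(-8.0 * (DIFFICULTIES[action] - theta) ** 2)
    next_theta = min(1.0, theta + gain + rng.normal(0.0, 0.01))
    reward = next_theta - theta          # observed learning gain
    return next_theta, reward

def features(theta):
    # Polynomial features over the continuous state (no discretization).
    return np.array([1.0, theta, theta ** 2])

def q_values(w, theta):
    # One weight vector per action: Q(theta, a) = w[a] . phi(theta).
    return w @ features(theta)

def train(episodes=300, horizon=20, alpha=0.1, gamma=0.9, eps=0.2):
    """Model-free Q-learning with epsilon-greedy exploration and
    temporal-difference updates; the transition model is never queried."""
    w = np.zeros((len(DIFFICULTIES), 3))
    for _ in range(episodes):
        theta = rng.uniform(0.0, 0.3)    # start each episode with a novice
        for _ in range(horizon):
            if rng.random() < eps:
                a = int(rng.integers(len(DIFFICULTIES)))
            else:
                a = int(np.argmax(q_values(w, theta)))
            next_theta, r = step(theta, a)
            td_error = (r + gamma * np.max(q_values(w, next_theta))
                        - q_values(w, theta)[a])
            w[a] += alpha * td_error * features(theta)
            theta = next_theta
    return w

w = train()
# The greedy policy maps a learner's current trait to a material choice.
policy = [int(np.argmax(q_values(w, t))) for t in (0.1, 0.9)]
```

In the paper's setting, the linear approximator would be replaced by a deep Q-network, and the separately learned neural transition model estimator would generate additional simulated transitions so that a good policy is found from fewer real learners.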