Bibliographic Record - Detail View
Author(s) | Dahlkemper, Merten Nikolay; Lahme, Simon Zacharias; Klein, Pascal |
Titel | How Do Physics Students Evaluate Artificial Intelligence Responses on Comprehension Questions? A Study on the Perceived Scientific Accuracy and Linguistic Quality of ChatGPT |
Source | In: Physical Review Physics Education Research, 19 (2023) 1, Article 010142 (25 pages) |
PDF Full Text |
Additional Information | ORCID (Dahlkemper, Merten Nikolay); ORCID (Lahme, Simon Zacharias); ORCID (Klein, Pascal) |
Language | English |
Document Type | print; online; journal article |
Keywords | Physics; Science Instruction; Artificial Intelligence; Computer Software; Accuracy; Questioning Techniques; Mechanics (Physics); Difficulty Level; Undergraduate Students; Student Attitudes; Introductory Courses; Computational Linguistics; Misconceptions; Item Analysis; Comparative Analysis; Critical Thinking; Foreign Countries; German; Germany; Teaching of Science; Science Education; Natural Science Lessons; Student Behavior; Linguistics; Misunderstanding |
Abstract | This study aimed at evaluating how students perceive the linguistic quality and scientific accuracy of ChatGPT responses to physics comprehension questions. A total of 102 first- and second-year physics students were confronted with three questions of increasing difficulty from introductory mechanics (rolling motion, waves, and fluid dynamics). Each question was presented with four different responses. All responses were attributed to ChatGPT, but in reality, one sample solution was created by the researchers. All ChatGPT responses obtained in this study were wrong, imprecise, incomplete, or misleading. We found little difference in the perceived linguistic quality between the ChatGPT responses and the sample solution. However, the students rated the overall scientific accuracy of the responses significantly differently, with the sample solution being rated best for the questions of low and medium difficulty. The discrepancy between the sample solution and the ChatGPT responses increased with the level of self-assessed knowledge of the question content. For the question of highest difficulty (fluid dynamics), which was unknown to most students, a ChatGPT response was rated just as good as the sample solution. Thus, this study provides data on the students' perception of ChatGPT responses and the factors influencing their perception. The results highlight the need for careful evaluation of ChatGPT responses by both instructors and students, particularly regarding scientific accuracy. Therefore, future research could explore the potential of similar "spot the bot" activities in physics education to foster students' critical thinking skills. (As Provided.) |
Notes | American Physical Society. One Physics Ellipse 4th Floor, College Park, MD 20740-3844. Tel: 301-209-3200; Fax: 301-209-0865; e-mail: assocpub@aps.org; Web site: https://journals.aps.org/prper/ |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Last Update | 2024/01/01 |