
Bibliographic Record - Detail View

 
Authors: Nikolic, Sasha; Daniel, Scott; Haque, Rezwanul; Belkina, Marina; Hassan, Ghulam M.; Grundy, Sarah; Lyden, Sarah; Neal, Peter; Sandison, Caz
Title: ChatGPT versus Engineering Education Assessment: A Multidisciplinary and Multi-Institutional Benchmarking and Analysis of This Generative Artificial Intelligence Tool to Investigate Assessment Integrity
Source: In: European Journal of Engineering Education, 48 (2023) 4, pp. 559-614 (56 pages)
Full text: PDF available online
Additional information: ORCID (Nikolic, Sasha); ORCID (Daniel, Scott); ORCID (Haque, Rezwanul); ORCID (Hassan, Ghulam M.); ORCID (Lyden, Sarah); ORCID (Neal, Peter); ORCID (Sandison, Caz)
Language: English
Document type: print; online; journal article
ISSN: 0304-3797
DOI: 10.1080/03043797.2023.2213169
Keywords: Artificial Intelligence; Performance Based Assessment; Engineering Education; Integrity; Prompting; Universities; Foreign Countries; Benchmarking; Interdisciplinary Approach; Evaluation Methods; Undergraduate Students; Computer Assisted Testing; Writing Evaluation; Australia
Abstract: ChatGPT, a sophisticated online chatbot, sent shockwaves through many sectors once reports filtered through that it could pass exams. In higher education, it has raised many questions about the authenticity of assessment and challenges in detecting plagiarism. Amongst the resulting frenetic hubbub, hints of potential opportunities in how ChatGPT could support learning and the development of critical thinking have also emerged. In this paper, we examine how ChatGPT may affect assessment in engineering education by exploring ChatGPT responses to existing assessment prompts from ten subjects across seven Australian universities. We explore the strengths and weaknesses of current assessment practice and discuss opportunities on how ChatGPT can be used to facilitate learning. As artificial intelligence is rapidly improving, this analysis sets a benchmark for ChatGPT's performance as of early 2023 in responding to engineering education assessment prompts. ChatGPT did pass some subjects and excelled with some assessment types. Findings suggest that changes in current practice are needed, as typically with little modification to the input prompts, ChatGPT could generate passable responses to many of the assessments, and it is only going to get better as future versions are trained on larger data sets. (As Provided).
Notes: Taylor & Francis. Available from: Taylor & Francis, Ltd. 530 Walnut Street Suite 850, Philadelphia, PA 19106. Tel: 800-354-1420; Tel: 215-625-8900; Fax: 215-207-0050; Web site: http://www.tandf.co.uk/journals
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Last updated: 2024/1/01
