Abstract

Using ChatGPT in education presents challenges for evaluating students: it requires distinguishing original ideas from model-generated ones, assessing critical thinking, and gauging subject mastery accurately, all of which affect fair assessment practices. The Human-Computer Interaction (HCI) course described in this experience report has allowed consultation of textbooks, slides, and other materials during exams for over five years. This report reflects on allowing ChatGPT as a source of consultation in a written HCI exam in 2023. It analyses the types of questions ChatGPT was able to solve immediately, without mediation, and the types of questions that could benefit from ChatGPT's assistance without compromising the assessment of the higher-level learning outcomes professors aim to evaluate in teaching HCI. Using Bloom's taxonomy, the paper examines the different questions and abilities being assessed and whether they can be answered solely with ChatGPT. It also discusses questions that require mediation, prior lived experience in class, and understanding of the knowledge acquired in class, and that therefore cannot be answered by simply copying and pasting them into ChatGPT. These discussions can prompt reflection on the learning outcomes that can be assessed in written HCI exams and on how professors should revisit their experiences and expectations for exams in the age of increasingly capable generative artificial intelligence resources.