Abstract

Large language models such as ChatGPT have the potential to transform educational practice across many domains. However, their easy accessibility can also inadvertently encourage academic dishonesty. In practical courses such as programming, where hands-on experience is essential for learning, relying solely on ChatGPT can keep students from engaging with the exercises and thus from achieving the intended learning outcomes. This paper presents an experimental analysis of GPT-3.5 and GPT-4, assessing their capabilities and limitations in solving a set of 22 programming exercises. We identify and categorize the exercises for which ChatGPT produces viable solutions and those it fails to solve, and we evaluate how easily its proposed solutions can be adapted. Based on these findings, we propose a set of recommendations for reducing over-reliance on ChatGPT and fostering genuine programming competence. The effectiveness of these recommendations is demonstrated through their integration into the design and delivery of an examination in the corresponding course. © 2023 IEEE.