Three months after the release of ChatGPT, academia is still debating its implications for teaching, learning and assessment. The conversation seems to be dominated by the dangers the tool poses to academic integrity. It is ironic that one of the landmark technological advancements of our time, a program that many believe has passed the Turing Test, is labelled a rogue tool that helps students cheat. As educators, shouldn’t we start by celebrating a remarkable new development and then talk about how we can leverage this innovation for better teaching and learning?
ChatGPT is a chatbot developed by OpenAI, built on a family of large language models known as generative pre-trained transformers (GPT); its training relies on transfer learning, in which knowledge gained from solving one problem is applied to a related but different problem. It has been freely available since November 2022 and quickly became popular thanks to its high-quality responses and its ability to ‘converse’ on a wide range of topics.
These ‘conversations’ may lead to assessment work such as essays, presentations, dissertations and source code being generated by ChatGPT or similar tools. Because these tools generate text by sampling rather than by retrieving stored answers, the same question posed to the tool produces a different output each time. Assessment work generated by ChatGPT-like tools can therefore be difficult to identify as machine-generated.
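For readers who want to see this non-determinism for themselves, the sketch below is a minimal illustration, assuming the openai Python package (pre-1.0 interface), a placeholder API key and a made-up prompt. It submits the same question three times with a non-zero sampling temperature; each run typically returns a different, equally fluent answer, which is part of why detection is so unreliable.

    # Minimal sketch: the same prompt, submitted three times, rarely yields
    # the same text, because the model samples its next word probabilistically.
    # Assumes the openai Python package (pre-1.0 interface) and a valid API key.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

    prompt = "Write a 100-word introduction to photosynthesis."  # illustrative prompt

    for attempt in range(3):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.8,  # non-zero temperature enables varied sampling
        )
        print(f"--- Attempt {attempt + 1} ---")
        print(response.choices[0].message.content)

Setting the temperature to zero makes the output far more repeatable, but students have no reason to do so, and even then a small change in the wording of the question changes the answer.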
It is heartening to see that the Quality Assurance Agency’s briefing paper, ‘The rise of artificial intelligence software and potential risks for academic integrity’, released on 30 January 2023, lists communicating with students about the implications of ChatGPT-like tools as the first course of action. Students are not naturally inclined to cheat. The bitter truth is that students are driven to cheat by educators and by an education process that fails to equip them with the knowledge, skills and attitudes on which they are assessed.
Even before ChatGPT entered the picture, educators were grasping the wrong end of the stick by focusing on plagiarism detection with tools such as Turnitin rather than on ensuring students are ready to face assessments; this is akin to penalizing errant drivers without giving them adequate driving lessons. Preparing students to face assessments necessarily involves a conversation about the importance of making an honest, genuine effort; the role of attribution in giving credit where it is due; and the way dishonest practices undermine not only their learning but also the confidence society places in the education system as a whole. These conversations should not be relegated to the introduction of an academic integrity policy, and they are as important as conversations about the limitations of ChatGPT-like tools.
Another positive discussion revolves around designing authentic and innovative forms of assessment that obviate the need to detect text generated by ChatGPT-like tools. Assessments become authentic when students complete a complex task they would encounter in a real-world setting, or one set in a localized context they can relate to. That localized context may even be a classroom discussion; ChatGPT does not know what happens in our classrooms. There is also discussion around shifting the focus of assessment from the end product to the learning process, or the learning gain achieved while developing the end product: rather than assessing an essay, assess the process of ideating, outlining, drafting, reviewing and revising it. Others suggest building into assessments ways for students to demonstrate their unique style, interests and worldview. These are positive discussions that signal the death of rote assessment. ChatGPT may well be the nudge we needed to finally get our assessments right.
Finally, the bravest conversations advocate using ChatGPT-like tools within the learning process, including in the assessment itself. Students using ChatGPT-like tools as a sounding board to gain an initial understanding of a topic may be a reality we have to live with; it is one evolutionary step beyond using Wikipedia to get an initial idea of a complex topic. In the case of ChatGPT, however, students must be mindful that it was trained on data only up to 2021, that it can be wrong, and that it can generate seemingly plausible but nonsensical answers. Having to gauge the answers they receive from ChatGPT is better for learning than passively reading and accepting everything as fact. Educators could ask students to compare their own answers with those generated by ChatGPT-like tools; indeed, such a comparison could itself be the basis for an assessment.
ChatGPT has opened a world of opportunities for educators. Let us embrace ChatGPT-like tools as teaching tools and use their advent to make a quantum leap in teaching, learning and assessment methods.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of SLANSHEI or its members.