The development of AI bots with the ability to write plagiarism-free undergraduate assignments has thrown many academics into panic mode and created a flood of articles on the topic. However, all this anxiety about ChatGPT seems to be misdirected.
The problem here is not detecting and proving that submitted work is not a student’s own. Neither is it designing bespoke assignments that cannot be carried out by artificial intelligence. The scandal that should be grabbing the headlines is the fact that for a generation we have been training our undergraduates to be nothing more than AI bots themselves; this is why it is not possible to tell their work apart.
For a long time, quality-assurance speak in higher education has been dominated by the language of Bloom’s taxonomy. The higher levels of learning associated with “graduateness” have been simplistically termed evaluation, synthesis and analysis and have been measured by assessing a student’s ability to “compare and contrast” or to “discuss the advantages of…” or “analyse the impact of…”
But none of this requires original thought. It is more a case of sifting through other people’s learning, ideas and thoughts. The student doesn’t even need to understand any of the information they are regurgitating any more than the AI bot does.
Moreover, this collating task is only getting easier as the internet makes an almost infinite amount of predigested information available. Students and bots alike can now give the impression of evaluating and analysing because virtually every piece of evaluation and analysis has already been carried out by multiple authors. All the task entails is locating the information (which is now trivial), paraphrasing it and perhaps regurgitating it under exam conditions.
To be fair, it was not always so easy. Historically, the ability to rationally collate and summarise information in an original form of words was a high-level skill. And, of course, students will always need to learn to synthesise and critique knowledge; we still teach basic maths even though we have easy access to calculators. But just as mental arithmetic is no longer a particularly marketable skill, neither will synthesis and critique be – particularly because, as with calculators, the machines, with their superior ability to wade through data, will be considerably better at it.
During my career in higher education, I have encountered this problem in all the institutions I have interacted with, in the UK and abroad. There are few occasions on which undergraduates are required to truly demonstrate understanding – and those remaining occasions often result in disappointment.
One such example comes in the final paragraphs of final-year projects, when students are asked: “How might your own research be improved?” Sadly, the standard response to this question is: “I need to repeat the study with more observations.” Although this may sometimes be a valid answer, it typically misses the point because often there was, in reality, no effect of A on B; any relationship between them is one of correlation rather than causation. Collecting more measurements will not change that fact; it will simply yield greater confidence that A does not influence B. Yet many students seem to think that, had they collected more data, A would miraculously have altered the outcome B.
Such cases illustrate that students who otherwise appear to have all the cognitive abilities expected of a graduate have gaping holes in their skill sets when they are required to solve a problem whose answer is not already online multiple times.
This is, of course, not the student’s fault. It is our fault as academics. Perhaps I should have at least raised the point at external examiners’ meetings. But critical friends have boundaries, and this debate needs to occur at a higher level.
Advancements in AI offer us an opportunity to stop focusing on teaching how to solve problems that have already been answered and put more emphasis on how to recognise and tackle those problems remaining. We should relish that opportunity, not run scared from it.
John Warren is emeritus vice-chancellor of the Papua New Guinea University of Natural Resources and Environment.