It is little comfort to educators that chatbots are not real artificial intelligence. Even though the likes of ChatGPT know nothing in any meaningful sense, their ability to select the next words in a sequence allows them to spoof just about any kind of intellectual labour with which academics might task their students: not only essays but also outlining, summarising, reviewing literature, developing a presentation or coming up with a topic or lists of sources or questions.
As faculty and administrators scramble to make sense of these tools and their implications for teaching, many students have taken the policy vacuum as licence to do as they please. And some scholars have given their blessing, reasoning that bot-authored text is impossible to detect and that students will need to learn to use this technology in the workplace anyway.
But chatbots are fundamentally anti-scholarly. The connections between their statements and knowledge sources are often completely obscured. They’re unreliable, in some fields achieving only about 20 per cent “correctness”. And their results aren’t repeatable – different runs produce different outputs.
Don’t even get me started on their ethical and political degeneracy. They’re highly profitable commercial services completely dependent on untold quantities of data gleaned from the web without permission, notification or disclosure – including much of scholars’ own hard work (lawsuits have been initiated). If you require students to use them, they must submit data to a company that has not guaranteed data privacy. And even if you don’t, students may be giving them your course materials, such as lecture transcripts.
Chatbots also portend equity issues as some students might not be able to afford the better, more expensive versions. And some students are angry because they sense a race to the bottom – if they don’t use the bots, they will lose, they think. Tension is stoked by conflicting messaging from different instructors. An ethical morass results.
Perhaps there are valid limited uses of the bots, and, yes, they are becoming important in some kinds of professional work. But we can train students on these with mini courses. We don’t need a free-for-all.
My approach – developed while teaching five lecture sections since last May, involving about 500 students combined – is, first, to talk to the students about the importance of academic integrity for both the institution and themselves. Most students never hear these arguments, but they have an intuitive sense that widespread cheating could render their diplomas meaningless since the purpose of assignments is to develop intellectual skills and knowledge: the process, not the product, is the point. I liken a student using ChatGPT to an athlete hiring someone to do their workout for them.
Next, I review some of the epistemic, ethical and political issues with these services. I emphasise that students affect the world by using them (the services consume vast amounts of energy and water) and may be distorting or limiting their own understanding via biases introduced by an unaccountable for-profit company.
Then I review my stated course policy: no bot use whatsoever is allowed. This might change in the future, I say, but we don’t know enough about these services yet. This precautionary approach reinforces the use of that concept in environmental studies, my field. I tell students that I want to make them my academic integrity collaborators, upholding the quality of their own education. I reinforce that most students don’t cheat.
Now the practicalities. My teaching assistants and I use manual detection and automated machine detection (in trial), with full awareness of the chance of false positives. When we suspect chatbot text, I ask the student to provide a step-by-step description of the process they used to produce the submission. Depending on the case, I say that we will be lenient or give them full amnesty if they did use a chatbot and admit it. The likelihood of false positives means you usually cannot depend on machine or manual detection alone to apply sanctions.
During my summer courses, all remote, we manually detected about 20 suspected cases out of more than 2,000 submissions. In all but one or two cases, students readily admitted chatbot use, accepted our offer to redo the assignment and expressed both remorse and appreciation for the second chance. The ensuing discussions have been occasions for learning and for forging better connections with students. It has been extra work, but so far it hasn’t been onerous.
Since I began explicitly discussing integrity and how we detect and handle cases at the beginning of this term, we’ve seen no manually detected cases and only a handful of machine-detected cases that looked like false positives, out of some 200 submissions. Sure, it’s possible that we’re missing cases or that students are using the bots for steps other than composing text. But I believe our process is doing as much as possible to minimise bot cheating while enhancing students’ appreciation of and involvement in their own education.
We can use this approach in combination with defensive measures, such as setting more in-class assignments and requiring in-person presentations. But we don’t have to take extreme measures, such as abolishing essays entirely. Nor do we need to declare that there’s nothing we can do and surrender to the bots. There is a viable middle ground.
Kenneth Worthy is a lecturer, chancellor’s public scholar and creative discovery fellow at the University of California, Berkeley and adjunct associate professor at Saint Mary’s College of California.