Academics despair as ChatGPT-written essays swamp marking season

‘It’s not a machine for cheating; it’s a machine for producing crap,’ says one professor infuriated by rise of bland scripts

June 17, 2024
James Hinchcliffe holds a poop emoji while filming on a screen
Source: Richard Rodriguez/Getty Images for Texas Motor Speedway

The increased prevalence of students using ChatGPT to write essays should prompt a rethink about whether current policies encouraging “ethical” use of artificial intelligence are working, scholars have argued.

With marking season in full flow, lecturers have taken to social media in large numbers to complain about AI-generated content found in submitted work.

Telltale signs of ChatGPT use, according to academics, include little-used words such as “delve” and “multifaceted”, summarising key themes using bullet points and a jarring conversational style using terms such as “let’s explore this theme”.

In a more obvious giveaway, one professor said an advert for an AI essay company was buried in a paper’s introduction; another academic noted how a student had forgotten to remove a chatbot statement that the content was AI-generated.

“I had no idea how many would resort to it,” admitted one UK law professor.

Des Fitzgerald, professor of medical humanities and social sciences at University College Cork, told Times Higher Education that student use of AI had “gone totally mainstream” this year.

“Across a batch of essays, you do start to notice the tics of ChatGPT essays, which is partly about repetition of certain words or phrases, but is also just a kind of aura of machinic blandness that’s hard to describe to someone who hasn’t encountered it – an essay with no edges, that does nothing technically wrong or bad, but not much right or good, either,” said Professor Fitzgerald.

Since ChatGPT’s emergence in late 2022, some universities have adopted policies to allow the use of AI as long as it is acknowledged, while others have begun using AI content detectors, although opinion is divided on their effectiveness.

According to the latest Student Academic Experience Survey, for which Advance HE and the Higher Education Policy Institute polled around 10,000 UK undergraduates, 61 per cent use AI at least a little each month, “in a way allowed by their institution”, while 31 per cent do so every week.


Professor Fitzgerald said that although some colleagues “think we just need to live with this, even that we have a duty to teach students to use it well”, he was “totally against” the use of AI tools for essays.

“ChatGPT is completely antithetical to everything I think I’m doing as a teacher – working with students to engage with texts, thinking through ideas, learning to clarify and express complex thoughts, taking some risks with those thoughts, locating some kind of distinctive inner voice. ChatGPT is total poison for all of this, and we need to simply ban it,” he said.

Steve Fuller, professor of sociology at the University of Warwick, agreed that AI use had “become more noticeable” this year despite his students signing contracts saying they would not use it to write essays.

He said he was not opposed to students using it “as long as what they produce sounds smart and on point, and the marker can’t recognise it as simply having been lifted from another source wholesale”.

Those who leaned heavily on the technology should expect a relatively low mark, even though they might pass, said Professor Fuller.

“Students routinely commit errors of fact, reasoning and grammar [without ChatGPT], yet if their text touches enough bases with the assignment they’re likely to get somewhere in the low- to mid-60s. ChatGPT does a credible job at simulating such mediocrity, and that’s good enough for many of its student users,” he said.

Having to mark such mediocre essays partly generated by AI is, however, a growing complaint among academics. Posting on X, Lancaster University economist Renaud Foucart said marking AI-generated essays “takes much more time to assess [because] I need to concentrate much more to cut through the amount of seemingly logical statements that are actually full of emptiness”.

“My biggest issue [with AI] is less the moral issue about cheating but more what ChatGPT offers students,” Professor Fitzgerald added. “All it is capable of is [writing] bad essays made up of non-ideas and empty sentences. It’s not a machine for cheating; it’s a machine for producing crap.”

jack.grove@timeshighereducation.com

POSTSCRIPT:

Print headline: Academics despair over ‘crap’ AI essays


Reader's comments (19)

" 'It’s not a machine for cheating; it’s a machine for producing crap,’ says one professor infuriated by rise of bland scripts." Too much focus on students? Why am I reminded of management-related academic journal articles and conference papers?
Good or bad, AI is the future. Students have embraced it, and academics need to accept that they are going to need to change their teaching and assessment methods to keep pace.
The genie is very much out of the bottle, so we need to deal with it somehow. But currently, it is pretty damaging. I totally agree that the way it is currently appearing (and certainly some aspects are very recognisable) leads to poor writing, poor standards and poor scholarship. The point should be to learn, to be enthused and to explore in depth, not to churn out crap for marks. And yes, perhaps we need to change the way we teach and assess, but the pace is too fast for it to come just from individual dedicated academics.
All of this sounds very familiar. Essay mills were already a big problem and AI just makes this problem worse. I guess one blunt approach would be to assess entire classes exclusively by examination. One colleague said, “that will never happen as overseas students won’t choose to come to Univ of XXX if they’re actually meaningfully assessed”. Honestly, I have nothing but contempt for colleagues who say “it’s the future; we must embrace it”, even when embracing AI sounds the death knell for all we do as a profession. The consequence of student use of AI is to deny students the chance to develop their ideas in a more thoughtful, considered way through an essay. It’s a real pity that students are outsourcing their thinking to technology, and soul-destroying for those of us who have to mark the resulting bilge. I fear for the future if we are producing a generation of people unable to synthesise data and ideas into a coherent argument.
In France, substantive assessments at my secondary school are all written during class time, usually with no notes. In-class discussion, classwork and quizzes further confirm how well a student is learning the skills and content. Even pre-AI, anything done at home could easily be written by someone else, or a savvy parent or tutor could feed the student the ideas and polish the final product. So there was never any reason to think something done out of class would automatically be the student’s own work; ChatGPT just makes it more obvious. If we want university students to learn, we have to invest in actually teaching them.
At my institution you cannot just mark down for AI use; you have to report it. But here’s the thing: the university’s policy is basically the usual feel-good waffle, and the investigating officer is seriously workshy. So you just mark down and, in exam boards, say the work was of a lower standard than usual. It’s getting to the point where you may as well either use AI to mark or just award grades at entry (you couldn’t do it for attendance, because even that is too hard).
The only solution is in-class exams, no books, and in-class presentations. If students don’t want that, they do not have to do a degree.
Our degrees are about 60-70 per cent invigilated exam and about 30-40 per cent coursework. Currently, ChatGPT is capable of producing an essay that will score in the mid-50s on my programmes. Given that the mid-50s is not enough for further study, nor for most graduate jobs, and any student who has relied on ChatGPT will get found out in the exam, I don’t see a grading problem. The real problem is convincing students of this: that the only person they are cheating by using ChatGPT is themselves.
Try to focus the essay subject locally, on the university city. This works for social science-type essays, geography and economics anyway. Let’s see how well the chatbot knows local history.
I remember when calculators became mainstream and everyone was up in arms about that. People use online resources now for source material and then embed it in essays or journals; AI is just a shortcut. It’s a research tool with the added benefit of compiling and punctuating content. So unless the course is English, it’s not really that different from other forms of digital information harvesting. Unless you’re only actually testing how good a memory someone has? The academic world just needs to adapt and analyse submissions more thoroughly for comprehension, a good argument or making of the case, and an indication that the person submitting has some understanding of the task in hand. More frequent contact with students will help set personal benchmarks and allow a better assessment of capability.
Is there not a more significant issue here that is being masked by the current focus on AI, namely standards, student engagement and the capacity to write coherent sentences and essays? Almost a quarter of degrees award first-class honours, when not so long ago such accolades were as rare as hen’s teeth! Continuing in this vein undermines what the academy purportedly stands for: the pursuit of excellence, not merely the promotion of critical thinking!
The article provides a well-rounded exploration of the current challenges and debates surrounding the use of ChatGPT in academic essay writing. It effectively highlights the concerns of educators, the specific indicators of AI-generated content, and the varied responses from universities and professors. Here are some positive and negative aspects of the article: ### Positive Aspects: 1. **Comprehensive Coverage**: The article covers different perspectives from multiple academics, providing a balanced view of the issue. 2. **Specific Examples**: It includes concrete examples of how AI usage is detected, such as unusual word choices and structural elements, which help illustrate the problem. 3. **Data Inclusion**: The reference to the Student Academic Experience Survey adds empirical data, supporting the claims about AI use among students. 4. **Varied Opinions**: It presents diverse viewpoints, from those advocating for a total ban on AI tools to those suggesting conditional acceptance, which enriches the discussion. ### Negative Aspects: 1. **Limited Solutions**: While it identifies the problem clearly, the article could delve deeper into potential solutions or best practices for managing AI in education. 2. **Tone**: The tone might come across as somewhat alarmist, particularly with strong language like "total poison" and "machine for producing crap." This could overshadow a more nuanced discussion. 3. **Balance of Opinions**: Although varied, the article leans heavily towards negative perceptions of AI in education, potentially underrepresenting positive or neutral viewpoints. ### Overall Assessment: The article is generally good in its thorough exploration of the issue, providing valuable insights and sparking a necessary conversation about AI's role in education. However, a more balanced tone and deeper exploration of solutions would enhance its quality.
Was this written using AI? Just curious...
Love this response; it completely encapsulates the problem I was writing about. Bland both-sidesism is the default mode for AI bots, where language that conveys an argument with colour and panache is seen as problematic. Also, a complete lack of empathy for dedicated educators who hate to see students coasting to mediocrity when they could push themselves. No understanding of journalism, either, or of the word constraints that writers have.
Unless you want to be an academic, your world of work will be 90 per cent Excel spreadsheets and a little knowledge from the company portal or a Google search. The requirement to write in Greek or Latin has long since gone, and the beads have been replaced by computers. For the bland world of corporate work, AI is the perfect tool for generating functionally rich, business-focused strategic policy centred on the profitable exploitation of deterministic, profit-maximising customer needs.
Agreed: use of generative AI tools is diluting the cognitive processing capabilities of our students. Universities should fight essay mills and this too. Ideally, the need to write essay-type answers should be minimised. A typical business management undergraduate will write at least twenty 1,500- to 2,500-word reports in their degree programme. How many jobs out there require employees to write 1,500-word reports? I think we should reduce the word length to 500 to 600 words, and students should be invited to a Q&A session at the end of the semester, where they would explain their understanding of at least one piece of the work they submitted to a panel of academics. If the panel is convinced, their marks should be approved; if the panel has doubts, it can undertake further investigation. Adopting this approach will encourage students to be more aware of their submissions, and having shorter reports will make it easier for academics to mark. Being limited in word count, students are also less likely to use ChatGPT.
I work with international students whose English is intermediate and they simply don't have the requisite language skills to read or write critically. It is no coincidence that AI use is increasingly prevalent. Asking an intermediate English user to write a 3000 word essay that synthesises and paraphrases is simply a bridge too far for the majority of our international students.
Oral exams are an option. I have worked in institutions that use them. AI will be of little use, especially where students are asked to respond to problem-based scenarios that require critical thinking.
The genie is indeed out of the bottle with quite a few significant repercussions for students and higher education.
