Universities’ ChatGPT misconduct focus ‘panicked students’

Institutions ‘got it wrong’ by putting all the attention on assessment when large language models first launched, say experts

April 16, 2024

Universities’ focus on assessment misconduct in the wake of the emergence of large language models “panicked” students, and institutions would have done better to be “honest” that they were still working out the ramifications of the new technology, according to experts.

Mark Simpson, deputy vice-chancellor of Teesside University, told Times Higher Education’s Digital Universities UK event that the sector “got it wrong” when ChatGPT was launched 18 months ago.

“Our initial reaction to AI was to look at ways of trying to detect it, and we quickly moved to write it into academic misconduct regulations. So our conversation with students was, ‘Use AI and we will punish you.’ That put the whole student body into panic,” he told the event, held at the University of Exeter.

This focus on assessment has distracted from much-needed conversations about how artificial intelligence can be used in the learning process, Professor Simpson said, an application students would see as “much less of a threat”.

“It is important to have a framework in which everybody is operating, and it has to be nuanced to a local level,” he added. “But we all like certainty as well, and I think students being assessed want that certainty. So we have got to be certain where we can be, but keep that dialogue open with our students and accept that this is something we are learning together.

“This technology is going to keep changing and evolving, and the way we respond to that will change as well.”

Miriam Firth, the academic lead for assessment at the University of Manchester, agreed that how students use AI in assessment continues to preoccupy many colleagues.

This is mirrored in what students raise as concerns, said Dr Firth, who conducted a research study for sector technology body Jisc last year that looked at student perceptions of AI.

“We found students are concerned about the guidance they have been given. They want far more explicit information on, ‘Can I use this, should I be using it, and if so how?’” she said.

Students were also concerned about whether their lecturers were going to use AI to create classes, fearing that course content was being produced by computers rather than human experts, Dr Firth added.

She said honesty with students was key to addressing their worries. “If it is the first time applying a code of conduct on using AI for assessment, say that,” she said.

“It is about having an open, co-created, co-guided approach while admitting we don’t have the answers right now and things are changing so quickly that if we try to put more boundaries in the way we are only going to trip ourselves up and create more problems than solutions.”

Matthew Yee-King, an academic in the department of computing at Goldsmiths, University of London, who runs the University of London’s online computer science degree, said the next developments in AI would focus on creating models that could formulate a plan and then enact it, representing another step change in how powerful the technology can be.

But Andy Beggan, the dean of digital education at the University of Lincoln, cautioned that the sector was still in the midst of a “hype cycle”, with AI “solutions” being created that do not really solve a recognised problem.

Only once universities get through this moment, said Mr Beggan, can the real value of AI, and how it can be leveraged, begin to be understood without “overestimating” its impact.

Teesside’s Professor Simpson said there should also be more reflection on what AI means for student mental health, as institutions refine their responses to the new technology.

“We have got issues of digital inclusion and access to AI, and additional pressures in the danger that AI keeps increasing the standard we expect to see within students’ work,” he said.

“That will differ according to skill sets, and we’ve also got all of the issues of digital forums and collusion – all of that space needs to be considered in this one.”

tom.williams@timeshighereducation.com
