Blind faith in tech bros driving cheating, say ChatGPT critics

University of Glasgow philosophers behind viral ‘ChatGPT is bullshit’ paper claim student AI use is linked to the dubious techno-optimism of billionaire Silicon Valley moguls

July 2, 2024

Misguided techno-optimism – driven by the enormous media profile of billionaire “tech bros” such as Elon Musk – could explain why more students are asking artificial intelligence to write their essays despite the mediocre results it returns, the authors of a viral research paper have argued.

“ChatGPT is bullshit” has been read more than 400,000 times since it was published in Ethics and Information Technology in early June.

The paper frames large language models using Princeton University philosopher Harry Frankfurt’s influential 2005 book On Bullshit and takes issue with the term “AI hallucinations”, suggesting that the outright falsehoods often generated by AI should be understood as “bullshit” rather than by a more flattering metaphor that humanises AI.

This would correct a growing view that these machines “are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived”, explains the paper by University of Glasgow philosophers Michael Hicks, James Humphries and Joe Slater.

The paper’s runaway success comes amid growing reports during this year’s marking season of widespread AI use by students. Some academics have dubbed the bland AI-written content found in student scripts “botshit”.

Dr Hicks said he hoped the paper’s suggested terminology might dissuade students from using the untrustworthy technology.

“If students are unprepared for class, they might feel that it’s easier to outsource their thinking to a large language model that is powered by an expensive and hugely hyped supercomputer – particularly when it acts like it understands things. In fact, students probably understand things better than they think,” said Dr Hicks, adding that he had recently seen “many more C and D marks” among first- and second-year students, most likely as a result of AI.

Dr Humphries suggested students’ misguided faith in ChatGPT to write essays was linked to a wider belief that technology represents a panacea for most of society’s ills.

“Over the last five or so years we have been told that tech bros will solve our problems and large language models should be understood in this context,” said Dr Humphries, who claimed Elon Musk’s proposed solution to California’s public transport problems – an expensive underground hyperloop transit system – showed that this faith was not always well earned.

“The world has problems but too often we’re told it’s better to give the power to someone who has done a bit of computer programming,” he said.

jack.grove@timeshighereducation.com


Reader's comments (2)

Spot on. Glad to see that I am not alone in seeing bland, regurgitated AI content in students' essays that score low grades. Brace yourself for the appeals!
Interesting comment, Happy. When you say "low grades" do you mean you fail students' essays if you detect AI content, or do you pass them to minimise appeals? Just wondering how you and others handle such discovery.
