First AI ethics policy unveiled by Cambridge University Press

New guidelines on the use of ChatGPT follow plagiarism concerns and authorship controversies caused by the rise of generative AI

March 14, 2023

A leading university press has unveiled its first artificial intelligence (AI) ethics policy, which will require authors to declare any use of ChatGPT and other generative AI tools.

Under the guidelines published by Cambridge University Press (CUP) on 14 March, researchers will also be banned from treating AI as an “author” of academic papers and books, following recent controversies in which ChatGPT was given an author byline in several journals.

The rules from CUP, which publishes about 400 journals and 1,500 monographs a year, also seek to clarify grey areas in which text generated by an AI bot has led to plagiarism, sometimes unwittingly. Authors will be “accountable for the accuracy, integrity and originality of their research papers, including for any use of AI”, the new guidelines explain.

“Scholars have been told the work must be the author’s own, and they must not present others’ ideas, data, words or other material without adequate citation and transparent referencing,” they add.


Mandy Hill, managing director for academic at CUP, said the AI ethics policy was designed to give confidence to researchers who wished to use ChatGPT and other AI tools.

“We believe academic authors, peer reviewers and editors should be free to use emerging technologies as they see fit within appropriate guidelines, just as they do with other research tools,” said Ms Hill.


“Like our academic community, we are approaching this new technology with a spirit of critical engagement. In prioritising transparency, accountability, accuracy and originality, we see as much continuity as change in the use of generative AI for research,” she added. The new policy, she said, aims to “help the thousands of researchers we publish each year, and their many readers. We will continue to work with them as we navigate the potential biases, flaws and compelling opportunities of AI for research.”

The guidelines were welcomed by R. Michael Alvarez, professor of political and computational social science at the California Institute of Technology, who uses large language models to detect online harassment, trolling and abusive behaviour on social media platforms and in video games such as Call of Duty. However, he said further dialogue on their use was needed.

The rise of generative AI “introduces many issues for academic researchers and educators – I anticipate the opportunities and pitfalls presented by generative AI for academic publishing for many years to come”, said Professor Alvarez, co-editor of the CUP title Quantitative and Computational Methods for Social Science.

jack.grove@timeshighereducation.com
