Researchers have established guidelines for the “ethical” use of artificial intelligence in academic writing to guard against problematic research practices.
The framework, published in Nature Machine Intelligence by academics from the universities of Cambridge, Copenhagen and Oxford, as well as the National University of Singapore and other institutions, warns that the increasing prevalence of large language models (LLMs) – such as ChatGPT – in academia creates risks relating to plagiarism, authorship attribution, and the “integrity of academia as a whole”.
The article notes that, while the use of LLMs is controversial, leading journals including NEJM AI “now actively encourage that they be used – responsibly – to enhance the quality of submissions”.
The framework recommends that any manuscripts that use LLMs provide clear acknowledgement and transparency over their usage, employ human vetting to guarantee the accuracy and integrity of any work produced by AI, and ensure there has been a “substantial human contribution” to the work.
Human vetting would ensure that the author has to “take responsibility” for any claims made in an LLM-assisted paper.
Timo Minssen, professor of law at the University of Copenhagen and a co-author of the report, said guidance was “essential” in shaping the ethical use of AI in academic research, especially when it is used to co-create articles.
“Appropriate acknowledgment based on the principles of research ethics should ensure transparency, ethical integrity, and proper attribution. Ideally, this will promote a collaborative and more inclusive environment where human ingenuity and machine intelligence can enhance scholarly discourse,” Professor Minssen said.
The calls for guidance come amid growing debate over the use of AI within academia and whether universities should curtail or embrace the technology, with recent research suggesting that academics are using AI tools more than students.
The paper outlines a template – which the authors call the “LLM Use Acknowledgement” – that researchers can use when submitting manuscripts to disclose their use of such technology in their work. The academics claim that this will “streamline adherence to ethical standards in AI-assisted academic writing, and provide greater transparency about LLM use”.
LLMs should “neither lower nor raise the standard of responsibility that already exists in traditional research practices”, the paper argues, adding that guidance is necessary to “ensure that LLMs are used in ways that reinforce, rather than erode, trust in research”.
Such an approach is appropriate, the paper says, as “it aims to ensure that the integration of new technologies does not come at the cost of established academic values and practices”.
Julian Savulescu, director of the Oxford Uehiro Centre for Practical Ethics and a fellow co-author of the paper, described LLMs as “the Pandora’s Box” for academic research.
“They could eliminate academic independence, creativity, originality and thought itself. But they could also facilitate unimaginable co-creation and productivity. These guidelines are the first steps to using LLMs responsibly and ethically in academic writing and research.”