Paper rejected after plagiarism detector stumped by references

Editors say episode demonstrates need for humans to review work of software

June 26, 2019
[Image: Dalek. Source: Alamy]

Fears that robots are coming to steal our jobs may have been underestimated, following an incident suggesting that automation is going a step further: preventing human discoveries from being published at all.

Jean-François Bonnefon, a research director at Toulouse School of Economics, told peers of his surprise in learning that a paper he submitted to an unnamed journal had been “rejected by a robot”.

According to Dr Bonnefon, “the bot detected ‘a high level of textual overlap with previous literature’. In other words, plagiarism.” On closer inspection, however, the behavioural scientist saw that the parts that had been flagged included little more than “affiliations, standard protocol descriptions [and] references” – namely, names and titles of papers that had been cited by others.

“It would have taken two [minutes] for a human to realise the bot was acting up,” he wrote on Twitter. “But there is obviously no human in the loop here. We’re letting bots make autonomous decisions to reject scientific papers.”


Reaction to the post by Dr Bonnefon, who is currently a visiting scientist at the Massachusetts Institute of Technology, suggested that his experience was far from unique. “Your field is catching up,” said Sarah Horst, professor of planetary science at Johns Hopkins University. “This happened to me for the first time in 2013.”

Sally Howells, managing editor of the Journal of Physiology and Experimental Physiology, said that her publications and most others used Turnitin’s iThenticate to detect potential plagiarism.


“However, this is the first time that I have seen a ‘desk rejection’ based solely on the score,” she said.

Ms Howells said that most editors would ask the system to exclude references from a plagiarism scan. “The software is incredibly useful, but must always be checked by a human,” she said. “Thankfully there are still a few of them left.”
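The problem the editors describe is easy to reproduce: any similarity check based on matching verbatim text runs will flag reference entries, because two papers citing the same works share those entries word for word. A minimal sketch below (hypothetical, not how iThenticate actually works) scores overlap as the fraction of shared word 5-grams, with and without the reference section, using invented toy documents:

```python
# Hypothetical illustration of why reference lists inflate similarity scores.
# This is NOT iThenticate's algorithm -- just a naive shared-n-gram measure.

def ngrams(text, n=5):
    """Set of word n-grams in a text (lowercased, whitespace-tokenised)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, prior_docs, n=5):
    """Fraction of the submission's n-grams that also appear in prior documents."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    prior = set()
    for doc in prior_docs:
        prior |= ngrams(doc, n)
    return len(sub & prior) / len(sub)

def strip_references(text, marker="References"):
    """Drop everything from the reference-section heading onward."""
    idx = text.find(marker)
    return text if idx == -1 else text[:idx]

# Toy corpus: the prior paper cites the same work, so the reference entry
# matches verbatim even though the body text is entirely original.
refs = ("References Smith J and Jones K 2017 Moral machines and social "
        "dilemmas Journal of Examples 12 34-56")
prior = ["Completely different body text about vehicle ethics " + refs]
submission = ("Our novel experiment measures how participants judge "
              "autonomous vehicles " + refs)

with_refs = overlap_score(submission, prior)
without_refs = overlap_score(strip_references(submission), prior)
assert without_refs < with_refs  # excluding references removes the false flag
```

Run on the toy documents, the score with references included is driven almost entirely by the shared citation, and drops to zero once the reference section is stripped: exactly the human judgement call the editors say the software skipped.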

Kim Barrett, editor-in-chief of The Journal of Physiology and distinguished professor of medicine at the University of California, San Diego, agreed that anti-plagiarism tools “need to be used appropriately, and they should never be the basis for an automatic rejection”.

Mark Patterson, executive director of the online megajournal eLife, said that his platform did not use software to screen for plagiarism but did conduct “a number of quality control checks…in addition to the scrutiny by the editors”.


“Where computational methods are used at other publishers, staff need to then interpret the findings to avoid situations like the one highlighted,” he said. “In the future, of course, these techniques are likely to get much better.”

rachael.pells@timeshighereducation.com

POSTSCRIPT:

Print headline: Confused robot says no to paper



Reader's comments (2)

...and you think this is exclusive to academe? The big consultancy firms have been selling this idea to large corporations for years, mostly in purchasing; the accountants like it because it removes professional judgement, skill and expensive people. Because they control the data, they can prove how much they appear to have saved. Beware: nobody in the commercial world noticed that robots have no sense of value.
It also gets in the way of integrating assessment across a course. Say you have a final year project module and one about testing running concurrently. So the lecturer writing the testing module invites students to use their project in their testing coursework, empowering them to apply their new-learned testing skills to a real piece of work. Great you might think... until they write their project reports and naturally wish to explain how they tested their work. All Turnitin's bells go off at once! Of course this is an opportunity to teach the students about 'self-plagiarism' and the need to reference things that they themselves have written elsewhere - but it's just as well this was spotted in time to give them the necessary guidance BEFORE they handed in their reports :)
