Using Twitter to measure research impact ‘needs quality control’

Large-scale study raises questions about the usefulness of altmetrics for assessing research quality 

July 23, 2018

Twitter mentions should be used to evaluate the impact of research only after it has been judged to meet a certain quality standard, the results of a new study suggest.

Researchers from Germany’s Max Planck Society analysed the impact of more than 30,000 papers selected for F1000Prime, a post-publication peer review system where academics in the life sciences recommend and rate papers they feel are of high quality.

The authors – Lutz Bornmann and Robin Haunschild – compared the ratings given to the papers with their performance on traditional citation-based metrics as well as on alternative metrics such as tweets and readership counts on the online academic network Mendeley.

They also looked at the overall alternative metric “attention score” published by Altmetric, which takes into account other online mentions of research such as news sites and blogs in addition to social networks.

Their statistical analysis – published in the open access journal PLOS ONE – indicated “that citations and reads are significantly stronger related to quality (as measured by [F1000Prime ratings]) than tweets (and the Altmetric attention score)” and therefore “might question the use of tweets (and the Altmetric attention score) for research evaluation purposes”.

They acknowledge that Twitter may be useful for considering the impact of research on the wider public rather than just within academic circles.

But they add that “since one cannot expect that non-experts assess the quality of papers properly, research evaluations using Twitter data should only include papers fulfilling certain quality standards”. 

The authors note that such an approach is taken by the UK’s Research Excellence Framework, which stresses that research impacts beyond academia can be considered as long as they link back to studies “of reasonably high international standard”.

“We do not want to argue with these proposals against measurements of public opinion or attention of papers in scientometrics,” the authors state.

“These investigations allow interesting insights in the popularity of research topics and themes. However, we would like to point with our study to the general necessity of considering scientific quality standards in research evaluations (using altmetrics).”

The study adds to a growing body of academic work on the usefulness of alternative metrics such as mentions on Twitter.

In a conference paper published earlier this year in Communications in Computer and Information Science, the top 100 cited papers in Clarivate’s Web of Science database were compared with the 100 papers with the highest Altmetric attention scores in a single year.

The study, by four India-based computer scientists at the South Asian University in New Delhi and Banaras Hindu University in Varanasi, found that just 12 papers were common to both lists, with half of those about the 2016 Zika virus outbreak.

They noted that the journals carrying the top 100 cited papers and those containing the papers with the highest Altmetric scores were also “quite different”, with an “overlap of a mere 10 journals”, including well-known titles such as Nature and The Lancet.

The authors say that for highly cited papers and those with the top Altmetric scores it does not appear “that one can predict the other”, as the “processes involved appear to be disjoint[ed]”. However, like the Max Planck researchers, they suggest that altmetrics can offer “additional information to the citation score”.

simon.baker@timeshighereducation.com                         
