As a young academic, I am reliably informed that the landscape of scholarly communication is not what it was 20 years ago. But, despite all that has changed, it seems that we still largely rely upon the same tired and narrow measures of quality and academic impact - namely, citation counts and journal impact factors.
As someone who has used the internet in almost every aspect of my academic work to date, it’s hard for me to ignore the fact that these mechanisms, in predating the web, largely ignore its effects.
By holding up these measures as incentives, we appear to have our eye firmly fixed on the hammer rather than the nail, adjusting our research habits to maximise scores and ignoring questions such as why we publish in the first place.
Some serious academic discussion takes place online. (If you are in any doubt, I recommend a visit to the blog of Timothy Gowers, Rouse Ball professor of mathematics at the University of Cambridge and a Fields Medal recipient.) The utility of the web as a platform for scholarly communication should come as no surprise: indeed, when Sir Tim Berners-Lee sat in his office at Cern and created it, he was motivated by a desire to do just that.
Increasingly, many of our scholarly activities are transferring to the web. We write academic blogs; the backchannel of conferences is played out through Twitter; and reference managers such as Mendeley manage a library of close to 100 million papers for more than 1 million academics. All of this activity indicates impact. What is more, because it is on the web, we can observe and measure it.
In assessing impact, we can and should take advantage of these emerging traces of scholarship - but to do so requires us to broaden both what we measure and how we measure it.
The “alt-metrics” community has recently emerged in an effort to achieve this. Complete with a manifesto - at altmetrics.org - this community is striving to understand and measure the products and practices of scholarly communication on the web. It is called “alt” because the practitioners are looking to move beyond the citation-based measures of impact that have dominated the quantitative study of scholarship to date.
Jason Priem, a PhD student at the University of North Carolina at Chapel Hill and the initiator of the alt-metrics movement, talks passionately about a “second scientific revolution” facilitated by the web, and sees alt-metrics as critical to realising it. Key to his vision is the recognition that scholarship is a multidimensional activity, not confined solely to the publication of academic papers. As such, the assessment of scholarship should reflect this and integrate as many metrics as possible.
By harnessing the vast and diverse amount of scholarly activity on the web, alt-metrics promises to provide timely indications of impact and capture practices that have hitherto gone unrewarded.
Readermeter.org perfectly illustrates the immediate potential of alt-metrics by providing real-time readership counts for literature harvested from online reference-management services. While citation counts can take years to accumulate, this website offers authors an instant impression of their impact.
Perhaps the clearest demonstration of the bold ambition of alt-metrics lies in the work of Samuel Arbesman. At the recent inaugural alt-metrics workshop, Arbesman, a research fellow affiliated with Harvard University’s Institute for Quantitative Social Science, discussed his efforts to define, quantify and predict scientific discoveries. He claims to have roughly predicted the likely timing of the discovery of the first Earth-like planet, presenting what appears to be - at least in the case of funding decisions - the holy grail of metrics. His ambition serves to highlight just how narrow the approaches of old have been.
Of course, significant challenges will need to be addressed by the alt-metrics community - not least the issue of “gaming” (when academics seek to “play” the system). However, Priem believes that this problem could in part be solved by using an array of diverse metrics.
It will be more difficult still to displace existing measures. The journal impact factor, for all the criticism it attracts, has persisted in part because of its simplicity.
Sceptics have warned that any move away from traditional peer-reviewed journal output will lead to an academic “Wild West”. With alt-metrics, however, the hope is that we as academics can regain control of the mechanisms by which we are assessed - and become the lawmakers of this new frontier.