Could metrics help to make better selections for prestigious awards?

Prize committees should re-examine nominees with below-average scientometric scores, says Adrian Furnham

September 13, 2022
[Image: man carried on shoulders above a crowd, illustrating “Who’s a jolly good fellow?”. Source: Alamy (edited)]

Most UK-based academics are, rightly, very impressed by people with the letters FRS or FBA after their name. Being made a fellow of the Royal Society or the British Academy is often considered the greatest accolade an academic can receive: a recognition of really significant contributions to their discipline.

At the same time, however, many of us have sometimes been surprised both by those elected and those not elected. Why has the famous and highly productive academic been overlooked? Why has the rather obscure and less productive person been put forward? Poor judgement? Academic pettiness? Politics? Or are we missing something?

Nor do the doubts arise only in the case of the Royal Society and the British Academy. Many organisations around the world and across the disciplines award medals, prizes and certificates to their members for meritorious work. Yet the mechanisms for choosing recipients are remarkably similar. Usually, there is a call for nominations, from which a shortlist is drawn up based on frequency counts. Then a small committee makes the final decision.

The academic world seems always to stress the importance of peer review. But while honesty can at least theoretically be guaranteed in the case of papers and grant applications by making both authors/applicants and reviewers anonymous, this is impossible with prizes. Moreover, peer reviewers of papers tend to be experts in the specific subfield in question; how can we make informed choices across whole disciplines and beyond?

Some empirical studies have suggested that peer nominations are flawed in interesting ways. Distorting forces include the role of societies, journals and “old boy” networks, while ideology – that is, support for a theory or a method rather than scientific knowledge – appears to determine many ratings. There may be an interesting PhD in probing all this – including whether the arts and social sciences are more prone to ideology in these decisions compared with the sciences.

My experience of awards committee meetings is that they can be highly charged, as people seek to support their nominee for a variety of reasons. I am as guilty as the next person. But all this raises the question of whether there might be a better way to make decisions. Would examining assorted metrics on individuals’ published work allow us to bypass the cabalistic, self-serving nature of academic research networks?

I have been in awards committee meetings where people who are less committed to a personal agenda have suggested that using scientometrics might add “a little objectivity” to the discussion. This leads to a tirade of invective against any measures, often from a member very poorly informed about scientometrics. But the fact is that scientometric data are becoming ever more comprehensive and reliable. A number of serious journals are dedicated to examining them, and many papers have been published about top scholars’ impact in various disciplines.

That said, metrics’ own limitations and biases have not gone away entirely; multi-authored papers from a particular lab are just one of the hard cases. And questions remain about the best and fairest metrics to use to assess impact, fame, contribution and, indeed, longevity, recognising that these are very different criteria. These questions beset selection and promotion committees, too, of course: what data to use to best inform wise decision-making?

This is a genuine intellectual puzzle. One way to approach it would be to examine how scholars who were given “lifetime awards” in the past are now remembered. It is often shocking to look back 20 or 30 years and see who was elected to very august bodies and who was not.

It would also be interesting to look at the scientometrics of people nominated for prestigious prizes, across several disciplines. The interesting cases would be those individuals whose numerical data fell around or even below the average for that discipline or branch of it. How could that be explained? It may be that although they have produced few papers, each was a “work of genius” that “changed the field”. These individuals do exist in all disciplines. But what of the possibility that nomination is primarily due to various types of political lobbying, using a variety of reinforcements?

In my opinion, the system of peer nomination should be supplemented with some data and a little research. When nominations have been made, someone should be appointed to inspect the scientometric data for the disciplines represented by the august body awarding the prize or electing the new fellow. Many in my world, for instance, will have seen Research.com’s list of top psychologists, based on various metrics. More and more of these data sets are available.
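The scientometric numbers such data sets report are typically simple functions of citation counts. As an illustration only (the nominee data below are invented), here is a minimal sketch of one widely used measure, the h-index, which such an appointed inspector might compare against a discipline average:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the scholar
    has at least h papers with h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical citation counts for two nominees (illustrative only).
prolific_nominee = [120, 85, 40, 33, 21, 18, 9, 4, 2]
one_hit_nominee = [600, 3, 1]

print(h_index(prolific_nominee))  # 7
print(h_index(one_hit_nominee))   # 2
```

The one-hit example shows why no single number settles the question: a scholar with one field-changing paper scores poorly on the h-index, which is exactly the hard case the committee would still have to argue about.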

If the proposed fellow has a “decent” scientometric number, all is well. If not, the question becomes: “Why not?” By what criteria are they worthy if they have relatively few citations by their peers?

Equally, the selection body might want to consider those who excel in scientometric terms but who have not been suggested for election. Why have they been overlooked?

This would do no harm. It could even lead to better decision-making. Discuss.

Adrian Furnham is professor in the department of leadership and organisational behaviour at the Norwegian Business School in Oslo.

Postscript

Print headline: Who’s a jolly good fellow?

