Using AI to assess researchers ‘could improve transparency’

Paper says machine learning method to shortlist candidates for a role reveals an employer's priorities   

March 13, 2021

Using artificial intelligence to select who is most suitable for a research post may sound like a dystopian nightmare to some academics. But a new paper that presents a method for this exact purpose argues that it could actually force research managers to be completely transparent about how they select candidates for a role.

The article explains how the AI subfield of machine learning was used to compare the profile of scientists at a top Brazilian research group with CVs on Brazil’s Lattes database of researchers.

By comparing the CVs, the AI was able not only to filter suitable candidates but also to learn and define the kinds of attributes, such as publication histories or membership of committees, that made them suitable.
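The underlying idea can be sketched in a few lines of Python. This is not the paper's actual machine-learning method, and the attribute names and numbers below are invented for illustration: given examples of CVs an employer labelled "best", a model (here, a simple comparison of attribute averages) surfaces which attributes separate them from the rest, making the implicit hiring criteria explicit.

```python
# Toy illustration (not the paper's method): infer which CV attributes an
# employer implicitly values from the examples they labelled "best".
# Attribute names and figures are hypothetical.

FEATURES = ["publications", "committee_memberships", "years_experience"]

# Each CV: (attribute values, 1 if the employer marked it as a "best" CV)
cvs = [
    ([25, 3, 10], 1),
    ([30, 1, 12], 1),
    ([5, 0, 3], 0),
    ([8, 2, 4], 0),
]

def implicit_weights(cvs):
    """Gap between attribute averages of 'best' CVs and the rest.

    A larger gap suggests the attribute mattered more in the examples,
    which is what makes the selection criteria transparent.
    """
    best = [values for values, label in cvs if label == 1]
    rest = [values for values, label in cvs if label == 0]
    weights = {}
    for i, name in enumerate(FEATURES):
        mean_best = sum(row[i] for row in best) / len(best)
        mean_rest = sum(row[i] for row in rest) / len(rest)
        weights[name] = mean_best - mean_rest
    return weights

print(implicit_weights(cvs))
# → {'publications': 21.0, 'committee_memberships': 1.0, 'years_experience': 7.5}
```

In this toy data, publication count dominates the gap, so a hiring group feeding in these examples would be "implicitly" stating that publications are their main criterion, exactly the kind of disclosure the paper argues the method forces.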

The paper, published in the journal Scientometrics, says that because these attributes are then clearly stated, such a method would be more transparent than humans selecting candidates without explaining their own criteria.

It would also meet various principles laid out in the Leiden Manifesto on the responsible use of research metrics, such as being open about how researchers are assessed and in what context.

Rosina Weber, associate professor in information science at Philadelphia-based Drexel University, who co-authored the paper, said that using such a method would reveal how those doing the hiring were selecting people.

This was because by giving the AI examples of what they considered to be the “best” CVs, they were “implicitly” agreeing to “what facets in those CVs they consider more important even though it is not clear to them”. 

She added that it meant managers had to “figure out with clarity what their standards are of hiring because if they don’t do that clearly they will never achieve the goals of the Leiden manifesto”.

Dr Weber, whose co-author, Kedma Duarte of Goiás State University, originally carried out the work for a PhD, said that interviews might still be needed to assess social attributes once the field of candidates had been whittled down by the AI.

But using AI to make an initial filter of thousands of candidates was better than some of the methods companies currently use to shortlist CVs, such as keyword searches, she said.

She also cautioned that using AI to automate the assessment of researchers would still reflect the biases of those using it, although at least that should be transparent.

“What the AI does is obey the goals of the customer. If the customers only want to hire people that excel in publications in high impact journals [for instance], the AI is not going to go against that,” Dr Weber said.

simon.baker@timeshighereducation.com

Reader's comments (1)

Sounds interesting. But any recommendation or classification system can only be as good as the data provided. It would in this case likely have a conservative bias. Conservative not in the political sense but in the sense of putting a premium on mainstream and mediocrity while punishing the next Nobel prize winner for being innovative and going different ways. Whether a selection committee composed of mediocre scholars would fix this is debatable, though.
