Ethics committees ‘struggling to deal with’ AI research

Governance of data science research makes it difficult to highlight potentially harmful impacts that might arise, says ethics review report

December 13, 2022
TeSLA will use facial recognition software
Source: iStock

University ethics panels are unfit to deal with the complex challenges arising from emerging fields in research on artificial intelligence, a study has claimed.

With AI research now accounting for more than 4 per cent of all research published globally, the scope and nature of research involving data science has expanded massively in recent years. Yet ethical considerations still largely centre on protecting the privacy and anonymity of research subjects, or obtaining consent from in-person participants – reflecting traditional “biomedical, human-subject research practices which operate under a researcher-subject relationship”, explains a new report by the Ada Lovelace Institute, the Alan Turing Institute and the University of Exeter.

Novel types of research undertaken by data scientists, however, should be considered in terms of a more nuanced “researcher-data subject relationship”, argues the report, Looking Before we Leap: Expanding ethics review processes for AI and data science research, which highlights how the harmful uses of research may only become clear once funded studies are well under way.

It cites several high-profile examples of controversial and unethical AI research being approved by research ethics committees, such as facial recognition algorithms that claim to identify homosexuality or criminality, and chatbots that can spread disinformation.

To prevent this type of AI research going ahead, the report recommends that institutions and researchers “engage more reflexively with the broader societal impacts of their research, such as the potential environmental impacts of their research, or how their research could be used to exacerbate racial or societal inequalities”.

Ethics committees should incentivise researchers to engage in these “reflexive exercises” throughout their research and scholars might be asked to submit a statement of potential societal impact – both good and bad – prior to submitting an article for peer review or a conference.

Universities should also seek to include more academics from across different disciplines in ethics reviews, while training of ethics review boards should be supported, with staff financially rewarded for their time on these bodies.

The report, which was funded by the Arts and Humanities Research Council, highlighted the need for the “proper consideration” of the potential risks of AI research, said Andrew Strait, associate director (research partnerships) at the Ada Lovelace Institute.

“Traditional oversight mechanisms, such as research ethics committees, are struggling to deal with the scope and nature of these AI and data science risks,” he said.

“Our research, however, concludes that with the right resources, expertise, incentives and frameworks in place, they can play an important role in supporting responsible AI and data science research.”

Niccolò Tempini, senior lecturer in data studies at the University of Exeter and a Turing fellow at the Alan Turing Institute, said that the report would help provide “direct, practice-oriented guidance on how to develop research ethics governance processes that are up to the challenge”.

“Our report aims to help satisfy some of the questions raised and guide local decision-makers in finding an ethically and organisationally sustainable research ethics strategy,” he said.

jack.grove@timeshighereducation.com


Reader's comments (1)

Teaching ethics to computer science students, I always stress the need to watch out for the 'unintended consequences' in what we do. When we touch on research ethics, as well as the rather biomedical approach, the need for anonymity and the concern not to waste research subjects' time that ethics panels normally consider, we also consider "Will this research bring the university into disrepute?" and "Should we be doing this at all?" All final-year projects are given an ethical once-over, and those students wishing to involve other people - normally for knowledge elicitation or user testing - are required to seek formal ethical approval for the studies they propose to do. They moan a bit, but it's a useful exercise in inculcating the need to consider the ethics of anything that you do, particularly considering how ubiquitous computers are these days!