Stanford aims to keep its brainchild AI on straight and narrow

New institute intends to marshal all academic fields to nudge artificial intelligence and its uses to a more positive place

May 29, 2019
[Image: robot head. Source: Alamy]

Since computing legend John McCarthy arrived in the early 1960s, Stanford University has been a global leader in developing artificial intelligence – the use of massive computer processing power to replicate or surpass human brain capacity.

As that goal now comes within sight, Professor McCarthy’s successors are allocating real resources to allay long-held public fears that it could all go badly wrong.

Their chief mechanism is the new Institute for Human-Centered Artificial Intelligence. With faculty drawn from every department within every school at Stanford, HAI hopes to provide holistic grounding to the world-renowned Stanford Artificial Intelligence Laboratory, a prodigious assembly of computer scientists founded by Professor McCarthy in 1962.

Ambitions for HAI are broad, said one of its co-directors, John Etchemendy. It plans to help every academic field build AI into its work. It plans to envisage and predict AI’s impact across human society, economics and politics. It plans to suggest policy positions. It plans training sessions for lawmakers, journalists, lawyers and many other professionals. And it plans to encourage a global network of similar efforts.

Professor Etchemendy – a philosopher, mathematical logician and long-time Stanford provost – told Times Higher Education that he was not one to fuel public fears of killer robots driven by cold logic to turn on their human creators.

But nor does he downplay the implications of artificially multiplying a mysterious force – human thought and action – that is already fully capable of immense good and evil.

“We’re not going to be able to control every use of AI,” Professor Etchemendy said. “But I do think that there are appropriate ways to nudge in the right direction, and hopefully move the trend to a more positive place and move the thinking to a more positive place. And I think that’s all you can hope for.”

From his base in Silicon Valley, Professor Etchemendy does not have to go far for lessons on technology’s dark side. Popular villains include Facebook and Twitter, which began as places for sharing friendly banter and now stand accused of playing central roles in the overthrow of democratic norms.

It was over his backyard fence that Professor Etchemendy got the idea for HAI, which grew out of a suggestion from his neighbour, Fei-Fei Li, then chief AI scientist at Google Cloud. Professor Li, who has since returned to the Stanford campus to join Professor Etchemendy in co-leading HAI, would get her own taste of computer-generated controversy when she was revealed to have counselled her corporate colleagues to avoid the term “weaponised AI” when discussing Google’s work helping the Pentagon with drone-guided bombing.

While popular culture has long expressed fears that superhuman brains might eventually come to view ordinary humans as inferior and unworthy – such as HAL in the 1968 movie 2001: A Space Odyssey – Professor Etchemendy said actual AI technology had been too primitive to warrant such concern until just the past few years.

While noting that AI scholars did not spend their time rebutting science fiction, Professor Etchemendy said that the fears in such dystopian visions of “artificial general intelligence” did not seem realistic. Far more deserving of attention, he said, were real-world problems: parole boards making criminal justice decisions informed by algorithms whose biases are built so deeply into the software that they cannot be seen, and computer systems that affect human survival shifting behaviour suddenly and unpredictably because of unforeseen quirks in the data on which they were trained.

AI research within the corporate world has actually been quite limited and incremental, Professor Etchemendy said. A university such as Stanford has the potential to be far more transformative in applications once it begins, as HAI intends, to integrate AI with its vast subject-specific expertise, he argued.

Professor Etchemendy said he was hopeful that companies would not then exploit such advances with more focus on profits than on human well-being. “We’ve had a fair bit of interaction with Microsoft, for example,” he said, “and I’ve been very impressed by the sincerity of the people that I’ve talked to, about wanting to figure out what is the right use of this technology, when should they say ‘no’ to a project.”

HAI also plans to confront the even greater potential for unintended consequences, much of it centring on computers amplifying existing human biases. Yet the institute has already faced criticism for a seeming lack of diversity, based on photos of its top leadership that suggested an overwhelmingly white and male composition.

Professor Etchemendy, perhaps ironically, blamed a computer for causing that misperception. While it is true that the field of computing lacks women and minorities, he said, a website glitch had led to a mix-up in photos that made the imbalance at the institute look even worse.

Professor Li brings to HAI a track record as founder of efforts to integrate and diversify computing, including AI4ALL, a teaching and mentoring initiative aimed at attracting and keeping women and minorities working in AI.

At the same time, initiatives such as HAI must also compete with industry for talent. In this, Stanford fares well, Professor Etchemendy said, attracting experts who embrace the freedom to explore problems and the challenge and responsibility of teaching the next generation despite the far higher salaries and greater raw resources to be found at many companies.

“At Stanford, I think we’ve reached a good steady state” with companies, which realise the benefits of cooperation, he said. “It would be an absolute disaster if industries took all of the AI talent out of universities – it would be a disaster, even for them.”

paul.basken@timeshighereducation.com
