The story of artificial intelligence development to date has been complex and often frustrating, with many false starts. A decade ago, I sat in a seminar room in Silicon Valley and listened to talks by two leading AI protagonists: one an elderly East Coast professor, the other a billionaire tech entrepreneur. The discussion that followed was fascinating but disquieting, as it soon became clear that their views on the emerging potential of AI rested on wildly different experiences, assumptions and expectations.
Today, the term “artificial intelligence” has emerged from its fringe existence and is scarcely out of the headlines. It is frequently presented by commentators as the great new hope of humanity – one that promises a leap forward into a golden age. Yet fundamental questions, both moral and practical, remain about what our ultimate targets for AI should be, while research and investment press forward at a rapid pace. In a largely unregulated space, huge amounts of venture capital and other private funds are being ploughed into developments that may fundamentally change human existence – changes over which the general population has little control.
In this timely, vivid book, Stuart Russell sets out his views on where AI should (and should not) be taking us, and provides a new approach to helping us get there. His core analysis hinges on whether we are building an intelligence that will support the ambitions of humanity or whether such a creation will inevitably focus on reinforcing its own goals – and so ultimately out-evolve us and render human endeavour irrelevant. Naturally, science fiction has dealt with this set of challenges numerous times and, while the vision he provides is far from a road map, Russell uses some classic fictional themes as markers. Isaac Asimov’s Laws of Robotics, some of the backstory from Frank Herbert’s “Dune” universe and a number of recent blockbuster films all contribute to the wide-ranging discussion, which also embraces a slew of moral and economic issues.
Offering a new way forward for AI development, Russell opts for a well-argued middle ground. AI has huge potential in areas such as healthcare diagnostics, and it would clearly be unwise not to pursue this, but domains that encompass themes such as autonomous weapons and massive surveillance grids require more fundamental consideration across society. In proposing a set of Principles for Beneficial Machines, the author plots a potential strategy – but is clear about just how complicated the path ahead will be. Building AI systems that show benevolence to humanity, in a way that is morally acceptable and without bias, is likely to remain a significant and profound challenge far into the future.
In his concluding chapter, Russell warns against a possible endgame in which humanity loses its autonomy to a global AI that we can no longer manage. Citing E. M. Forster’s chilling science fiction story The Machine Stops (1909), he reasons that any successful response to this risk is likely to be cultural rather than technical. Such a shift will require engagement, education and deliberation – and in this important text Russell makes a key contribution to the coming debate.
John Gilbey teaches in the department of computer science at Aberystwyth University.
Human Compatible: AI and the Problem of Control
By Stuart Russell
Allen Lane, 352pp, £25.00
ISBN 9780241335208
Published 8 October 2019
POSTSCRIPT:
Print headline: Hard thinking about software