By Roger Highfield

AI can’t be trusted with nuclear weapons 

For an event in the IMAX last night, Roger Highfield discussed how far we can trust computers with Professor Paul Brenner, a US veteran who works at the Center for Research Computing at the University of Notre Dame, Indiana.

When a U.S. official called on Russia and China to join the United States in declaring that decisions to deploy nuclear weapons would only be made by humans, not artificial intelligence, it made headlines worldwide. 

Paul Dean, a State Department arms control official, said that the U.S. had made a commitment to keep nuclear weapon control firmly in human hands, adding that France and Britain had made pledges along similar lines. 

Last night, for an IMAX event entitled Can We Trust Computers?, chaired by Timandra Harkness, author of Technology is Not the Problem, I talked to Prof. Brenner about why it is important in what he calls ‘mission critical scenarios’ to ‘put a human in the loop’. 

He said that there was a reluctance to rely on ‘complex automation,’ such as AI, since its workings may be beyond human comprehension and, moreover, it may be able to act too quickly ‘for a human to intercept it’.

Model of the ballistic missile-carrying, nuclear-powered submarine HMS ‘Resolution’ (1966), made by Paul Bowyers, England, 1981, showing its nuclear armament.

Errors could be due to a bug, he said, but equally, like any technology, AI can as easily be abused as deployed for the common good.

When it comes to global security, one example can be found in a recent report commissioned by the UK government, from the Joint Intelligence Organisation and Government Communications Headquarters (GCHQ), authored by the independent Centre for Emerging Technology and Security at The Alan Turing Institute.

The report concluded that AI must be viewed as a valuable tool to support national security decision-makers in Government and intelligence organisations, since AI tools can identify patterns, trends, and anomalies beyond human capability. 

Prof. Brenner said there is an opportunity for AI to help with decision support on complex problems and scenarios, and to brainstorm ‘what-if scenarios’, but, again, he believes that human experience is crucial when it comes to making final or key decisions.

The use of AI for misinformation is the greatest worry for Prof. Brenner, who believes that elections and public opinion are the bedrock of any nation’s ‘defence’.  ‘This is the one that keeps me up at night,’ he said. 

The defence of a free society and free opinions, including those that are not necessarily shared by those in control, can be undermined by ‘bots, agents and AI that uses sophisticated methods to spin, dissemble and amplify the existing concerns of the audience,’ he said.

Waterline model of a nuclear-powered submarine in a sea diorama, with two figures on the bridge. From Europe, 1940–1976.

He said there is a strong possibility that the ‘echo chamber’ effect, which boosts polarisation and extremism on social media, will get worse, as sophisticated, believable chatbots can be deployed at huge scale to influence online debates.

The event was supported by the UKRI Engineering and Physical Sciences Research Council and organised by Prof. Peter Coveney of University College London. His research has shown how the digital nature of computers, which can only represent a finite set of numbers, can lead to errors; he has written about these issues more generally with me, and he leads a consortium, called SEAVEA, that aims to increase trust in computers.
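A minimal sketch of that general point (illustrative only, and not drawn from Prof. Coveney's own studies): because computers store numbers with finite precision, every step of a calculation introduces tiny rounding errors, and in a chaotic calculation those errors can snowball. The short Python example below iterates the same simple formula in single and double precision; within a few dozen steps the two answers no longer agree.

```python
# Illustrative sketch: iterate the chaotic logistic map x -> 4x(1 - x)
# in single and double precision, and watch tiny rounding differences
# grow until the two trajectories disagree entirely.
import numpy as np

x32 = np.float32(0.1)  # single precision: ~7 significant decimal digits
x64 = np.float64(0.1)  # double precision: ~16 significant decimal digits

for step in range(1, 61):
    x32 = np.float32(4.0) * x32 * (np.float32(1.0) - x32)
    x64 = 4.0 * x64 * (1.0 - x64)
    if step % 10 == 0:
        print(f"step {step:2d}: float32={x32:.6f}  float64={x64:.6f}")

# After roughly 30 steps the two trajectories bear no resemblance to
# each other, even though both began from the "same" starting value, 0.1.
```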

With Prof. Coveney were Dr Alessandra Vizzaccaro of the University of Exeter, who discussed AI control of fusion reactors, and Prof. Tim Palmer of the University of Oxford, who considered how we can increase trust in computers when they are used for weather and climate forecasting, where the use of AI is rising.

Watch the full conversation between Roger Highfield and Professor Paul Brenner below.