Politics & Society
AI: It’s time for the law to respond
Efforts to control the weaponisation of new technologies like AI are fragmented, but the UN hosts an existing disarmament framework that could bring these efforts together
Published 22 March 2020
The word ‘disarmament’ may, for many people, conjure images of 1970s demonstrations, when crowds of long-haired hippies hit the streets brandishing signs against nuclear weapons.
Indeed, talk of disarmament can now seem just as out of date as 1970s street fashion.
World peace looks as far away as ever. There is no shortage of demand for weapons given the fighting in Yemen, Syria and Afghanistan – just to name a few conflict zones.
Claims by both the United States and Russia that their new hypersonic nuclear missiles are difficult to intercept continue to stoke the ongoing nuclear arms race.
President Trump, meanwhile, has reversed the Obama administration’s ban on the US military using landmines.
In January of this year, the Doomsday Clock, established by the Bulletin of the Atomic Scientists to assess the risk that weapons pose to the world, was moved to 100 seconds to midnight – the closest it has been since the clock was set in 1947.
It’s hard to imagine, then, that any meaningful conversations about the regulation of weapons could now take place.
But given the rising stakes and the ongoing advance of technologies in robotics, cyberspace, automation and artificial intelligence, international discussions on regulating the weaponisation of new technology are essential.
The problem is that work in this area is often siloed, taking place across different levels of both industry and government, and lacking adequate weight.
Far from being out of fashion, I’d argue that the existing international disarmament framework of treaties and United Nations forums is being under-utilised when it could instead be the focal point for unifying efforts to control the threats posed by new technology.
A key difficulty in controlling the weaponisation of technology is that in almost every case the weapon is already developed and used well before ethicists, human rights experts and lawyers have had a chance to contemplate its implications.
A prominent exception to this was the 1995 prohibition on permanently blinding lasers.
This prohibition came in by multilateral treaty before actual use on the battlefield. In contrast, most other banned weapons – biological, chemical, nuclear and even landmines – were only prohibited years after they were first used.
The present risk is that the same mistakes will be made with cyberspace, automation and AI.
We need to proactively contemplate the possible misuse, dual use and unintended consequences of new technologies. The same cyber tools that can improve the efficiency of critical infrastructure can also be used maliciously to shut down that same infrastructure.
The full extent of the dangers posed by these technologies isn’t always immediately apparent to non-experts, or even to the experts themselves. This situation can be caused by what sociologists call ‘positive asymmetry’.
In the history of science, humans have tended to assume the most positive outcomes – biological applications will heal diseases, chemical interventions will fight cancer, AI will solve the global environmental crisis.
And in many cases, this is indeed true.
Equally, biology can lead to highly transmissible fatal diseases, chemicals can be used to poison individuals, and major companies using AI are now devouring energy in quantities that directly contribute to environmental harm.
In fact, cultures often shun or punish those who raise potential downsides or risks. As a consequence, there is a tendency to focus only on the positives.
This is why it is so important that the dangers of new technologies are put on the international agenda as early as possible.
Some of the current industry-led efforts in this area include the work of the Institute of Electrical and Electronics Engineers (IEEE), which is moving forward on standards and initiatives on ethics in autonomous systems.
The Global Commission on the Stability of Cyberspace has also produced guidelines for the peaceful use of cyberspace.
Microsoft is leading a Tech Accord and has founded the CyberPeace Institute, while the Siemens-led Charter of Trust calls for certain values to be upheld. Each of these is a good example of private-sector leadership, but none is universally endorsed.
The Organisation for Economic Co-operation and Development has its own Principles on AI, while other efforts involving a range of actors include the Partnership on AI, the European AI Alliance, and the International Telecommunication Union’s ‘AI Commons’. The International Organization for Standardization is also contemplating multiple new standards on AI.
Finally, French President Emmanuel Macron’s Paris Call for Trust and Security in Cyberspace, announced in November 2018, is gathering momentum.
In summary, hundreds of ethics strategies, both government and corporate, are proliferating, each attempting to contemplate the impact of AI and cyberspace.
Yet all year round in Geneva, experts meet under the auspices of the UN to discuss and negotiate existing treaties like the Chemical Weapons Convention (CWC), the Biological Weapons Convention (BWC), and the awkwardly named Convention on Certain Conventional Weapons (CCW).
This disarmament framework could be the umbrella under which to bring together all the overlapping issues around technological weaponisation – data security, national security, concepts of privacy and cybersecurity.
The disarmament community has an obligation to help countries negotiate norms and potentially new laws. The UN has multiple mechanisms through which this may take place.
Disarmament isn’t only about controlling the creation and distribution of weapons, it is also about creating common understandings, norms and laws to ensure that there are limits to how weapons are used – for the benefit of all.
And the risks of new technologies go beyond obviously dangerous weapons.
After all, computer technology was central to the efficiency with which Nazi Germany was able to identify Jews and massacre them in the Holocaust. Today, the emerging use of AI and facial recognition in areas like recruitment, security and welfare risks embedding biases and discrimination.
Disarmament provides an opportunity to unify efforts against the misuse of technology, but it requires strengthening so that all the relevant experts can be at the table when the international community, as well as industry, contemplates these issues.
Preventing great harm is what disarmament is still good for. The world urgently needs it.
Banner: A NATO cyber warfare exercise. Jaap Arriens/NurPhoto/Getty Images