January 28, 2021 (updated 29 Jul 2022 10:07am)

Can ethical commitments contain the risks of military AI?

Commitments by Western military powers to the ethical use of AI will come into conflict with the race for global AI supremacy.

By Laurie Clarke

More than 100 officials from 13 countries met online in mid-September, ostensibly to discuss how their respective militaries could adopt ethical AI. The meeting was barely covered outside a handful of defence media outlets, but those that did cover it hailed it as unprecedented. Breaking Defense described it as “an extraordinary meeting that highlights how crucial artificial intelligence is becoming to the US and its allies”.

The use of AI by the military, and the ethical problems it poses, have received less attention than other areas of AI ethics. (Photo by Joe Raedle/Getty Images)

The meeting was hosted by the Pentagon’s Joint AI Center (JAIC), which was set up in June 2018 to coordinate AI-related activity across the Defense Department. Present on an invitation-only basis were NATO members Britain, Canada, Denmark, Estonia, France and Norway; non-NATO treaty allies Australia, Japan and South Korea; and de facto US allies Israel, Finland and Sweden.

Delegates kicked off a new AI Partnership for Defense that will “serve as a recurring forum for like-minded defence partners to discuss their respective policies, approaches, and challenges in adopting AI-enabled capabilities”, according to JAIC’s website.

Although a transcript of the meeting hasn’t been published, “ethics” was its stated focus. Mark Beall, then head of strategy and policy at JAIC (now a senior programme manager at Amazon), told Breaking Defense: “We’re really focused on, right now, rallying around [shared] core values like digital liberty and human rights… international humanitarian law.”

But another goal was evident too: the group is eventually envisioned as an international coalition for AI-driven warfare. Nand Mulchandani, JAIC’s chief technology officer, has stressed that the centre’s network of data partners offers an advantage for building AI systems at scale, and increases the chances of the US and its allies emerging as global leaders in AI.

Beall told Breaking Defense: “My personal goal for this forum is to create a framework for data sharing and data aggregation [and collaboration on] very powerful, detailed algorithms.”

The militarisation of AI

The use of AI by the military, and the ethical problems it poses, have received less attention than other areas of AI ethics. This is despite concerted moves by military forces around the world to develop and implement the technology. 


With the largest military budget in the world – totalling almost as much as the next eight countries combined – the US is leading the charge. Since 2014, the DoD has become increasingly invested in the concept of algorithmic warfare. US defence-related AI spending was estimated to total around $4bn in 2020.

Flagship projects like the $10bn Joint Enterprise Defense Infrastructure (JEDI) cloud project – which Amazon and Microsoft tussled over last year – are also driven by an AI imperative. The cloud computing project will connect US military personnel all over the world to a common system. Pooling data from different military systems, and training algorithms on that data, will help propel the US army into an automated future.

“This is not merely an AI issue,” US Army Lt. Gen. Jack Shanahan told Breaking Defense. “It is also about joint all-domain warfighting, taking advantage of emerging technologies to develop new operating concepts for a kind of warfare that will look completely different than what we’ve experienced for the past 20 years.” For now, though, delays in rolling out JEDI mean that JAIC is relying on a new Air Force programme, cloudONE, instead.

Following the US’s lead, the UK has begun investing more heavily in its own AI programmes. In 2018, the £1.8bn Modernising Defence Programme was announced, part of which involved the establishment of a flagship AI lab. The UK’s recently announced £16.5bn four-year defence spending plan is to be focused on investment in cutting-edge technologies that can “revolutionise” warfare, according to Prime Minister Boris Johnson. Of this, £5.8bn is directed towards military R&D, including the building of a new R&D centre focused on AI.

What are the current military applications of AI?

Killer robots and “slaughterbots” (lethal AI-driven drones) dominate the public imagination when it comes to military AI. In 2017, a group of 116 technology executives urged the United Nations to issue a blanket ban on autonomous weapons; their open letter described such weapons as the “third revolution” in warfare after gunpowder and nuclear arms.

David Danks, head of philosophy and psychology at the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University, says that it’s not clear that truly autonomous weapons systems will be launched any time soon. He believes that, in any case, there will be tremendous pressure to keep a human in the loop. “People get very nervous if they don’t know that there’s going to be a human to blame,” he says. “There are good psychological reasons that they respond that way.” 

One idea is to assign legal liability to whoever signs off on a particular weaponised AI system, with the threat of criminal sanctions if the system ends up causing inadvertent harm. “It enables militaries to maintain largely the same command and control architectures that they already have in place,” says Danks, who has advised the Defense Innovation Board (DIB), a US Department of Defense advisory body, on AI ethics. “And largely maintain the same legal systems for assigning culpability and responsibility when things go wrong.”

However, scholars such as Frank Pasquale, author of New Laws of Robotics: Defending Human Expertise in the Age of AI, have noted that those responsible for drone strikes that have levelled hospitals, schools and wedding parties have not faced legal action, casting doubt on whether drone system developers would face a realistic threat of consequences.

AI-powered military decisions

The more immediate uses of AI by the military are more prosaic. These include addressing questions such as, “How do you schedule logistics when you’re trying to move a battalion across an ocean? How should you do it efficiently?” says Danks. 

Intelligence, surveillance and reconnaissance (ISR) is the area of defence where AI is having the greatest impact, Danks explains. “It’s the use of AI to sift through the enormous quantities of data that are now being collected in this increasingly sensor-rich world, to try to have an understanding of what’s happening in the environment – whether it’s the local battlefield or broader global forces at work.”
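To make this concrete, a minimal sketch of the kind of “sifting” Danks describes might look like the following, written in Python with the scikit-learn library. The sensor readings, features and threshold are invented for illustration – real ISR pipelines involve far more sophisticated sensor fusion – but the basic pattern is the same: score a large batch of readings automatically and surface only the unusual ones for a human analyst.

    # Illustrative only: flag unusual sensor readings for human review.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)

    # Hypothetical readings of (signal strength, bearing, speed),
    # mostly routine traffic plus a handful of injected outliers.
    routine = rng.normal([0.5, 180.0, 30.0], [0.1, 40.0, 5.0], size=(5_000, 3))
    unusual = rng.normal([0.9, 20.0, 120.0], [0.05, 5.0, 10.0], size=(5, 3))
    readings = np.vstack([routine, unusual])

    # Train an anomaly detector and flag roughly the oddest 0.1% of rows.
    detector = IsolationForest(contamination=0.001, random_state=1).fit(readings)
    flags = detector.predict(readings)  # -1 marks an anomaly

    # Only the flagged rows go to an analyst, instead of all 5,005.
    print(f"{(flags == -1).sum()} of {len(readings)} readings flagged for review")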

Other data-heavy AI applications include processing hours of drone footage, and “predictive maintenance” programmes designed to signal when vehicle parts are likely to fail.
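Predictive maintenance, in particular, is a standard supervised-learning problem. The sketch below, again in Python with scikit-learn and using invented feature names and synthetic data, shows the basic pattern: train a classifier on historical sensor readings labelled with past failures, then rank parts by predicted failure risk so that crews can inspect the riskiest first.

    # Illustrative only: predict which vehicle parts are likely to fail.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1_000

    # Hypothetical features per part: engine hours, vibration, oil temperature.
    X = np.column_stack([
        rng.uniform(0, 5_000, n),   # engine hours since last service
        rng.normal(1.0, 0.3, n),    # vibration amplitude (arbitrary units)
        rng.normal(90, 10, n),      # oil temperature (Celsius)
    ])
    # Synthetic label: high-hours, high-vibration parts fail more often.
    y = ((X[:, 0] > 3_500) & (X[:, 1] > 1.2)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Rank held-out parts by predicted failure risk for inspection.
    risk = model.predict_proba(X_test)[:, 1]
    print("Five highest-risk parts:", np.argsort(risk)[::-1][:5])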

Rather than entirely automated systems, Danks predicts that “humans will be making decisions, but increasingly removed from the actual battlefield”. But the information they use to make those decisions will increasingly come from AI-powered analysis. This makes questions about the fairness of AI systems all the more pressing, as their outputs may help determine who lives and dies.

Establishment military theorists claim that AI has the potential to be more discerning and less fallible than humans. This is the reasoning underpinning the claim from a new report for Congress that the US has a “moral imperative” to develop AI weapons. Produced by the government-appointed National Security Commission on Artificial Intelligence (NSCAI), which is led by former Google chairman and military tech evangelist Eric Schmidt, the report argues the US should not agree to ban the development of autonomous weapons powered by AI software.

But critics point out that AI systems are not infallible. They are biased, limited by the quality of the data on which they are built, and have great potential for error. Pasquale notes that even the banning of lethal autonomous weapons has troubling implications in the context of the US’s ‘forever wars’. For example, the attempt to use data to find ‘legitimate’ targets – what the US army’s controversial Project Maven does – could end up becoming invasive surveillance of civilians in an effort to determine whether or not they are ‘combatants’.

And a sky crowded by surveillance drones that can deal out non-lethal actions blurs the boundaries of war. “Such a controlled environment amounts to a disturbing fusion of war and policing, stripped of the restrictions and safeguards that have been established to at least try to make these fields accountable,” Pasquale writes. 

Military thinking on AI ethics

In its embrace of AI technologies, the US military has also made a commitment to ethics – at least ostensibly. In early 2020, JAIC appointed its first head of AI ethics policy, Alka Patel, and around the same time, the DoD officially adopted a set of AI principles proposed by the Defense Innovation Board.

The principles are broad. They include commitments to being responsible (exercising appropriate levels of judgment and care over AI deployment); equitable (minimising the scope for unintended bias); traceable (developing AI capabilities such that relevant personnel possess an appropriate understanding of the technology); reliable (subjecting the safety, security and effectiveness of such capabilities to testing); and governable (maintaining sufficient oversight and the ability to disengage malfunctioning systems).

But these commitments could come into conflict with the US’s desire, evident in its 2019 AI strategy, to outflank China and Russia in the race for military adoption and development of AI. In a press release accompanying the strategy, the DoD’s chief information officer noted that Russia and China had made significant investments into AI, saying that the US and its allies “must adopt AI to maintain its strategic position to prevail on future battlefields and safeguard a free and open international order”. 

Indeed, the need for ethical AI is often explicitly pitted against the race with China. The NSCAI report, which contains 351 mentions of ‘China’, repeatedly claims that Beijing and Moscow are unlikely to be constrained by the need to develop AI ethically. When speed is prioritised above all else, such claims could provide useful cover for sacrificing ethics to keep pace.
