February 16, 2023 (updated 17 March 2023, 8:40am)

US government proposes guidelines for responsible AI use by military

With AI defence spending on the rise, Washington says it wants to ensure automated systems are deployed ethically.

By Matthew Gooding

The US government has proposed a set of guidelines for the way artificial intelligence (AI) and automated systems should be used by the military, and says it hopes its allies will sign up to the proposals. The news comes as attempts to regulate the use of AI in Europe appear to have hit a stumbling block, with MEPs having reportedly failed to reach an agreement on the text of the bloc’s upcoming AI Act.

AI is becoming an increasingly important weapon for governments. (Photo by Gorodenkoff/Shutterstock)

Unveiled at the Summit on Responsible AI in the Military Domain (REAIM 2023), taking place in The Hague, Netherlands, this week, the snappily titled Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy is the US’s attempt to “develop strong international norms of responsible behaviour” around the deployment of AI on the battlefield and across the defence industry more generally.

How the military can use AI responsibly

Artificial intelligence is an increasingly important part of defence strategies. In the US, the Department of Defense invested $874m in AI technology as part of its 2022 budget, while last year the UK Ministry of Defence unveiled its defence AI strategy, a three-pronged approach which will see it work closely with the private sector. The technology can be deployed as part of semi-autonomous weapons systems such as drones, as well as to support military planning and logistics operations.

This increased focus, and the growing power of AI systems, means governments have a responsibility to ensure such systems are used ethically and within the boundaries of international law. In 2020, the US government convened the AI Partnership for Defense, an initiative involving 100 officials from 13 countries looking at the responsible use of automated systems.

Today’s declaration, presented at the summit by Bonnie Jenkins, US undersecretary of state for arms control and international security, is being billed as a further attempt to gain commitments from governments about how AI technology will be used in the military.

“The aim of the declaration is to build international consensus around how militaries can responsibly incorporate AI and autonomy into their operations, and to help guide states’ development, deployment, and use of this technology for defence purposes to ensure it promotes respect for international law, security, and stability,” a spokesperson for the US Department of State said.


It consists of a series of non-legally binding guidelines describing “best practices for responsible use of AI in a defence context,” the spokesperson added. These include ensuring that military AI systems are auditable, have explicit and well-defined uses, are subject to rigorous testing and evaluation across their lifecycle, and that high-consequence applications undergo senior-level review and are capable of being deactivated if they demonstrate unintended behaviour.

“We believe that this Declaration can serve as a foundation for the international community on the principles and practices that are necessary to ensure the responsible military uses of AI and autonomy,” the spokesperson added.

EU AI Act hits the buffers?

Meanwhile in Brussels, development of the EU’s landmark AI Act, which will regulate automated systems across the continent, appears to have hit a snag.

It had been hoped that basic principles for the act, the text of which is expected to go before the European Parliament before the end of March, would be agreed at a meeting today. But after five hours of talks, no agreement had been reached, according to a report from Reuters, which cites four people familiar with the discussions.

The legislation is expected to take a “risk-based” approach to AI regulation, meaning systems which pose a high threat to the safety and privacy of citizens will face stringent controls, while more benign AI systems will be allowed to operate with few restrictions. There has been much speculation that generative AI chatbots like ChatGPT will be classed as high risk, meaning their use could be banned in Europe because of their ability to generate hate speech, fake news, and other dangerous material such as malware. EU commissioner Thierry Breton said last week the rules would include provisions for generative AI following the success of ChatGPT.

An EU source told Reuters that discussions are ongoing over the bill. “The file is long and complex, MEPs are working hard to reach an agreement on their mandate for negotiations,” they said. “However there is no deadline or calendar on the next steps.”

Once the text of the bill has been established, it must clear the European Parliament before going to EU member states, which can propose amendments to the legislation before it is made law.

Read more: This is how GPT-4 will be regulated
