The US government has proposed a set of guidelines for the way artificial intelligence (AI) and automated systems should be used by the military, and says it hopes its allies will sign up to the proposals. The news comes as attempts to regulate the use of AI in Europe appear to have hit a stumbling block, with MEPs reportedly failing to reach agreement on the text of the bloc's upcoming AI act.
Unveiled at the Summit on Responsible AI in the Military Domain (REAIM 2023), taking place in The Hague, Netherlands, this week, the snappily titled Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy is the US’s attempt to “develop strong international norms of responsible behaviour” around the deployment of AI on the battlefield and across the defence industry more generally.
How the military can use AI responsibly
Artificial intelligence is an increasingly important part of defence strategies. In the US, the Department of Defense invested $874m in AI technology as part of its 2022 budget, while last year the UK Ministry of Defence unveiled its defence AI strategy, a three-pronged approach that will see it working closely with the private sector. The technology can be deployed as part of semi-autonomous weapons systems such as drones, as well as to support military planning and logistics operations.
This increased focus, and the growing power of AI systems, means governments have a responsibility to ensure such systems are used ethically and within the boundaries of international law. In 2020, the US government convened the AI Partnership for Defense, an initiative that brought together 100 officials from 13 countries to examine the responsible use of automated systems.
Today’s declaration, presented at the summit by Bonnie Jenkins, US undersecretary of state for arms control and international security, is being billed as a further attempt to gain commitments from governments about how AI technology will be used in the military.
Today, I announced the U.S. framework for a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The Declaration is a first step towards building international consensus on responsible State behavior in this area. https://t.co/hTgrJsZRND
— U/S of State for Arms Control & Int’l Security (@UnderSecT) February 16, 2023
“The aim of the declaration is to build international consensus around how militaries can responsibly incorporate AI and autonomy into their operations, and to help guide states’ development, deployment, and use of this technology for defence purposes to ensure it promotes respect for international law, security, and stability,” a spokesperson for the US Department of State said.
It consists of a series of non-binding guidelines describing "best practices for responsible use of AI in a defence context," the spokesperson added. These include ensuring that military AI systems are auditable, have explicit and well-defined uses, and are subject to rigorous testing and evaluation across their lifecycle, and that high-consequence applications undergo senior-level review and can be deactivated if they demonstrate unintended behaviour.
“We believe that this Declaration can serve as a foundation for the international community on the principles and practices that are necessary to ensure the responsible military uses of AI and autonomy,” the spokesperson added.
EU AI act hits the buffers?
Meanwhile in Brussels, development of the EU’s landmark AI act, which will regulate automated systems across the continent, appears to have hit a snag.
It had been hoped that basic principles for the act, the text of which is expected to go before the European Parliament before the end of March, would be agreed at a meeting today. But after five hours of talks no agreement had been reached, according to a report from Reuters, which cites four people familiar with the discussions.
The legislation is expected to take a "risk-based" approach to AI regulation, meaning systems that pose a high threat to the safety and privacy of citizens will face stringent controls, while more benign AI systems will be allowed to operate with few restrictions. There has been much speculation that generative AI chatbots such as ChatGPT will be classed as high risk, meaning their use in Europe could be heavily restricted or even banned because of their ability to generate hate speech, fake news and other dangerous material such as malware. EU commissioner Thierry Breton said last week that the rules would include provisions for generative AI, following the success of ChatGPT.
An EU source told Reuters that discussions over the bill are ongoing. "The file is long and complex, MEPs are working hard to reach an agreement on their mandate for negotiations," they said. "However there is no deadline or calendar on the next steps."
Once the text of the bill has been established, it must clear the European Parliament before going to EU member states, which can propose amendments to the legislation before it is made law.