Microsoft’s cloud platform Azure is making large language models available to US federal agencies for the first time. GPT-4 and other tools from ChatGPT creator OpenAI will be available on the Azure Government Cloud platform. Microsoft says it keeps all government data off the public internet and doesn’t use it to train models.

Microsoft already makes OpenAI’s large language models available commercially, now it is open to government users. (Photo: Tada Images/Shutterstock)

Moving OpenAI models to the government cloud required Microsoft to ensure they comply with the regulatory standards for classification and security. This includes meeting government requirements for managing sensitive data. 

To achieve this higher degree of security, Microsoft developed a new cloud architecture that allows federal agencies to access the large language models through REST APIs. Users will be able to adapt the models to specific tasks, including content generation, summarisation, semantic search and natural language coding, the company says.
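In practice, calling the Azure OpenAI Service over REST looks broadly like the sketch below. This is a minimal illustration only: the endpoint, deployment name and API version shown here are placeholders, and agencies on Azure Government would use the endpoints and versions specified in their own deployments rather than the values assumed here.

```python
import os
import requests

# Placeholder configuration; real endpoints, deployment names and API versions
# depend on how the agency's Azure OpenAI resource is set up.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]      # e.g. https://<resource>.openai.azure.com
deployment = os.environ["AZURE_OPENAI_DEPLOYMENT"]  # name given to the deployed GPT model
api_key = os.environ["AZURE_OPENAI_API_KEY"]

url = f"{endpoint}/openai/deployments/{deployment}/chat/completions"
params = {"api-version": "2023-05-15"}  # example version string
headers = {"api-key": api_key, "Content-Type": "application/json"}
body = {
    "messages": [
        {"role": "system", "content": "You summarise agency reports."},
        {"role": "user", "content": "Summarise this incident log in three bullet points."},
    ],
    "temperature": 0.2,
}

# Send the prompt to the deployed model and print the generated reply.
response = requests.post(url, params=params, headers=headers, json=body, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```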

“Developers can use Azure OpenAI Service to access pre-trained GPT models to build and deploy AI-enabled applications more quickly and with minimal effort,” explained Bill Chappell, Microsoft’s chief technology officer for strategic missions and technologies, in a blog post.

This includes accelerating content generation by automatically creating responses to project inquiries, which, according to Chappell, will “help reduce the time and effort required for research and analysis”. He says it will also allow development teams to streamline content summarisation by automatically generating summaries from logs, articles and reports.

Grounding and real-world data

The platform will allow government users to ground the model’s responses in their own data. Grounding data comes from trusted internal sources, so the model draws on real-world information in real time and is less likely to hallucinate answers.
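The basic pattern is simple: text retrieved from a trusted internal source is placed in the prompt alongside the user’s question, and the model is instructed to answer only from that material. The sketch below is a hypothetical illustration of that idea; the function name, system prompt wording and sample excerpt are all assumptions, not part of any Microsoft API.

```python
# Minimal sketch of prompt grounding: pair a user question with trusted
# reference text so the model answers from that material rather than memory.
def build_grounded_messages(question: str, grounding_text: str) -> list[dict]:
    """Return a chat message list that constrains the model to the reference text."""
    return [
        {"role": "system", "content": (
            "Answer only from the reference material below. "
            "If the answer is not in the material, say you do not know.\n\n"
            f"Reference material:\n{grounding_text}"
        )},
        {"role": "user", "content": question},
    ]

# Example usage with a placeholder excerpt standing in for an internal document.
excerpt = "Section 4.2: Requests for records must be acknowledged within 20 working days."
messages = build_grounded_messages(
    "How quickly must records requests be acknowledged?", excerpt
)
```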

Azure’s government cloud mirrors the commercial network, allowing traffic to be routed from the secure government cloud out to the Microsoft network and the wider internet while limiting exposure. This means government users can access the capabilities of the commercial network, but with added protections and security.

“Through this architecture, government applications and data environments remain on Azure Government,” said Chappell. “Only the queries submitted to the Azure OpenAI Service transit into the Azure OpenAI model in the commercial environment through an encrypted network and do not remain in the commercial environment. Government data is not used for learning about your data or to train the OpenAI model.”

For users requiring “limited access” with higher levels of data security, Microsoft says it will allow them to modify data logging and ensure that no part of the exchange, including prompts and completions, is stored.

The world is currently wrestling with how to regulate artificial intelligence, particularly large models like GPT-4. This is increasingly likely to include a degree of licensing or monitoring of the developers of these models, such as OpenAI. It is not yet clear what this will mean for government use cases or existing models.

Tech Monitor has reached out to the UK Cabinet Office to understand what rules govern the UK public sector’s use of generative AI models and how this sits within the G-Cloud framework, which allows companies to offer cloud services to government organisations through a digital marketplace.

A government spokesperson said: “Civil servants are encouraged to use emerging technologies that could improve the productivity of government. This must be done in a way that protects against any bias in AI and complies with all data protection and security protocols.”

Read more: Is UK government ready to abandon its approach to AI regulation?