Guidance for public sector organisations looking to use foundation AI models has been published by the Ada Lovelace Institute. The guidelines emerged days after it was reported that the civil service is using AI chatbots to replace humans. A union leader says that AI deployment within the public sector must be properly regulated to avoid discrimination against staff.
The Ada Lovelace Institute policy document, ‘Foundation models in the public sector’, examines the key considerations for deploying AI systems designed for a wide range of possible applications. It says the tasks these systems can perform include translation, generating rough first drafts of reports from sets of notes, and responding to queries from members of the public with text and images.
The institute’s mission is to ensure that data and AI work for people and society, and it has been vocal about the government’s approach to AI and ethics. As reported by Tech Monitor, it recently raised concerns that Rishi Sunak’s government was unnecessarily narrowing the scope of the upcoming AI Safety Summit to focus on future, or frontier, models, ignoring the threat potentially posed by the current generation of artificial intelligence systems.
The release of the briefing document comes as the BBC reported that AI-powered chatbots were being used to analyse lengthy reports, a job which would ordinarily be done by a civil servant. The Department for Education apparently ran a trial to boost productivity and it is hoped that the technology could be used across Whitehall.
However, the Public and Commercial Services Union (PCS) has told Tech Monitor that while there is “no objection in principle” to the rise of AI in the civil service, there needs to be regulation. The union says the new guidelines alone would not be enough.
The use of AI foundation models in government is inconsistent
In its guidance on how foundation models should be used in the public sector, the Ada Lovelace Institute says that the current use is inconsistent across government offices.
“There is evidence of foundation model applications, such as ChatGPT, being used on an informal basis by individual civil servants and local authority staff,” the document says. “Authorised use of foundation models in the public sector is currently limited to demos, prototypes and proofs of concept.”
However, the organisation says that there is “some optimism” in the public sector about the potential for the models to enhance public services, helping to cope with budgetary constraints and the growing needs of users. The proposed use cases for AI systems include automating the review of complex contracts and case files, catching mistakes and biases in policy drafts, providing real-time assistance through chatbots, and knowledge management.
AI foundation models come with risks, guidelines say
However, the guidelines warn of the risks associated with using AI foundation models, and advise public sector teams to improve their governance of these systems.
“Risks associated with foundation models include biases, privacy breaches, misinformation, security threats, overreliance, workforce harms and unequal access,” the Ada Lovelace Institute’s document says. “Public-sector organisations need to consider these risks when developing their own foundation models, and should require information about them when procuring and implementing external foundation models.”
The organisation also warns against an “over-reliance” on private sector providers, citing the risk that applications developed for the private sector may not align with public sector needs.
“Effective use of foundation models will also require organisations to consider alternatives and counterfactuals. This means comparing proposed use cases with more mature and tested alternatives that might be more effective or provide better value for money,” the document says. These decisions should be guided by the Nolan Principles of Public Life, which were established to guide the behaviour of those in public office.
Guidelines for AI in the public sector aren’t enough, says union
The Ada Lovelace Institute says there are steps public sector organisations can take when implementing governance for AI systems. These include regularly reviewing and updating guidance to keep pace with AI and other technologies, and setting procurement requirements to ensure that models developed by private companies uphold public standards.
It also advises piloting new use cases before wider rollout, so that risks and challenges can be identified, and recommends that data be held locally. Employees should also receive training in foundation models.
But PCS, a union representing public sector workers, warns that guidance alone isn’t enough, and that legislation and collective agreements need to be in place if AI is to be used.
“We have no objection in principle to AI being introduced into the civil service with agreement from trade unions,” Mark Serwotka, PCS general secretary, told Tech Monitor. “But if it is unregulated it could lead to greater discrimination and exploitation of staff. We need legal regulation of AI so the benefits are shared by workers.”
At a recent parliamentary committee hearing, the former HR chief of the Civil Service warned that trade unions and ministers would need to discuss the impact of AI replacing workers in the public sector.
Rupert McNeil, who was the Civil Service’s chief human resources officer (CHRO) for six years, told the Public Administration and Constitutional Affairs Committee: “I think that’s the way some very tricky issues that will need to be faced over the next two decades will be best addressed.
“Something I certainly said when I left [was to] start having these conversations now about the inevitable workforce reductions that will be necessary because of AI.”