IT professionals say Prime Minister Rishi Sunak must make AI ethics a priority at the government’s AI Safety Summit, taking place in November. The new survey of tech professionals also revealed that respondents believe more organisations should publish their policies on digital ethics.
The research from the BCS, The Chartered Institute for IT, shows that the majority (88%) of those polled said that the UK needed to take an international lead on ethical standards in AI. More than 1,300 BCS members in the UK completed a questionnaire as part of the research in August ahead of the AI Safety Summit at Bletchley Park.
As reported by Tech Monitor, the summit will be attended by some of the world’s largest AI labs – OpenAI, Google DeepMind and Anthropic – as well as Microsoft and other Big Tech companies. Following concerns raised by industry insiders that the big players would set the agenda, the Department for Science, Innovation and Technology (DSIT) confirmed that it would work with techUK, The Alan Turing Institute and the Royal Society to host a series of talks, debates and events to hear from the wider industry, as well as to explore the use of AI in health and education.
AI ethics plays a large part in where talent chooses to work
Ahead of the summit, the BCS says more businesses need to publish ethics policies covering their use of AI and other ‘high-stakes’ technologies.
Gillian Arnold, president of BCS, said that hosting the AI Safety Summit will be the UK’s chance to build a global consensus on the ethical use of digital technologies: “That includes asking organisations to publish ethical policies on how they create and use tech,” she said.
The president also said that there needed to be safe whistle-blowing channels for experts working on AI in instances where they feel they’re being asked to compromise “their professional standards or discriminate against a section of society.”
Tech professionals who took part in the BCS research concur with Arnold: nearly all respondents (90%) said that a business’s reputation for the ethical use of technology is a key factor when deciding whether to work for or partner with it. Most respondents (81%) also wanted companies to hold credentials demonstrating their ethics through recognised professional standards.
Organisations should be required to publish ethics policies
In November 2021, Unesco published its Recommendation on the Ethics of Artificial Intelligence. Considered the first-ever global standard on AI ethics, the framework was adopted by all 193 member states.
The document laid out ten core principles in what the organisation called a “human rights approach” to AI. These included ‘proportionality and do no harm’, ‘right to privacy and data protection’, ‘fairness and non-discrimination’ and ‘human oversight and determination’.
However, 19% of the BCS respondents said that they had faced an ethical challenge in the workplace over the past year. The majority of the respondents (82%) agreed that organisations should be required to publish their ethical policies on their use of AI.
“The public needs to have confidence that AI is being created by diverse, ethical teams as it continues to weave itself into our life and work,” said Arnold. “Agreeing global standards of responsible computing is one way of building that trust.”
AI ethics standards need to be implemented across health and social care sectors
BCS members in the UK who responded to the poll also said they wanted to see ethical AI standards implemented more quickly across the health and care sector. Nearly a quarter (24%) chose this area, with 16% saying the priority should lie with defence, and criminal justice and banking each chosen by 13%. Education was selected by 12% of respondents.
Back in June, during London Tech Week, Education Secretary Gillian Keegan called for a better understanding of the use of AI in education. She launched a call for evidence on the risks and ethical considerations of AI in the sector, as well as on training for education workers.
The NHS published its guidance on using AI earlier in the year. It said that patients have most probably already encountered AI in healthcare, through virtual wards for example, while clinicians might use AI to support the analysis of brain scans or X-ray images.