16 February 2024 (updated 20 February 2024, 7:30am)

BCS calls for public register of AI practitioners

A new report from BCS, The Chartered Institute for IT, recommends new ethical standards for IT professionals implementing AI systems.

By Greg Noone

BCS, The Chartered Institute for IT has called for a new ethical licensing structure for AI practitioners. In a new report on AI deployments, the organisation recommended that every technologist working in a “high-stakes IT role,” especially one that involves the deployment of new AI systems, should be signed up to an ethical code of conduct. The BCS also called for UK organisations to publish policies on the ethical use of AI for the benefit of both staff and the wider public – rules that should apply equally to senior leaders and IT experts.

“We have a register of doctors who can be struck off,” said the institute’s chief executive, Rashik Parmar. “AI professionals already have a big role in our life chances, so why shouldn’t they be licensed and registered too? CEOs and leadership teams, who are often non-technical but still making big decisions about tech, also need to be held accountable for using AI ethically. If this isn’t happening, the technologists need to have confidence in the whistleblowing channels available within organisations to call them out.”

A survey by the BCS has found that many IT professionals feel unsupported by their organisations when they flag ethical dilemmas that arise during AI deployments. (Photo by Shutterstock)

BCS AI report shows concerns about AI ethics within UK businesses

These core recommendations were issued by the BCS in response to findings from its 2023 ethics survey. The poll of 1,304 IT professionals found that 82% of respondents believed UK organisations should publish policies on the ethical use of AI. Almost a quarter thought the healthcare sector should lead in this area, while 19% of respondents said they had encountered ethical challenges during the deployment of a high-stakes technology system over the past year.

Among that latter group, 41% said they received no support from their organisation in resolving the ethical dilemma they encountered. Just over a third, meanwhile, had received ‘informal support’ in the form of conversations with their line manager or colleagues. “I was supported in discussing the potential concern with our customers,” said one anonymous IT manager quoted in the survey. “Our customers and my employer agreed to put in place controls to ensure the potential ethical situation was appropriately managed.”

Other organisations did not wish to hear anything at all about AI-related ethical quandaries their IT teams encountered, with 13% of respondents stating that they were threatened with disciplinary action or dismissal if they reported such concerns. “They reprimanded me for raising the issue,” one IT manager told the BCS, “and threatened my job if I did it again.”

Calls for UK government to lead on creation of ethical code of conduct

The survey also showed strong support for the creation of an ethical standards system for AI practitioners. Over half considered it very important for IT professionals to be able to demonstrate their adherence to such a framework through some form of accreditation system, compared to just 2% of respondents who considered such a proposal “not important at all.” 

The survey did not ask respondents whether the UK government should take the lead in establishing such a framework domestically; it asked only whether the UK should take a “global” lead on shaping ethical standards for AI deployments. However, Parmar argued that establishing such a framework would bolster the UK’s case for shaping norms around AI usage globally.


“By setting high standards, the UK can lead the way in responsible computing, and be an example for the world,” he said. “Many people are wrongly convinced that AI will turn out like The Terminator rather than being a trusted guide and friend – so we need to build public confidence in its incredible potential.”

In response to requests for comment, the Department for Science, Innovation and Technology said it was critical that “necessary guardrails” be imposed to ensure AI is harnessed safely and responsibly.

“The Post Office Horizon scandal was an appalling miscarriage of justice, and the ongoing inquiry is rightly investigating what went wrong so those affected can get the swift justice they deserve,” a departmental spokesperson said. “We’re investing over £100m to support AI regulation, research, and innovation and have established a central AI risk function within government to ensure we can respond to the risks and opportunities of the technology,” the spokesperson continued. “This complements the work of the UK’s expert regulators – many of whom are already taking action to understand and address AI risks in their domain.”

Read more: Regulators to define risks, rewards for UK AI
