A new alliance of professional and research organisations is aiming to deliver a set of professional standards for data scientists. If widely adopted, the framework could go a long way to ensuring those working on advanced AI and machine learning systems (AI/ML) do so in a way that mitigates the emerging technology’s risk to society. It could eventually lead to anyone unethically implementing AI being ‘struck off’, or banned from the profession, one expert told Tech Monitor.
The Alliance for Data Science Professionals has been formed by organisations including BCS, The Chartered Institute for IT, and the Alan Turing Institute, the UK's national institute for data science and AI, along with the Royal Statistical Society, the Institute of Mathematics and its Applications, and the National Physical Laboratory. It aims to set the standards “needed to ensure an ethical and well-governed approach so the public, organisations and governments can have confidence in how their data is used”.
The alliance aims to publish the initial standards this autumn. “Data science can be a powerful tool for businesses and governments,” said Stian Westlake, CEO of the Royal Statistical Society. “But just like established fields like engineering or medicine, it needs good standards to ensure it is used wisely and well. The alliance will play an important role in setting standards for those working in data science to help organisations make the most of cutting-edge new approaches, and so that we can all have confidence that our data is in good hands.”
Professional standards for data science: What will the new alliance do?
Data science and AI are high priorities for tech leaders. According to the Tech Monitor Tech Leaders Agenda 2021 research, both data analytics and AI/ML systems rank among the areas where the CIOs and CTOs polled expect investment to grow the fastest.
However, as this rapid business investment continues, confidence that new systems will be implemented ethically appears to be low. A Pew Research Center poll of more than 600 tech innovators and policymakers found that 68% do not believe ethical principles “focused on the public good” will be embedded in most AI systems by 2030. Providing a framework to hold those who work in the industry to account may be a step towards putting this right.
The new alliance wants to set standards expected “of people who work with data that impacts lives and livelihoods”. These could include data scientists, data engineers, data analysts and data stewards. The standards will be implemented by way of data science certifications issued by the alliance, which will also hold certified people and organisations accountable to ensure standards are met. A single searchable public register of certified data science professionals is also set to be created.
“The alliance’s members are ideally positioned to address ambiguities around data skills definitions, ensure the consistent application of standards across industries, and maintain these standards to accurately represent emerging skills needs,” said Matthew Forshaw, national skills lead at the Alan Turing Institute. “The alliance will play a significant role in establishing and upholding the professional values necessary to ensure ethical, fair and safe professional practices around data and AI.”
The professional viewpoint on standards in data science
The challenge for the alliance will be to gain widespread adoption of its standards across the UK and beyond. Various codes of conduct already exist for data scientists, but none has been widely accepted by the industry.
The framework could be a first step towards regulating who can work on AI systems, believes Adam Leon Smith, CTO of AI company Dragonfly. He welcomes the new initiative because, he says, it is currently “far too easy for an untrained person to upload a data set to a system like Google’s AutoML” — a cloud-based tool that allows users with limited AI skills to build machine learning models — and then potentially “implement an AI system that affects people’s rights and freedoms”.
The multidisciplinary nature of the alliance, which involves IT and tech organisations as well as those in mathematics and engineering, improves its chances of success, Leon Smith says. “The skills to mitigate the societal risks of AI such as machine learning bias are not exclusively technical, and it’s great to see such a diverse group of learned societies collaborating on this initiative,” he adds.
The alliance is taking inspiration from the type of regulation applied in medicine, and Leon Smith says those who use AI in an unethical manner could end up receiving harsh sanctions. “As well as setting standards for the knowledge of data professionals, this new scheme will regulate their ethical behaviour,” he argues. “Ultimately, professionals will be able to be ‘struck off’ for malpractice, much like in the medical profession.”