January 31, 2022 (updated 7 June 2022)

AI ethics in action: How Merck created its own code of digital ethics

Drawing on its experience in bioethics, the German research giant is among the first to design and implement its own ethical charter for data and AI.

By Pete Swabey

The advent of AI has been accompanied by a growing awareness of digital technology’s potential to cause harm, whether through discrimination or violations of privacy. Policymakers are still debating how to address these harms but, in the absence of laws, some institutions have taken an ethics approach, defining principles of ethical behaviour and committing to uphold them – with varying degrees of credibility.

Merck Group, a German science and technology company best known for its work in pharmaceuticals, is among the few organisations to have developed and implemented their own digital code of AI ethics. Drawing on its experience in bioethics, Merck has defined a set of ethical principles to guide digital innovation, appointed a digital ethics advisory panel, and is now putting the code into practice. It is an approach to technology governance that others may choose to follow, even as regulation bites.

Merck set up a bioethics advisory panel to address ethical questions around stem cell research and now the company is tackling AI and digital ethics. (Photo by © Merck KGaA, Darmstadt, Germany)

Developing a code of AI ethics at Merck

Back in 2011, Merck established a Bioethics Advisory Panel to help answer ethical questions on its use of stem cells. This panel draws on established bioethics principles, such as Beauchamp and Childress’s Principles of Biomedical Ethics, to guide Merck’s innovation in areas that present ethical risks, such as genome editing.

An ethics-based approach allows companies to reduce the risk of harm when regulation has yet to catch up with technology, ethicists at Merck recently wrote. “Moreover, many ethical ‘should’ questions go beyond the scope usually provided in legal regulations, which provide practitioners mainly with answers to ‘could’ questions.”

Ethical questions have since emerged around the use of data and AI in medical research, which often incorporates highly sensitive patient data. For Merck, these questions came to a head in 2019, when it began developing digital health solutions, including Syntropy, a cancer research joint venture with controversial US data analytics provider Palantir. “We thought, ‘Maybe we need to get an ethical framework for this new kind of business model and collaboration’,” recalls Jean-Enno Charton, Merck’s director of digital ethics and bioethics.

At first, the company consulted its bioethics panel. Its response was twofold: first, that digital ethics requires specialist expertise; and second, that Merck’s priority should be to foster trust among patients and other external stakeholders. “You need to combat the mistrust that has accumulated” around digital technology, Charton explains.

Merck struggled to find digital ethicists, however. The company appointed a Digital Ethics Advisory Panel, consisting of experts in technology, regulation and other relevant fields, but decided an ethical framework was needed to guide the panel’s work. “The panel needs some idea of what is ethical or not,” Charton says.


To do this, Charton and his team analysed many of the ethical AI frameworks that have been developed by regulators, trade bodies and other institutions, settling on 42 that they considered relevant to Merck. Interestingly, Merck focused only on frameworks originating in Europe – in part to save time, Charton says, but also because “European discussion on the ethics of data and AI is much more advanced”.

Transparency is the only principle that was mentioned in all of the digital ethics frameworks we analysed.
Jean-Enno Charton, Merck’s director of digital ethics and bioethics

From these frameworks, Merck extracted five core ethical principles for digital innovation. Four of these – justice, autonomy, beneficence (doing good), and non-maleficence (not doing harm) – correspond to the core principles of bioethics. The fifth – transparency – has particular salience in the context of digital ethics, Charton explains. “Transparency is the only principle that was mentioned in all of the digital ethics frameworks we analysed, so it’s clearly a major issue in its own right,” he says. “It’s all about creating trust, which is the biggest issue in digital ethics.”
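
To see what this kind of analysis involves in practice, here is a minimal sketch in Python. The frameworks and principle labels are invented for illustration; Merck’s actual dataset of 42 frameworks is not public.

```python
# Hypothetical sketch: tallying how often each ethical principle appears
# across a set of published AI ethics frameworks. The framework contents
# below are invented for illustration.
from collections import Counter

frameworks = {
    "Framework A": {"transparency", "justice", "autonomy"},
    "Framework B": {"transparency", "beneficence", "non-maleficence"},
    "Framework C": {"transparency", "justice", "non-maleficence"},
}

counts = Counter(p for principles in frameworks.values() for p in principles)

for principle, n in counts.most_common():
    print(f"{principle}: mentioned in {n}/{len(frameworks)} frameworks ({n / len(frameworks):.0%})")

# A principle mentioned in every framework (here, transparency) is a strong
# candidate for inclusion as a core principle in the resulting code.
```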

These core principles, and their component values, were then translated into a series of guidelines. These are designed to be applicable in any context, and to be understood by all – examples include “We stand up for justice in our digital offerings” and “We assign clear responsibilities for our digital offerings”. These guidelines form Merck’s Code of Digital Ethics (CoDE).

Putting AI ethics into action: implementing the Merck CoDE

An ethics code is worth nothing if it is not upheld and acted upon within an organisation, however.

Charton started the rollout of CoDE in 2020 by getting senior decision-makers in data and digital functions on board. “The first thing I had to do was convince all our stakeholders that this is something they need,” he recalls. “Some people contested it; some of them needed convincing that this wasn’t going to be an innovation blocker. We explained how bioethics has helped the company, by setting a framework for innovation that helps you think about the consequences of your work.”

Next, Charton took the CoDE to Merck’s board of directors. “I knew there was no way around getting this approved by our board directors,” he says. With senior stakeholders already on side, the directors found support when they consulted their own teams, and the CoDE won approval.

As a result, the CoDE became one of only four ‘charters’, the highest status of document at Merck. This means it applies to all employees, and that it can be discussed publicly to promote transparency and accountability.

Charton has established a process for employees to flag projects that may present ethical risks. Once flagged, a project will undergo a ‘principles at risk’ exercise – a checklist of questions that examine whether it risks breaching the CoDE. If so, it is reviewed by the Digital Ethics Advisory Panel, which will provide guidance to the business owner of the project.
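
As a rough sketch of how such an exercise might be structured (the questions, their mapping to principles and the escalation rule below are all hypothetical; Merck’s actual checklist is not public), each question can be tied to one of the CoDE’s five principles, with any ‘yes’ answer flagging the project for panel review:

```python
# Hypothetical 'principles at risk' checklist: yes/no questions, each tied
# to one of the CoDE's five principles. Questions are invented examples.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    principle: str  # justice, autonomy, beneficence, non-maleficence or transparency

CHECKLIST = [
    ChecklistItem("Could the system treat groups of users unequally?", "justice"),
    ChecklistItem("Does the project limit a person's ability to make informed choices?", "autonomy"),
    ChecklistItem("Could the system's outputs cause harm if the model is wrong?", "non-maleficence"),
    ChecklistItem("Would users be unable to understand how decisions about them are made?", "transparency"),
]

def principles_at_risk(answers: dict[str, bool]) -> set[str]:
    """Return the set of CoDE principles flagged by 'yes' answers."""
    return {item.principle for item in CHECKLIST if answers.get(item.question)}

# A project owner answers the checklist; any flagged principle sends the
# project to the Digital Ethics Advisory Panel for review.
answers = {CHECKLIST[0].question: True, CHECKLIST[3].question: False}
risks = principles_at_risk(answers)
if risks:
    print(f"Escalate to advisory panel; principles at risk: {', '.join(sorted(risks))}")
```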

Undergoing this process is not mandatory for all projects, lest it become a box-ticking exercise. “If we were to make it a mandatory part of the development process, we would lose people,” Charton says. This means the process depends on the awareness and engagement of stakeholders. “I make sure everyone knows who I am and that they can talk to me about ethical concerns.”

Nor is the Digital Ethics Panel’s advice binding. “A business leader can ignore it, but they would have a very hard time justifying their decision to do so,” Charton explains. If a project is found to have breached the CoDE retrospectively, “there will be an internal learning process to make sure it never happens again”.

Now, Charton is developing basic training on the CoDE for all employees, and a dedicated course for teams working with data and algorithms. He is also exploring the possibility of automating ethics risk assessment. He has developed ‘Digital Ethics Checkpoints’ that can be applied to new products in development, and is now examining how they might be integrated with Palantir’s Foundry data platform, which Merck uses for analytics, so that ethical risks can be flagged automatically and proactively.
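
What an automated checkpoint could look like is sketched below. This is an assumption-laden illustration, not Palantir Foundry’s actual API or Merck’s tooling: the idea is simply that each checkpoint inspects a project’s metadata and emits a flag when a CoDE principle appears to be at risk.

```python
# Hypothetical automated 'Digital Ethics Checkpoint' running inside a data
# pipeline. Checks and metadata fields are invented; this does not represent
# Palantir Foundry's API.
from typing import Callable

DatasetMeta = dict  # e.g. {"contains_patient_data": True, "consent_documented": False}

def check_sensitive_data(meta: DatasetMeta) -> str | None:
    if meta.get("contains_patient_data") and not meta.get("consent_documented"):
        return "non-maleficence: patient data used without documented consent"
    return None

def check_explainability(meta: DatasetMeta) -> str | None:
    if meta.get("drives_decisions") and not meta.get("model_explainable"):
        return "transparency: decision-making model is not explainable"
    return None

CHECKPOINTS: list[Callable[[DatasetMeta], str | None]] = [
    check_sensitive_data,
    check_explainability,
]

def run_checkpoints(meta: DatasetMeta) -> list[str]:
    """Run every checkpoint; return the list of flagged ethical risks."""
    return [risk for check in CHECKPOINTS if (risk := check(meta)) is not None]

flags = run_checkpoints({"contains_patient_data": True, "drives_decisions": True})
for flag in flags:
    print("ETHICS FLAG:", flag)  # in practice, this would notify the ethics team
```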

Next steps in putting AI ethics into practice

AI ethics frameworks backed by governments or industry bodies have become widespread, says David Barnard-Wills, senior research manager at Trilateral Research, a consultancy focused on technology’s social impact. A few years ago, one of the company’s projects identified 70 such frameworks; more recent studies have found hundreds.

Ethics codes for individual organisations are less common. The existing frameworks tend to cluster around the same core principles, Barnard-Wills explains, so developing these for a single organisation might be wasted effort. But “I can see why you would want to develop [a code] yourself because you can make it very specific to your business and the issues it faces,” he says.

Senior management buy-in is essential for the success of any internal ethics initiative, Barnard-Wills explains. “If you don’t have a leadership culture that champions [the code] and makes difficult decisions with reference to it, or if it’s constantly overridden in the pursuit of profit, then the code is meaningless.”

You can think of a code of ethics like any business change process… there have to be organisational roles and responsibilities.
David Barnard-Wills, Trilateral Research

“You can think of a code of ethics like any business change process,” he adds. “It’s not just a statement or a vision, there have to be organisational roles and responsibilities.”

While Merck has so far stopped short of making ethics risk assessments mandatory for all projects, Trilateral Research is examining ways to embed ethics in software development. These include appointing ethicists to development teams, and including ethical considerations in the requirements capture phase.

A future step could be an “ethical design repository”, says Barnard-Wills. “Design choices at a product or feature level can make a huge [ethical] difference, right down to how some text is presented to a user,” he says. The way a privacy policy is presented, for example, can determine whether an individual can be said to have truly consented to their data being used in a certain way. A repository of worked examples could help developers embed ethical design quickly and easily, Barnard-Wills suggests.
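
A minimal sketch of what such a repository could look like, under the assumption that it maps design patterns to worked examples (the patterns and entries below are invented; Barnard-Wills describes the repository only as a possible future step):

```python
# Hypothetical 'ethical design repository': worked examples of ethical design
# choices, keyed by the UI or feature pattern a developer is building.
DESIGN_REPOSITORY = {
    "consent-dialog": {
        "principle": "autonomy",
        "guidance": "Present the privacy policy in plain language before any data is collected.",
        "worked_example": "Layered notice: one-sentence summary, expandable detail, explicit opt-in.",
    },
    "recommendation-feed": {
        "principle": "transparency",
        "guidance": "Explain why an item was recommended and offer a way to adjust it.",
        "worked_example": "'Recommended because you viewed X' label with a 'not interested' control.",
    },
}

def lookup(pattern: str) -> dict | None:
    """Return the worked example for a design pattern, if one exists."""
    return DESIGN_REPOSITORY.get(pattern)

entry = lookup("consent-dialog")
if entry:
    print(f"[{entry['principle']}] {entry['guidance']}")
```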

If the current wave of AI ethics frameworks reflects a ‘regulatory gap’, will such frameworks become redundant when regulation such as the EU’s AI Act comes into force? Barnard-Wills thinks not. “Ethical commitments will always be important,” he says, “because the law will never cover every edge case.”

For Merck, the development of the CoDE is just the beginning of its digital ethics initiative. One reason for making the document public, says Charton, is so it can be debated openly. “We want people to read it and talk about it, and if they point out something that’s wrong, we can update it,” he explains. “This is not set in stone – ethics is always following progress.”
