A ground-breaking new report from the IEEE – the world’s largest technical professional organisation – published today sets out eight general principles for the design of ethical AI, or Autonomous and Intelligent Systems (A/IS).
The report aims to lay a framework to tackle concerns about the impact of AI – a term the report’s authors eschew in favour of A/IS – at the design stage.
The 294-page report, “Ethically Aligned Design (EAD1e), A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems” is the result of three years’ work involving thousands of global experts, the IEEE said, with engagement from academia, government, NGOs and industry around the world.
“A/IS can be an enormous force for good in society. But, in order for that to happen, these systems must be designed and utilised in such ways that they respect human rights, holistically increase well-being and empower all people inclusively,” said the chairman of the IEEE Global Initiative, Raja Chatila.
The Sorbonne Université professor added: “EAD1e is urgently needed to help policymakers, engineers, designers, developers and corporations to ensure that A/IS align with explicitly formulated human values.”
AI Has to Help Humanity Flourish
The report tackles eight key areas of A/IS design, applying classical ethics methodologies to considerations of algorithmic design, referring to Kant, African humanist philosophy, and Buddhist and Shinto traditions along the way.
The report says: “We need to have an open and honest debate around our explicit or implicit values, including around so-called ‘Artificial Intelligence’ and the institutions, symbols, and representations it generates.”
“Ultimately, our goal should be eudaimonia, a practice elucidated by Aristotle that defines human well-being, both at the individual and collective level, as the highest virtue for a society.”
“Translated roughly as ‘flourishing’, the benefits of eudaimonia begin with conscious contemplation, where ethical considerations help us define how we wish to live,” the report emphasises.
Ethical AI: Beware Anthropomorphism?
Warning of an “uncritically applied anthropomorphic approach toward A/IS”, the IEEE says this approach “erroneously blurs the distinction between moral agents and moral patients” and suggests that ethics needs to be taught essentially as part of an engineering process.
“The aim is to produce what is referred to in the computer programming lexicon as a macro. A macro is code that takes other code as its input(s) and produces unique outputs…”
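The report does not supply code, but one loose analogue of a “macro” in this sense is a higher-order function: code that takes other code as its input and produces new behaviour as output. The sketch below is purely illustrative, and every name in it is hypothetical, not drawn from the report.

```python
def with_audit_log(func):
    """Hypothetical 'macro': takes a decision function as input and
    returns a new function whose every call is recorded for review."""
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        # Record the call and its outcome so the decision is traceable.
        print(f"{func.__name__}{args} -> {result!r} (logged for audit)")
        return result
    return wrapper

@with_audit_log
def approve_loan(score):
    # Toy decision rule for illustration only, not a real policy.
    return score >= 650

approve_loan(700)   # the wrapped function logs the call, then returns
```

In Python this pattern appears as a decorator; in Lisp-family languages a true macro operates on source code itself before it runs. Either way, the ethical checks are woven into the engineering artefact rather than bolted on afterwards, which is the point the report is making.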
Principle 1: Human Rights Matter
The use of A/IS should not “infringe upon human rights, freedoms, dignity, and privacy”, and systems should remain traceable. Policy makers need “a way to translate existing and forthcoming legal obligations into informed policy and technical considerations.”
Principle 2: Well-Being is More than “Woo”
A/IS should prioritise human well-being as an outcome in all system designs, using the best available and widely accepted well-being metrics as their reference point.
Principle 3: Build AI to Support Data Agency
Enable individuals to own and fully control autonomous and intelligent (as in capable of learning) technology that can evaluate data use requests by external parties and service providers. This technology would provide a form of “digital sovereignty”.
Principle 4: Effectiveness
Metrics or benchmarks that will serve as valid and meaningful gauges of the system’s effectiveness in meeting its objectives, adhering to standards, and remaining within risk tolerances should be agreed at the design stage.
Principle 5: Design A/IS to be Transparent
“Develop new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined… “
The mechanisms by which transparency is provided will vary significantly, the IEEE notes, suggesting the following examples:
For users of care or domestic robots, a “why-did-you-do-that” button which, when pressed, causes the robot to explain the action it just took.
For validation or certification agencies, the algorithms underlying the A/IS and how they have been verified.
For accident investigators, secure storage of sensor and internal state data comparable to a flight data recorder or black box. (IEEE P7001, the IEEE Standard for Transparency of Autonomous Systems, is one such standard, developed in response to this recommendation.)
Principle 6: Accountability
“A/IS shall be created and operated to provide an unambiguous rationale for decisions made,” the paper urges. Responsibility, culpability, liability, and accountability for A/IS should be agreed, where possible, prior to development and deployment, it adds.
Principle 7: Awareness of Misuse
Both systems themselves and society at large should be programmed/prepared to identify and guard against potential misuse of A/IS.
Principle 8: Support Human Competence
Operators of A/IS… will not necessarily know the “sources, scale, accuracy, and uncertainty that are implicit in applications of A/IS” the report notes.
“Standards for the operators are essential. Operators should be able to understand how A/IS reach their decisions, the information and logic on which the A/IS rely, and the effects of those decisions,” the IEEE notes.
John C. Havens, executive director of The IEEE Global Initiative has high hopes for the report. He said today: “EAD1e is a catalyst to inspire its global readership to take action; it is a unique and groundbreaking achievement taking ethical implementation of A/IS from principles to practice.”
Whether industry or policymakers will pay a blind bit of notice remains to be seen. As a significant step toward a consensus standard for the ethical design of A/IS, however, the report is a welcome one.
This article is from the CBROnline archive: some formatting and images may not be present.