May 6, 2016

We must confront the ethical implications of the powerful algorithms we are creating right now

Amido technical consultant Richard Slater argues that too many of us are too busy to consider the micro and macro ethical implications of the rise of algorithms.

By CBR Rolling Blog

AlphaGo, Google’s program which plays the ancient game of Go, beat Lee Sedol, one of the best human players, in the first two of five games scheduled in Seoul this week.

Go is interesting to computer scientists because of its complexity. Famously, there are more possible board positions than there are particles in the universe.
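To put that in context, here is a back-of-the-envelope calculation, sketched in Python (the particle count is the commonly cited rough estimate for the observable universe):

```python
# Crude upper bound on Go configurations: each of the 361 points on a
# 19x19 board is black, white or empty, giving 3^361 arrangements. The
# count of *legal* positions is lower (roughly 2.1 x 10^170), but both
# figures dwarf the ~10^80 particles in the observable universe.
positions_upper_bound = 3 ** 361
particles_in_universe = 10 ** 80

print(f"3^361 is roughly 10^{len(str(positions_upper_bound)) - 1}")  # ~10^172
print(positions_upper_bound > particles_in_universe)                  # True
```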

As a result, a Go-playing system can’t rely on brute-force computation, but instead uses "deep learning" technology, modelled on the human brain.
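By way of a toy illustration – nothing like AlphaGo’s real architecture, which pairs deep policy and value networks with Monte Carlo tree search – a learned evaluator replaces exhaustive search with a function that scores a position directly. Everything below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy position evaluator: one hidden layer mapping a flattened 19x19
# board (+1 black, -1 white, 0 empty) to a single "how good for black"
# score. The weights here are random and untrained; a real system
# would learn them from millions of example positions.
W1 = rng.normal(scale=0.01, size=(361, 64))
w2 = rng.normal(scale=0.01, size=64)

def evaluate(board: np.ndarray) -> float:
    hidden = np.tanh(board.reshape(-1) @ W1)  # 361 inputs -> 64 features
    return float(np.tanh(hidden @ w2))        # 64 features -> score in (-1, 1)

empty_board = np.zeros((19, 19))
print(evaluate(empty_board))  # 0.0 - no stones, so a neutral score
```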

Its success shows that, as the pace of Moore’s Law continues to slow, huge performance gains can now be achieved through better algorithms.

But with great power comes great responsibility. It’s my contention that we must confront without delay the ethical implications of the powerful algorithms we are creating – or suffer some profoundly challenging consequences.

Don’t get me wrong. I’m not saying algorithms are negative. Of course not. As I write this, economists, physicists and other scientists are developing algorithms to improve people’s lives in positive ways.

In China, the Central People’s Government has issued licences to a variety of businesses that are using browsing history, shopping behaviour and social relationships to help Chinese citizens build a meaningful ‘social credit’ score.


This non-traditional way of assessing the probability of someone defaulting on a loan will help strengthen a growing consumer finance economy.

Kreditech, a Hamburg-based fintech startup, is offering highly favourable loan rates based on borrowers’ behaviour on Kreditech’s website in the run-up to submitting a loan application. This data is then used to augment the traditional credit rating obtained from the big credit agencies, giving a truer picture of the risk of lending to that person.
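As a sketch of the general principle – the function, weights and numbers below are hypothetical, not Kreditech’s actual model – augmenting a bureau score with a behavioural signal might look like this:

```python
def blended_default_risk(bureau_score: float, behaviour_score: float,
                         weight: float = 0.3) -> float:
    """Combine a traditional credit-bureau score with a behavioural
    signal derived from on-site activity. Both inputs are assumed to
    be normalised probabilities of default in [0, 1]; the weight is
    an illustrative tuning parameter, not any real lender's value."""
    return (1 - weight) * bureau_score + weight * behaviour_score

# e.g. a clean bureau record but erratic behaviour during the application
print(blended_default_risk(bureau_score=0.05, behaviour_score=0.40))  # 0.155
```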

In the home, the so-called Internet of Things gives us the power to connect our front door to our house lights, and our calendar to our satellite navigation. Smarter homes will result in smarter energy consumption and increased personal safety.
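A minimal sketch of one such automation rule, with an invented device API standing in for whatever a real smart-home hub would expose:

```python
from datetime import datetime, time

class Lights:  # stand-in for a real smart-home device API
    def turn_on(self, room: str) -> None:
        print(f"lights on in the {room}")

def on_front_door_unlocked(now: datetime, lights: Lights) -> None:
    """Hypothetical rule: when the front door is unlocked after dark,
    switch on the hallway lights."""
    after_dark = now.time() >= time(18, 0) or now.time() <= time(6, 0)
    if after_dark:
        lights.turn_on("hallway")

on_front_door_unlocked(datetime(2016, 5, 6, 21, 30), Lights())
# -> lights on in the hallway
```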

But not all examples of the power of algorithms are so apparently clear-cut.

As you know, in California, state law has been amended to allow vehicles to be driven autonomously, albeit with a competent, licensed driver behind the wheel. As a result, car manufacturers such as Tesla have been fitting their cars with sensors that enable fully autonomous driving.

These software algorithms are taking decisions away from humans. So what happens when these decisions are a matter of life or death?

Say, for example, a car detects a child running out into the road and pre-emptively applies the brakes to avoid an accident. There is no question as to what the car should do in that situation.

But what if only by swerving out of the path of the child can the computer save his or her life? What if swerving would also result in death or serious injury to the driver of the car?

It will be the algorithms within the car that have to decide if the child or the driver survives. (So much for free will.) Now, it could be argued that most people would choose the survival of the child over the driver. However, if this was pre-programmed into the car, would vehicle sales suffer as a result?
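Stripped to its essentials, any such decision procedure reduces to something like the sketch below. The harm estimates and the equal weighting are placeholders, not anyone’s real policy – and choosing those weights is precisely the ethical question:

```python
def choose_manoeuvre(options: dict[str, dict[str, float]]) -> str:
    """Pick the manoeuvre with the lowest expected harm. Each option
    maps a party ('child', 'driver', ...) to an estimated probability
    of serious harm. Summing them equally encodes one ethical stance;
    weighting the child or the occupant more heavily encodes another.
    That choice of weights is the dilemma."""
    return min(options, key=lambda o: sum(options[o].values()))

# Illustrative numbers only:
print(choose_manoeuvre({
    "brake_straight": {"child": 0.9, "driver": 0.0},
    "swerve":         {"child": 0.1, "driver": 0.6},
}))  # -> "swerve" under equal weighting
```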

A similar thought experiment was first tackled in 1967 by the philosopher Philippa Foot. Her famous trolley problem showed there was no clear answer to this thorny moral dilemma.

So where does that leave us software developers? Should an algorithm imitate our human irrationality or should it simply weigh up the known quantities and summarily decide who survives and who dies?

Some dismiss this dilemma, arguing it is a false premise. There are too many "unknowns" for an algorithm to assess which outcome will occur for certain, they say.

But that’s a cop-out, in line with Google’s quiet shelving of its oft-stated commitment to "don’t be evil". They are missing the big picture.

The truth is, algorithms affect too many people for us to ignore their relationship to ethics.

Algorithms take data and learn from it to predict outcomes – whether that’s what will most likely happen in a car accident, how much a keyword will be worth to your online business, or which film you might want to watch next on Netflix.
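The Netflix case gives a feel for the mechanics. A toy nearest-neighbour recommender – the users, films and ratings below are made up – is enough to show data turning into a prediction:

```python
# Toy user-based recommendation: predict a user's rating for a film as
# the rating given by the most similar other user, where similarity is
# the smallest total rating gap on films both have seen. Real
# recommenders are far more sophisticated; the principle is the same.
ratings = {
    "alice": {"Heat": 5, "Se7en": 4},
    "bob":   {"Heat": 5, "Se7en": 5, "Up": 2},
    "carol": {"Heat": 1, "Se7en": 2, "Up": 5},
}

def predict(user: str, film: str) -> int:
    candidates = [u for u in ratings if u != user and film in ratings[u]]
    def distance(other: str) -> int:
        shared = set(ratings[user]) & set(ratings[other])
        return sum(abs(ratings[user][f] - ratings[other][f]) for f in shared)
    nearest = min(candidates, key=distance)
    return ratings[nearest][film]

print(predict("alice", "Up"))  # bob's taste is closest to alice's -> 2
```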

By making predictions about human behaviour they try to remove uncertainty. Many believe that makes them very dangerous indeed.

The reason is that uncertainty is at the very heart of society, family, law, contracts, even individual will. The unknowable is why we all came together in the first place.

So if you attempt to remove uncertainty – and, let’s be honest, to profit from doing so – it’s inevitable you will rub up against ethical considerations: questions about free will, competition, the nature of capitalism itself. And frankly, it’s naïve and possibly dangerous to believe otherwise.

I believe we in the algorithm business must consider these ethical questions right now, or those in power – and the people they represent – certainly will later. And we might not like the answers they come up with.

But make no mistake, these questions are of the most profound importance to us all.
