When it comes to doing things right the first time around, the tech industry isn’t always the greatest. Accidentally doing something bad and quickly backtracking has become something of an industry standard.
Often these mishaps have no long-lasting effect other than a bit of egg on the face, but when it comes to potentially more dangerous technology, such as Artificial Intelligence, there really should be a greater sense of care.
Google DeepMind recently landed itself in hot water after it was provided with 1.6 million patient details by the Royal Free NHS Foundation Trust, a breach of the UK’s Data Protection Act.
In essence, DeepMind’s motives are honourable, as they typically are for most companies, but the execution lacked foresight. The company is far from alone on this front; just see the Microsoft Tay bot fiasco.
Now it would appear that the Google-owned company has learned its lesson, creating an ethics unit that will look to understand the real-world impacts of AI.
Called the DeepMind Ethics & Society group and comprised of external fellows and DeepMind employees, it will be led by Sean Legassick, a technology consultant and formerly the UK/EU policy manager for Google, and Verity Harding, a government adviser.
The company said: “At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes.
“Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work.”
Privacy, transparency, governance, morality and values, and topics such as the economic impact of AI will all be examined, with the findings published online.
Which is all well and good but shouldn’t this have been thought of and implemented BEFORE any problems occurred?
There’s a quote from Jurassic Park which goes: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Whilst creating dinosaurs and creating AI may be completely different things, both are steeped in science fiction, and most of it carries seriously negative connotations for the human race.
In 2016 the US tested a drone fitted with AI software, which proved able to accurately pinpoint a target without human intervention. AI that can lie has been created, AI has gone AWOL, and so on and so forth.
There is a chorus of experts much smarter than I who are concerned about the rise of AI and what it means for humans, and given how frequently tech companies naively act without thought for the repercussions, it’s easy to be worried.
Shutting the gate after the horse has bolted is no longer an acceptable plan of attack, and unfortunately it leaves many wondering: will AI kill us?