Governments in the UK, facing some of the most challenging political questions of a generation, do not typically have reputations as visionary thought-leaders. British politicians are even less likely to be found challenging Musk and Hawking for the ‘World’s Leading Futurist’ crown.
So what did we learn when the Chancellor delivered his Autumn Budget, announcing that he wanted to create “the most advanced regulatory framework for driverless cars in the world” and that “the government wants to see fully self-driving cars, without a human operator, on UK roads by 2021”?
Prediction is very difficult, especially about the future
We can safely presume that within the next decade we will see driverless vehicles on public roads, unleashed from their test environments. Uber recently announced plans to buy 24,000 autonomous cars from Volvo, while the Google-affiliated Waymo announced that its fully driverless cars have been driving around Arizona, without a safety driver at the controls, for months. This is industry validation that we are approaching the tipping point for publicly available driverless vehicles.
The focus is rapidly shifting from validating the capability of driverless vehicle technology to scrutinising the suitability of existing legislation to deal with it. The US and UK have seen plenty of theoretical ‘thought pieces’ on the broad issues raised by driverless vehicles (and artificial intelligence more generally). However, it is only recently that legislators have begun to fully recognise that these topics have evolved from abstract sci-fi debates into practical real-world issues.
So, where are we with the UK regulator’s approach to automated vehicles? Here we mean both fully autonomous vehicles, capable of being operated with little or no input by a driver, as well as automated technologies which support the operation of a vehicle by a driver.
In February 2015, the Department for Transport (DfT) published ‘A detailed review of regulations for automated vehicle technologies’, together with a ‘Summary report and action plan’, under the heading “The Pathway to Driverless Cars”. These documents set out the UK government’s plan to update laws and regulations to permit the sale of automated vehicles to the public. They included plans to develop a code of practice for testing automated vehicles, to review legislation to clarify liabilities in the event of a collision, and to consider whether higher standards of safety are required (including dealing with cyber threats).
Additionally, a draft ‘Vehicle Technology and Aviation Bill’ was announced during the Queen’s Speech in February 2017. It included proposed legislation specific to automated vehicles, relating to record keeping, insurance and accidents arising from uninstalled software updates. The Bill passed its second reading in October 2017.
The automated vehicle is often cited as a practical example in the legal debate surrounding artificial intelligence more broadly. Discussion on AI also focuses on issues around ethics and the concept of legal personality. The question of whether robots, and indeed other AI technology, should be granted ‘personhood’ status was raised by the EU Commission in January 2017, following a recommendation by its Legal Affairs Committee.
The law places a strong emphasis on ‘the person’, which drives concepts such as ownership and both civil and criminal liability. That concept initially attached to the human – people owning things, people committing crimes or entering into agreements. But we have seen our laws adapt, and in our modern world, we have stretched the concept of legal personality. We have created intangible entities – limited companies, PLCs, LLPs etc., which are all capable of ownership and liability in their own right.
This means they can enter into contracts, incur debt and be held accountable for their actions, and they are distinct from the identities of their shareholders, directors, and parent or subsidiary companies. In 2017, for environmental protection reasons, the Whanganui River in New Zealand was granted legal status, and an attempt was made to do the same for the Ganges in India. In October 2017 a robot called “Sophia” was granted citizenship by Saudi Arabia, triggering a wave of interesting discussions about the repercussions, such as whether Saudi robots now have more rights than Saudi women.
The law could be amended to give some form of legal status (and so responsibility/accountability) to driving technologies – as we already have a precedent for amending this legal concept. However, a key question is what are we trying to achieve in doing this?
This question points to the other current focus of regulators regarding automated vehicles and artificial intelligence: the underlying ethical principles which govern the operation of these tools.
The approach of both the UK and the EU has been to flag that reaching conclusions on the various ethical debates around AI and robots is fundamentally important. Indeed, in his November Budget, the UK Chancellor provided the further investment required to progress ethical think-tanks and their recommendations.
Questions such as “should the driverless vehicle choose the elderly pedestrian or the young family to crash into?” are now being debated in the public domain. Reaching conclusions on these questions, factoring in both public opinion and ongoing government-supported research, will allow us to shape the next phase of legislation. Clearly, with this revolutionary technology so close to being publicly available, we cannot wait too long for the legislation to catch up.
This article is from the CBROnline archive: some formatting and images may not be present.