A project that aims to reform patent law so that it recognises AI systems as inventors scored a victory this week, after South Africa approved the first ever patent for AI-created inventions. The Artificial Inventor Project argues that this removes an unfair disincentive for AI-powered invention – but critics worry that, should other jurisdictions follow suit, it could open the door to a new wave of intellectual property abuse.
Why South Africa approved an AI-invented patent
Stephen Thaler is an inventor who has developed an AI system, named DABUS, that uses neural networks to devise original inventions. Until now, however, applying for a patent required a human being to be listed as the inventor. This is an unfair disincentive to AI-powered invention, says Ryan Abbott, a professor at the University of Surrey’s School of Law who represents Thaler legally. “If a whole category of innovation that’s made by AI isn’t patentable, it means there’s a problem with patent law,” he argues.
If a whole category of innovation that’s made by AI isn’t patentable, it means there’s a problem with patent law.
Ryan Abbott, Artificial Inventor Project
To contest this requirement, Thaler and his collaborators on the Artificial Inventor Project submitted a patent application, with an AI listed as its inventor, in 17 jurisdictions. The application – which describes two inventions, a form of beverage packaging and an emergency light – was rejected by the US, UK and European intellectual property offices on the grounds that the inventor is not human.
“Every jurisdiction has accepted the claim that the AI acts as the inventor,” says Abbott. “But [the EU, US and UK] are saying that because [DABUS] cannot legally be the inventor, and because you need a legal inventor on a patent, [we] cannot get a patent.” The Artificial Inventor Project has appealed the UK IPO’s decision, with a ruling expected in October.
This week, however, South Africa’s Companies and Intellectual Property Commission approved the project’s application. Unlike some other jurisdictions, South Africa accepts the inventor designated on applications filed through the UN’s World Intellectual Property Organisation, which in this case identifies DABUS as the inventor. Thaler, who is the owner of the patent, could now seek legal redress against anyone selling the inventions in South Africa.
The significance of the decision is debatable. Abbott believes that it will put pressure on other jurisdictions to follow suit. “[It] creates a bit of a challenge to a globally harmonious IP system, where a patent like this might be protected in South Africa but not [elsewhere],” he says. “And it’s a signal to industry, as jurisdictions like the US, the UK, Europe and China are fighting to have an AI-friendly industrial policy, [that they’re] not granting protection.”

[Update: Since this article was published, a court in Australia overturned that country’s original rejection of the DABUS patent. A judge ruled that the decision “is consistent with promoting innovation.”]
But Imogen Ireland, an IP lawyer at global firm Hogan Lovells, says the legal significance of the South African approval will only become clear if the patent is contested in a court of law. “It’s very much ‘watch this space’, as opposed to a definitive departure from the law as we see it in England and Wales,” she says. “[It’s] perhaps not the time to get excited just yet.”
More significant for UK businesses is the government’s response to a public consultation on intellectual property law in light of AI, which was published in March, Ireland says. “The UK IPO pointed out that they are prepared to consult on a range of options to deal with this inventorship issue, including legislation to address the fact that AI tools cannot be listed as inventors,” she says. “And they went on to acknowledge that AI systems do have an increasing impact on the innovation process.”
This open-mindedness means that, even if the Artificial Inventor Project loses its appeal on the basis of the UK’s current laws, those laws may change in future, says Abbott. When it ruled against the DABUS application, the UK court’s “position was that we should allow parliament to conduct a consultation to decide whether and in what ways the law should change,” he says.
The dangers of AI inventorship
Not everyone is in favour of granting ‘inventorship’ status to AI systems, however. Christopher Markou, who studies the intersection of AI and law at the University of Cambridge, argues that there are tangible dangers to human inventors and their rights. “Maybe the patent regime is outdated and needs to be changed,” he says. “But the solution isn’t to say… ‘let’s just open up the floodgates so we can make it more convenient for the technical elite that can do this stuff’.”
Maybe the patent regime is outdated and needs to be changed. But the solution isn’t to say… ‘let’s just open up the floodgates’.
Christopher Markou, University of Cambridge
First, Markou says, an AI system could generate thousands of speculative inventions. If these could be patented, the patent owner could then sue anyone who accidentally infringes upon one of them. “What you get here is [an extension of] one of the worst problems of the existing IP regime: patent trolling, people hustling and trying stuff on speculatively to see what sticks.” (Abbott argues that there are already mechanisms to protect against patent trolling, and that the benefits of an increased number of useful inventions would outweigh this risk.)
Second, AI systems such as OpenAI’s GPT-3 have become highly adept at creating new output based on existing text, music or other forms of data. If AI systems are awarded IP rights over their output (of which ‘inventorship’ would be one example), then these artificial creations could be monetised without rewarding the original creators, Markou argues.
Ireland adds that decisions about the legal status of AI may have unintended consequences. If an AI can be the inventor of a patent, “what does that mean for other instances where AI is the active agent in, for example, an infringement or liability situation?”
This is why any significant change to the legal status of AI must be considered carefully, says Ireland. “Whatever we decide in the [IP] context, we have to be very clear about the limitations of that decision to make sure it doesn’t confuse the picture in other scenarios.”