A year ago today, OpenAI released research detailing GPT-3, the world’s largest natural language processing (NLP) AI model to date. Twelve months on, the technology has broken out of the lab and into the real world thanks to a growing community of users, and could be ready to usher in a whole host of use cases for NLP in business. But question marks remain as to whether upcoming AI regulation will hinder its usefulness.

GPT-3, or Generative Pre-trained Transformer 3 to give it its full title, is a deep learning AI system that OpenAI trained by feeding it text from millions of websites. When asked a question or given a prompt, it can produce detailed text responses of a quality comparable to that of a human writer. Its ability to write poetry and fiction has captured the attention of the public, but for businesses it could have more meaningful uses.

The most high-profile deployment of the technology so far was unveiled this week, when Microsoft, which licensed GPT-3 last September, revealed it is using the model as part of its Power FX low-code developer platform. The new feature allows developers to describe the function they want to build in natural language, with GPT-3 finding relevant Power FX code they can use.
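Microsoft has not published the internals of the integration, but the general pattern can be sketched with the public OpenAI Python client as it existed at the time: prompt GPT-3 with a plain-English request alongside a worked example, and treat the completion as a formula suggestion. The API key, the choice of the davinci engine, the example formula and the suggest_formula helper below are illustrative assumptions rather than Microsoft's actual implementation.

```python
# Illustrative sketch only: shows the natural-language-to-formula pattern using
# the OpenAI Completion endpoint of the period, not Microsoft's Power FX code.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: beta API access has been granted

def suggest_formula(request: str) -> str:
    """Ask GPT-3 to propose a Power FX-style formula for a plain-English request."""
    prompt = (
        "Translate the request into a Power Fx formula.\n"
        "Request: Show accounts created in the last 7 days\n"
        "Formula: Filter(Accounts, 'Created On' >= DateAdd(Now(), -7, Days))\n"
        f"Request: {request}\n"
        "Formula:"
    )
    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 model exposed through the API in 2021
        prompt=prompt,
        max_tokens=64,
        temperature=0.0,    # keep the suggestion as deterministic as possible
        stop=["\n"],        # stop once the single-line formula is complete
    )
    return response.choices[0].text.strip()

print(suggest_formula("Show open orders over 500 dollars sorted by date"))
```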

GPT-3 for businesses: more use cases are emerging

This will be a time-saver for experienced coders and lower the barriers to entry for new users, said Charles Lamanna, corporate vice president for Microsoft’s low-code application platform. “Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code,” Lamanna said. Such systems may just be the tip of the iceberg for NLP.

“The quality of the response text you got really skyrocketed between GPT-2 [the previous iteration of the system] and GPT-3,” says Wael Elrifai, an AI expert who was one of the beta users of GPT-3. “GPT-1 was the game-changer, and GPT-3 is an evolution of that. There’s been a huge amount of improvement.”

Indeed, what makes GPT-3 stand out from its predecessors is not the underlying technology but the sheer number of parameters it contains. The model has 175bn parameters, compared with GPT-2’s 1.5bn, while the second-largest NLP model in the world, Microsoft’s Turing NLG, has 17bn. That leap in scale translates into a markedly better quality of response. “Structurally GPT-3 is very similar to GPT-2, it really represents a breakthrough in performance,” says Ray Siems, CEO of AI consultancy Catalyst AI. “It’s taking an existing method and pushing it to new heights.”

Siems says the quality of its output means that GPT-3 could be a watershed for NLP. “A lot of the use cases for NLP have already existed for a while, but some of them weren’t necessarily commercially viable previously,” he says. “Now with GPT-3 they’re much more viable.”

Indeed, the past year has seen several start-ups emerge that are built entirely around GPT-3, such as Flowrite, which claims it can automatically generate emails and messages in the user's personal writing style, and Viable, which is designed to help businesses ask questions of customer data and gain useful insights.

Overall, more than 300 apps have been developed so far running on GPT-3, OpenAI says, with the system generating a hefty 4.5bn words a day. More use cases are likely to develop using GPT-3 as the basis for productivity tools, Elrifai says. "It's good for text completion and question and answer, and it's good for search, it can do context-aware search," he says. "That leads to a series of uses, the obvious one being chatbots. It's not a particularly sexy use case, but every interaction we have, be it by phone or email, could potentially be managed by a chatbot. Most of the GPT-3 projects at the moment relate to this."
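As an illustration of the chatbot and question-answering pattern Elrifai describes, the sketch below pairs a short context passage with a customer question and asks GPT-3 to complete the assistant's reply, again using the OpenAI Python client of the period. The company name, policy text and parameter values are invented for the purpose of the example.

```python
# Minimal question-answering/chatbot sketch against the GPT-3 Completion API.
# The support-bot framing and context passage are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

CONTEXT = (
    "You are a support assistant for Example Ltd. Refunds are available within "
    "30 days of purchase, and annual plans can be cancelled at any time."
)

def answer(question: str) -> str:
    """Return a context-aware answer to a customer question."""
    prompt = f"{CONTEXT}\n\nCustomer: {question}\nAssistant:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=100,
        temperature=0.3,        # low temperature keeps answers close to the context
        stop=["\nCustomer:"],   # prevent the model from writing the next customer turn
    )
    return response.choices[0].text.strip()

print(answer("Can I get my money back two weeks after buying?"))
```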

GPT-3 and Microsoft

Though its name may suggest otherwise, access to OpenAI's work is somewhat restricted. GPT-3 can be used by anyone via an API, but the only organisation apart from OpenAI that can see the code running under the hood is Microsoft, which purchased an exclusive licence to the technology last September. The two companies have long had a close relationship, with OpenAI using Microsoft's Azure cloud platform for the compute power needed to train GPT-3 and its other models, and Microsoft investing some $1bn in the San Francisco-based company to fund its research.

Microsoft's access to OpenAI's underlying code could help give it the edge over rivals such as Google and Amazon when it comes to providing AI-based tools for users of Azure, says Nick McQuire, chief of enterprise research at CCS Insight. Applying GPT-3 to Power FX is the first step in this process.

"By bringing together GPT-3 and Power FX, we are not only seeing the first phases of NLP at scale becoming more widely available, but Microsoft is also being much more aggressive in infusing some of its most advanced AI into key products like Power Platform to make life much easier for developers," McQuire said in an emailed statement. "NLP is arguably the hottest area of competition in AI at the moment and Microsoft’s steps here indicate that its partnership with Open AI is starting to pay off in terms of widening access and accelerating the speed of development."

 

For OpenAI, Microsoft's increasing involvement was born out of necessity, Elrifai believes. "I think OpenAI is still trying to figure out what it can do with this technology beyond chatbots without adding a crazy amount of risk into its business," he says. "A lot of people took the negative view that Microsoft stepped in only when they saw how valuable this technology was going to be, but I kind of think Microsoft took over because they saw the potential for unlimited liability. A 20 or 30 person shop like OpenAI can't operate at that level."

But Microsoft's control of the code could present a problem for businesses wishing to deploy GPT-3 in their organisations, says Adam Leon Smith, director of ForHumanity, a non-profit organisation that investigates the risks around AI use. "If you want to take AI seriously you really want control of your own IP," he says. "Things like GPT-3, which provide AI as a service via an API, are good for playing around with, but if you want to launch a business they are of limited suitability because you're not in full control [of the technology]."

Risks and limitations of GPT-3

Companies wishing to build tools based on GPT-3 have to operate under strict restrictions. "Everything you put out there using GPT-3 has to be vetted by OpenAI," Elrifai says. "Anything political or that involves generating social media output is flagged as a no-go, as well as things that relate to credit checks and even stuff around search engine optimisation. They use the term 'high stakes' and anything like that is disallowed."

These limitations are well justified. OpenAI famously decided against making the full code behind GPT-3's predecessor, GPT-2, available, fearing its text-generating capabilities could be used to create fake news and hate speech, though it later relented and published it, stating that it had not witnessed any significant misuse. But a research paper on GPT-3 released in January showed the model demonstrates an alarming anti-Muslim bias, making associations between Muslims and violence.

"While these associations are learned during pretraining, they do not seem to be memorised," the paper states. "Rather, GPT-3 manifests the underlying biases quite creatively, demonstrating the powerful ability of language models to mutate biases in different ways, which may make the biases more difficult to detect and mitigate."

The existence of such bias is part of the reason the European Union plans to introduce strict regulation around AI, Leon Smith says. When implemented, this could seriously hinder the ability of European businesses to use AI-as-a-service systems like GPT-3. "The draft regulations identify high-risk use cases, such as the use of AI in employment matters or credit decisions from banks where bias could creep in," he explains. "But their definition of AI is incredibly broad, it pretty much covers any statistical technique."

So-called high-risk systems would need to be registered with the authorities under the regulations, while the companies using them could incur significant extra liabilities. "If you deploy GPT-3, you'll probably deploy it under your own product name," says Leon Smith. "At that point, you would become responsible for everything under these regulations, and OpenAI's liability would be extremely limited. You will be obliged to undertake significant technical quality and risk management work, and be audited regularly. It's going to create a huge amount of regulatory risk for users and I think that will create a threat to this kind of [AI as a service] business model."

Beyond GPT-3: what does OpenAI have in store next?

Further legislation around AI is likely to develop in other markets too, but with the EU AI bill still a year or two away from being implemented, GPT-3 can continue to be deployed in businesses. OpenAI has yet to reveal any details about its next project, but Catalyst AI's Siems says he would not expect to see another exponential increase in the size of the model if GPT-4 is released in the near future.

"It remains to be seen how many more gains we can get out of the current systems or to what extent we need new fundamental breakthroughs to get to a greater level of commercial viability [for NLP]," he says. "With what's available now I think there will be a few years of new and innovative products which are useful at a commercial level. Some of the GPT-3 start-ups which are emerging at the moment will come to maturity and that will filter through to the wider market, and you'll see the potential for productivity gains across a lot of different sectors."