Microsoft-backed AI research company OpenAI is launching a version of its popular natural language AI platform ChatGPT for enterprise users “in the coming months”. The business-friendly version of the tool will have stronger privacy protections for user data. In an attempt to get ahead of complaints from data regulators, OpenAI has also launched a suite of new privacy protections within ChatGPT, including the ability to stop a chat from being used to retrain the model.
Exact details of the business version of ChatGPT, including pricing, additional features or control over guardrails, haven’t been revealed. Tech Monitor has asked OpenAI for more information. On data privacy, it is expected that the enterprise version will be governed by rules similar to those already applied to use of the API in a business environment.
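For context, business use of OpenAI’s models today typically goes through its API, where, under OpenAI’s stated API data-usage policy, submitted data is not used for model training by default. The sketch below only illustrates the shape of a request to the existing Chat Completions endpoint that those governance rules cover; the enterprise product’s actual interface has not been announced, and the model name and message content here are illustrative.

```python
import json

# Endpoint of the existing Chat Completions API; the enterprise offering's
# interface is not yet public, so this is only a sketch of current API usage.
API_URL = "https://api.openai.com/v1/chat/completions"


def build_chat_request(user_message: str, model: str = "gpt-3.5-turbo") -> str:
    """Serialise a minimal chat request body as JSON."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)


# The resulting body would be POSTed to API_URL with an
# "Authorization: Bearer <key>" header; data sent via the API is,
# per OpenAI's policy, not used for training by default.
body = build_chat_request("Summarise our Q3 sales notes.")
```

The point of the sketch is simply that API traffic, unlike the consumer web app historically, is already opted out of training by default, which is the governance model the enterprise version is expected to mirror.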
Since its launch in November 2022, ChatGPT has become a vital tool for people working across a range of industries. It is one of the fastest-growing consumer apps in history, reaching hundreds of millions of monthly active users within a few months of launch and prompting major companies to launch AI tools of their own.
In part due to this rapid success, it has come under intense scrutiny over the way data is handled in both the training and content generation process. Italy’s data protection watchdog ordered OpenAI to stop processing Italian user data until it complies fully with GDPR. This effectively caused the tool to be “blocked” in the country. Other EU countries are considering similar actions and the US is exploring ways to regulate large language model AI tools.
Being found in breach of GDPR legislation would be bad news for OpenAI, as the company could face significant fines and restrictions on the use of EU user data in training or operating its models.
ChatGPT’s incognito mode
To get ahead of this, the company has unveiled a range of new tools within the web app that give users more control over their data and how it is used. This includes a new “incognito mode” that turns off chat history for a particular conversation. “Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar,” OpenAI declared.
Disabling chat history is available within the settings option inside ChatGPT and “provides an easier way to manage your data than our existing opt-out process,” the company wrote. Even so, the data is not deleted immediately: “when chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting.”
The large language models behind ChatGPT, including GPT-3.5 and GPT-4, are trained on a large corpus of text including publicly available content from platforms like Wikipedia, licensed content paid for by OpenAI and content created by human reviewers to fill gaps in the training data. “We don’t use data for selling our services, advertising, or building profiles of people – we use data to make our models more helpful for people,” OpenAI declared in an FAQ on the new Data Controls.
One example of this is the use of conversations people have with ChatGPT to further train the underlying large language models. Previously, users had to fill in a form to request that chat history be disabled, and the change had to be made manually by OpenAI staff. The new settings put that control back in the hands of users, to turn on and off as required.
When the new business version is launched, using conversations to train models will be disabled by default. The idea, says OpenAI, is to create a subscription model for professionals who need more control over their data and enterprises wanting to manage accounts for multiple users.