
OpenAI adds persistent personality options to ChatGPT

New instruction sets will allow users to set a personality or tone of voice for ChatGPT. The feature can also be used to ensure the chatbot remembers key information.

By Ryan Morrison

OpenAI is rolling out a new feature that will allow ChatGPT users to set a persistent personality and instruction set for the chatbot. This will apply across all chat windows and could be used to ensure it always responds in a certain, consistent way. The company says it will be available as a beta feature to Plus subscribers in the US to start, but will roll out more widely in the coming months.

OpenAI says it removes personally identifiable information from instructions before using them to train the model (Photo by Camilo Concha/Shutterstock)

The ability to customise the personality of the AI is already available in the GPT-4 API that powers ChatGPT, and can be used by third-party developers when building their own apps and interfaces. For example, an enterprise developer could use the API to build a search interface for local documents and have it speak only in formal tones, setting restrictions on words and language to avoid.
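In API terms, this kind of persistent personality is usually set with a "system" message that is sent along with every request, which is the role ChatGPT's new custom instructions play in the consumer app. A minimal sketch of how a developer might pin a formal tone (the instruction wording, model name and helper function here are illustrative assumptions, not details from OpenAI):

```python
# Build a chat request payload whose system message fixes the assistant's
# tone on every call, so the model never "forgets" the instructions.

def build_request(user_message: str) -> dict:
    system_instructions = (
        "You are a formal document-search assistant. "
        "Always respond in a formal register. "
        "Avoid slang, contractions and speculative language."
    )
    return {
        "model": "gpt-4",  # illustrative model name
        "messages": [
            {"role": "system", "content": system_instructions},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("Find the Q3 compliance report.")
```

Because the system message is rebuilt into each payload, the constraint applies to every exchange without the user having to restate it, which is the repetition problem the new ChatGPT setting addresses.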

The company says it added this ability to ChatGPT in response to feedback from users during its recent worldwide tour. One example the company gave was of a teacher using ChatGPT to help create lesson plans, but having to repeat ‘I’m a third-grade science teacher’ every time they start a new chat. Other examples include giving information on your workplace, job description or even how many people are included in a user’s family.

OpenAI says the new settings can also be used to share expertise in a specific field to avoid unnecessary explanations each time you start a new conversation with the chatbot. Novelists could also use it to share character sheets, or feed it examples of previous writing so it responds in their own voice every time. “ChatGPT will consider your custom instructions for every conversation going forward,” a spokesperson explained. “The model will consider the instructions every time it responds, so you won’t have to repeat your preferences or information in every conversation.”

This is the latest in a line of new features added to the chatbot. In March, OpenAI unveiled plug-ins for ChatGPT that allow users to connect to outside data sources, including flight information, research papers and Wolfram Alpha. It also added processing capabilities such as generating graphs and curating datasets. After the success of the plug-ins release, OpenAI launched a browsing feature powered by Bing, though this has since been withdrawn due to unspecified issues.

Safety and privacy in ChatGPT

OpenAI says it has updated safety measures to consider new ways users can instruct the model, including ensuring that instructions don’t violate usage policies. The model has also been given the freedom to refuse or ignore an instruction if it would lead to responses that violate those policies. 

The instructions given to ChatGPT will be used to improve the performance of the model, including feeding back into future training. This happens by default but can be disabled via data controls, much like the conversations themselves. “We take steps to remove personal identifiers found in custom instructions before they are used to improve model performance,” a spokesperson said.


A survey in February by networking app Fishbowl found that 70% of workers were using ChatGPT at work, on company information, and not telling their employers. A more recent survey by global cybersecurity company Kaspersky found that 58% of employees were actively using ChatGPT to save time on everyday tasks at work. This could leave employers open to a variety of legal and compliance issues, warns web intelligence company Oxylabs.

“Despite their obvious benefits, we must remember that language model tools such as ChatGPT are still imperfect as they are prone to generating unsubstantiated claims and fabricating information sources,” warned Kaspersky data science lead Vladislav Tushkanov.

“Privacy is also a big concern, as many AI services can reuse user inputs to improve their systems, which can lead to data leaks,” he added. “This is also the case if hackers were to steal users’ credentials (or buy them on the dark web), as they could get access to the potentially sensitive information stored as chat history.” 

