February 17, 2023 (updated 9 March 2023, 9:45am)

OpenAI to allow ChatGPT customisation

OpenAI has struggled with its generative AI tool showing political bias, and hopes to address this through better fine-tuning and clearer guidelines for its human reviewers.

By Ryan Morrison

OpenAI is upgrading its massively successful chatbot ChatGPT to make it more customisable for users, allowing them to determine how creative, cautious or aggressive it is in its responses to user queries. The company also plans to improve fine-tuning to mitigate bias in the system. One AI expert told Tech Monitor that customisation and bias mitigation are vital if ChatGPT is to be a viable business tool in the future.

ChatGPT has been accused of political bias, which OpenAI says it will address through more fine-tuning. (Photo: Iryna Imago/Shutterstock)
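OpenAI has not detailed how these controls will be surfaced in ChatGPT itself, but in its existing API the usual dial for “creative versus cautious” output is the sampling temperature. The sketch below is illustrative only: it assumes the OpenAI Python client and the gpt-3.5-turbo chat endpoint, not any announced ChatGPT customisation feature.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Suggest a name for a budgeting app."

# Low temperature: more cautious, repeatable answers.
careful = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)

# High temperature: more creative, less predictable answers.
creative = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=1.2,
)

print(careful.choices[0].message.content)
print(creative.choices[0].message.content)
```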

As well as announcing plans to offer ChatGPT customisation, OpenAI reportedly spent $11m on the ai.com domain name to house its large language model-powered chat application. The move comes amid rumours of a dedicated mobile app and a new monthly “ChatGPT Pro” subscription.

OpenAI says it has worked to mitigate political biases in the current system but wants to allow it to accommodate, and be trained towards, a diverse range of views. Writing in a blog post, the company said: “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” adding that there will “always be some bounds on system behaviour”.

This fix comes down to training, fine-tuning and then retraining, the company says. “Many are rightly worried about biases in the design and impact of AI systems. We are committed to robustly addressing this issue and being transparent about both our intentions and our progress.”

To further explain this position, OpenAI shared a portion of the guidelines its human reviewers have to adhere to when working to fine-tune the dataset behind services like ChatGPT. This includes ensuring anyone working in the feedback loop does not favour any political group.

“We’re always working to improve the clarity of these guidelines – and based on what we’ve learned from the ChatGPT launch so far, we’re going to provide clearer instructions to reviewers about potential pitfalls and challenges tied to bias, as well as controversial figures and themes,” OpenAI wrote in a blog post.

Sharing reviewer information

OpenAI also plans to share aggregated demographic information about its reviewers, where it can do so without violating privacy rules, as a demographic imbalance in the reviewer pool could introduce additional bias. This is all part of wider efforts to make the fine-tuning process more understandable and controllable – which is where the customisation options will eventually come into play.


First, OpenAI will improve the default behaviour of its chatbot so that it respects a standard set of values “out of the box”, including through research and engineering to remove “glaring and subtle biases in how ChatGPT responds to different inputs”.

Once the “out of the box” system has been fine-tuned, OpenAI will allow users to define AI values for ChatGPT customisation within broad bounds. “We believe that AI should be a useful tool for individual people, and thus customisable by each user up to limits defined by society. Therefore we are developing an upgrade to ChatGPT to allow users to easily customise its behaviour.”
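OpenAI has not said what form these user-defined “AI values” will take. The closest analogue in its current chat API is the system message, which steers the model’s persona within the provider’s own hard limits. The sketch below is a hypothetical illustration: the values profile and the support scenario are invented for this example, not taken from OpenAI’s announcement.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical "values" profile a business might define for its deployment.
brand_values = (
    "You are a customer support assistant for a financial services firm. "
    "Be cautious, avoid speculation and political opinions, and point users "
    "to a licensed adviser for anything that counts as regulated advice."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": brand_values},
        {"role": "user", "content": "Should I move my pension into crypto?"},
    ],
)
print(response.choices[0].message.content)
```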

The customisation will be restricted to prevent malicious use, and there will be “hard bounds” determined by public input rather than internally at OpenAI. This, the company says, is to “avoid undue concentration of power”.

Some of this input will come from users, and some from academia and potentially affected groups and professions. Civil society groups are also likely to be involved, but it is also possible OpenAI will have the “bounds” enforced on it by government regulation, as the EU looks to expand its upcoming AI Act to cover conversational and generative AI.

Ravi Mayuram, CTO at Couchbase, told Tech Monitor it is crucial that AI bias is tackled before generative AI can become a viable enterprise tool. “Basing generative AI models on a biased dataset could hurt organisations and lead to poor decision making based on skewed or harmful predictions – and even legal ramifications. Think AI-infused hiring practices that are biased against female applicants.”

Customisation will help ChatGPT to eliminate bias for enterprise use

“The focus for enterprises is now on how to eliminate bias. This is inherently challenging to solve as fundamental human bias is baked into the question itself. While de-biasing individuals is a superhuman effort, de-biasing AI models is a more tractable one. We need to train data scientists better in curating the data and ensuring ethical practices are followed in collecting and cleansing the data.”

Being able to customise ChatGPT and similar tools, beyond just removing bias, stands to be one of the most significant developments in business since the invention of search engines, declared Claire Trachet, tech business adviser and CEO of Trachet.

“Potential use cases for this level of customisation range vastly, from enhancing customer experience, streamlining business processes, and improving data analysis – all while heavily reducing costs. Chatbots powered by ChatGPT can provide quick and efficient responses to common customer queries, but they may lack the personal touch that customers value.

“By allowing users to adjust the level of creativity of ChatGPT, the bot can be tailored to match the tone and style of the organisation. This can help customers feel more connected to the brand, improving their overall experience with the company.”

This could then be integrated into CRMs and ERPs to provide seamless communication and collaboration between different teams, or used in marketing and advertising in a way not possible unless the end-user has greater control over its output, she explained.

“For example, a company may want a more creative or ‘unhinged’ ChatGPT output for social media content, while using a more controlled output for advertising copy. By feeding ChatGPT with the exact criteria required, enterprises can create a consistent experience across the board.”

Read more: ChatGPT update will improve chatbot’s factual accuracy
