Salesforce’s annual Dreamforce conference got under way on Tuesday, with co-founder and CEO Marc Benioff outlining his company’s latest artificial intelligence-powered advances. Benioff appears to be trying to position Salesforce as a trustworthy provider of enterprise AI amid widespread worries among tech leaders that large language models (LLMs) and generative AI are too risky for their businesses to deploy.

Marc Benioff delivers his keynote at Dreamforce 2023. (Photo by Matthew Gooding/Tech Monitor)

Benioff delivered an opening keynote to delegates at the conference in San Francisco, where he and his team delved into the company’s new Einstein 1 AI platform, which will allow users to automatically collate, sort and analyse large amounts of data from different sources. Benioff was later joined on stage by OpenAI co-founder and CEO Sam Altman, who discussed his company’s work and the challenges facing AI developers.

Salesforce wants to build a trusted AI platform for its customers

Taking to the stage after singer/songwriter Dave Matthews had serenaded delegates with a rendition of “Something to Tell Me, Baby”, Benioff outlined his company’s position on how AI will be used by businesses. “We’ve recognised something very important – we’re in this AI revolution but it’s going to impact who we are and how we operate and it will bring us back to our core values,” he said.

But he said he is realistic about the quality of the output that the current batch of generative AI systems, such as OpenAI’s ChatGPT, can produce. “These systems are good, not great,” Benioff said. “They can give answers that aren’t exactly true. You can call them hallucinations, but I call them lies. These LLMs are very convincing liars.”

Concerns about how enterprise data will be used by AI developers to train their models are also holding back adoption, Benioff added. “So many CIOs say ‘I’m not ready to turn over my data to these LLMs to make their systems smarter’,” he said.

These concerns appear to be well founded. As reported by Tech Monitor, research released by chipmaker AMD last month found that more than half of the 5,000 IT decision-makers polled globally said their businesses were not ready to adopt AI systems because of concerns over data security. Samsung is one of many companies to have banned its employees from using chatbots like ChatGPT over fears that sensitive information could be used to train models.

Salesforce thinks it has the solution in Einstein 1’s trust layer, which it says stops data from being exposed or used to train AI models. Later in the keynote, the company’s CTO Parker Harris explained how this works in practice, enabling businesses to use commercial LLMs via the Salesforce platform without putting their information at risk. Through Einstein 1, information is masked before being sent to an LLM such as OpenAI’s GPT-4, which can then safely generate insights for the end user; the response is returned to the platform and unmasked. The LLM developers are “never going to keep your data, suck it up and train on it”, Harris said.
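In rough terms, the masking pattern Harris described works like the sketch below: sensitive values are swapped for opaque placeholder tokens before a prompt leaves the platform, and the saved mapping restores them when the model’s response comes back. This is an illustrative Python sketch only, not Salesforce’s implementation; the patterns, the token format and the call_llm() stand-in are assumptions made for the example.

import re
import uuid

# Hypothetical illustration of the mask -> prompt -> unmask pattern described
# above. This is not Salesforce's code: the patterns, the token format and the
# call_llm() stand-in are assumptions made purely for this sketch.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+\w"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with opaque tokens and remember the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for value in pattern.findall(text):
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def call_llm(prompt: str) -> str:
    """Stand-in for a request to an external LLM; only masked text is sent."""
    # A real integration would call a hosted model's API here; the model may
    # echo the placeholder tokens back in its answer.
    return f"Draft a reply to {prompt.split()[1]} about renewal pricing."

record = "Customer jane.doe@example.com asked about renewal pricing."
masked_record, mapping = mask(record)   # the provider never sees the address
answer = call_llm(masked_record)        # the model works on masked text only
print(unmask(answer, mapping))          # tokens swapped back on the way out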

Benioff and his team appear to be banking on their strong existing relationships with large businesses to convince customers they can be trusted when it comes to AI, too. “We’re not looking at your data, your data is not our product,” Benioff said. “We’re here to make you better, more productive and more successful.”

OpenAI’s Sam Altman at Dreamforce: AI hallucinations and more

Later at Dreamforce, Altman joined Benioff for a fireside chat. On the subject of hallucinations, the OpenAI CEO was, perhaps unsurprisingly, more relaxed about systems producing erroneous outputs.

Acknowledging that there are “a lot of technical challenges” involved in preventing hallucinations, Altman said: “One of the non-obvious things is a lot of the value from the systems relates to the fact they do hallucinate.”

He said that while you can “already look stuff up in a database, creativity is a lot of the power” that AI systems hold. Altman continued: “If you just do the naïve thing and ask a system to never say anything that it’s not 100% sure on, then you lose the magic.”

Altman said his team is beginning to see “intelligence emerge from very complex computer programming”, but warned: “We can predict with confidence GPT will get more powerful but we don’t know how or why a new capability might emerge. We’re going to be very unprepared for major new things that happen.” Because of this, he said oversight from policymakers would be required. AI legislation of various types is currently in development by governments around the world.

When asked which country was ready to lead the way on AI, Altman said: “I think the US will be the greatest leader,” adding: “We [the US] are blessed to have things in our favour, but this will be a truly global effort.”

Salesforce provided Tech Monitor’s travel and accommodation for Dreamforce

Read more: FTC launches probe into OpenAI