Salesforce’s government affairs chief has commended the EU for its artificial intelligence legislation, the EU AI Act, as he set out the company’s approach to how powerful AI tools should be regulated.
Eric Loeb said it was a “tall order” to expect a single global set of regulations governing AI to emerge, but that the way the EU’s legislation was being drawn up meant it would be able to evolve as the technology develops.
Regulation of artificial intelligence has become a hot topic among policymakers in recent months after the popularity of ChatGPT among businesses and consumers sparked an AI revolution, with tech’s biggest names rushing to integrate the technology. The EU AI Act was approved by the European Parliament earlier this year and is now subject to negotiations with member states before it comes into effect.
The EU AI Act is a ‘commendable’ step towards AI regulation
Some European businesses, including Siemens and Airbus, have criticised the act for being too prescriptive. But speaking at Salesforce’s Dreamforce conference in San Francisco on Wednesday, Loeb, who is the company’s executive vice president for government affairs, backed the EU’s approach.
“The EU AI Act is going to be ongoing discussions and policy development with what is rapidly evolving, and that’s a good thing,” he said. “I commend the leadership of the EU on this, a risk-based framework which differentiates the high risks from the low risks.”
Indeed, Loeb believes emerging AI regulation should take a differentiated approach to the various systems on the market. “Where we’re focused is to ensure [the] approach has nuance, it’s not one-size-fits-all,” he said when asked how his company approaches conversations with politicians. “You need to think about different contexts in the whole AI ecosystem and differentiate things that are high risk from those that are low risk. You have different frameworks for different needs, that’s the dialogue we’re engaged in.”
This week Salesforce joined other tech companies, including IBM and Nvidia, in signing the White House AI safety pledge, a voluntary scheme under which some of the biggest US tech companies have agreed to do what they can to mitigate the risks of AI.
Loeb believes such frameworks can be a precursor to formal legislation. “I expect we’ll see more of embracing of voluntary commitments with the intention that these will ‘glide path’ to regulation,” he said. “It’s a tall order to think of a global singular approach, but I think concentric circles of regulation will emerge.”
Salesforce provided Tech Monitor’s travel and accommodation for Dreamforce.