The European Commission (EC) has withdrawn three major regulatory proposals concerning technology patents, AI liability, and consumer privacy in messaging apps. The decision, outlined in the Commission’s 2025 work programme, follows significant opposition from industry groups and technology firms. According to Reuters, the EU executive determined that no agreement was likely among lawmakers and member states, leading to the withdrawal of the proposals.
One of the scrapped proposals aimed to regulate standard essential patents (SEPs), which cover technologies used in telecom equipment, mobile devices, computers, connected vehicles, and smart devices. Initially introduced to address lengthy legal disputes and licensing costs, the regulation faced strong resistance from patent holders.
The SEP regulation sparked a divide between companies that hold essential patents and firms that must license them to build products. Companies such as Nokia, Ericsson, and Qualcomm opposed the draft rules, arguing that they would disrupt existing licensing frameworks. On the other side, businesses including Apple, Google, and automakers supported the proposal, seeking a more structured approach to royalty payments.
Following the withdrawal, Nokia welcomed the decision. “It would have had an adverse impact on the global innovation ecosystem, in particular the incentives for European companies to invest billions of euros each year in R&D,” said the Finnish firm.
However, the Fair Standards Alliance (FSA), which represents companies such as BMW, Tesla, Alphabet’s Google, and Amazon, criticised the move. “The withdrawal sends a terrible signal to innovative businesses who rely on a predictable and fair SEP licensing system,” the FSA stated. The organisation warned that, without regulation, businesses could face uncertainty regarding licensing terms.
The FSA also expressed concerns that scrapping the SEP framework could weaken European firms’ access to 5G and other essential technologies, increasing reliance on external markets. The group had previously urged the Commission to establish a transparent and predictable system for licensing standardised technologies.
Another proposal abandoned by the Commission was the AI Liability Directive, put forward in 2022. The directive sought to allow consumers to claim compensation for harm caused by the failures or omissions of developers, providers, or users of AI systems.
The Commission has indicated that it may revisit AI liability regulations at a later stage, depending on how existing AI laws influence the market.
A third measure, which would have imposed stricter privacy obligations on messaging services such as Meta Platforms’ WhatsApp and Microsoft’s Skype, has also been withdrawn. The proposal, introduced in 2017 as the ePrivacy Regulation, aimed to bring messaging apps under the same legal framework as telecom providers with respect to data privacy and user tracking.
Disagreements among EU member states over data tracking provisions and child protection measures stalled the proposal. The Commission acknowledged that no agreement was expected and deemed the draft regulation “outdated in view of some recent legislation,” Reuters reported.
## US Vice President JD Vance calls for looser AI rules
With regulatory debates intensifying, the US has taken a contrasting approach to AI governance.
At the recent AI Action Summit in Paris, US Vice President JD Vance cautioned European nations against imposing stringent regulations on AI, warning that such measures could slow AI innovation. He emphasised the importance of fostering technological growth and maintaining a competitive edge in AI development.
This contrasts with the EU’s AI Act, which critics argue imposes stringent requirements that could hinder innovation across the bloc. Industry leaders, including Capgemini CEO Aiman Ezzat, have expressed concern that the EU’s regulatory framework may increase compliance costs and slow the adoption of AI technologies.
“In Europe, we went too far and too fast on AI regulation,” Ezzat told Reuters. “It’s complex for us because we have to look at regulation in every country where we operate, what we can do, what we cannot do, and what’s our responsibility as a developer.”
Regulatory differences were further underscored when the US and the UK declined to sign an AI governance pledge backed by several EU nations, citing concerns over excessive restrictions.