The Spanish government has approved a new legislative measure aimed at regulating AI by imposing significant penalties on companies that fail to adequately label AI-generated content, Reuters reported. The legislation forms part of wider efforts to combat the misuse of deepfakes, manipulated media that can mislead viewers.

“AI is a very powerful tool that can be used to improve our lives … or to spread misinformation and attack democracy,” said Spanish Minister for Digital Transformation and Civil Service Oscar Lopez, while announcing the bill.

Spain is one of the first EU member states to implement such regulations, which are considered more rigorous than the US framework, where compliance is largely voluntary and varies by state. Lopez noted the widespread vulnerability to deepfake attacks, in which altered videos, images, or audio are presented as genuine.

The proposed bill, which still requires approval from the lower house of parliament, classifies the failure to properly label AI-generated content as a “serious offence.” Non-compliant companies could face fines of up to €35m (about $38.2m) or 7% of their global annual turnover. The legislation also prohibits the use of subliminal techniques that could manipulate susceptible individuals; Lopez cited examples such as chatbots that may encourage gambling addiction or toys that prompt children to engage in risky behaviours.

Additionally, the bill restricts organisations from using AI to classify individuals based on biometric data, behaviour, or personal characteristics to determine access to benefits or assess the likelihood of criminal activity. However, the legislation allows for the use of real-time biometric surveillance in public spaces for national security purposes.

Spain’s newly established Artificial Intelligence Supervisory Agency (AESIA) will oversee the implementation of the new regulations. However, specific areas such as data privacy, crime, elections, credit ratings, insurance, and capital markets will be managed by the relevant regulatory bodies. This move aligns with the EU’s broader efforts to standardise AI regulations.

EU AI Act set to enhance safety standards and foster innovation

In August 2024, the European Commission (EC) officially brought the EU AI Act into force. The Act aims to ensure that AI systems developed and used within the EU comply with safety and ethical standards designed to protect fundamental rights. The regulation seeks to create a unified internal market for AI across the EU, promoting the adoption of AI technologies while fostering an environment conducive to innovation and investment.

As part of its digital strategy, the EU aims to regulate AI to establish better conditions for the development and application of the technology. Enforcement of the AI Act will be overseen by the Commission’s AI Office at the EU level, with member states required to designate national authorities by 2 August 2025 to ensure compliance and conduct market surveillance.

Read more: High-risk AI systems face ban in EU as AI Act enforcement takes effect