Six years ago, memes comparing Xi Jinping to Winnie the Pooh spread like wildfire across China’s internet before being snuffed out by the country’s censors. Creating and disseminating more sophisticated digital imagery of the honey-loving bear could now earn you a prison term in the country, as a new deepfakes law called the ‘Provisions on the Administration of Deep Synthesis of Internet Information Services’ comes into effect this week. As nations around the world mull over regulations to target one of the most disruptive media technologies in recent years, Beijing is preparing to wage a new war on any online content it considers to be a threat to its stability and legitimacy in the eyes of the Chinese people. 

China is not the only nation to consider new regulations on deepfakes. Both the UK and Taiwanese governments have announced their intention to ban the creation and sharing of deepfake pornographic videos without consent, and similar legislation has been proposed in the US at the federal level (several states have already passed such laws). The latest regulations in China, however, extend to any deepfake content, imposing new rules on its creation, dissemination and labelling – in effect, going much further in scope and detail than most other existing national legislation concerning synthetic audio and video.

In 2019, the ZAO app, which allowed users to create deepfakes of public figures in China, caused an uproar over alleged privacy violations. Four years later, Beijing has imposed one of the world’s strictest sets of regulations on deepfake creators. (Image by Ascannio / Shutterstock)

China first

Part of the reason why China has decided to press ahead with such wide-ranging regulations on deepfakes is down to its desire to set the agenda for regulating the next generation of disruptive technologies. The CCP has always been aggressive about content regulation, explains Rui Ma, a Chinese technology analyst and co-host of the Tech Buzz China podcast. “China has made it clear it wants to be a regulatory leader in emerging tech,” says Ma. “It realizes that if large markets make the rules first, those rules tend to stick and become a reference point for other countries – assuming it is well-researched and reasonable, of course.”

She adds that China is well aware of how norms surrounding emerging technologies are established through regulation, as “many of its own laws are based upon precedent in the United States and the European Union”. One of the more prominent examples is China’s Personal Information Protection Law, introduced in November 2021, which largely mirrors the EU’s landmark General Data Protection Regulation. With its new regulations on deepfakes, China is taking a further step towards establishing itself as a reference point, rather than following the lead of other jurisdictions.

But the scope of the incoming deepfakes law extends far beyond what most people picture when they think of the technology – typically, artificially created videos of public figures overlaid with someone else’s audio, an eventuality the Chinese state had to contend with in 2019 when ZAO, a popular deepfaking app, was shut down three days after release over privacy violations. According to the Cyberspace Administration of China, the government views deepfakes as a wide-ranging medium for all sorts of crimes and mischief, from spreading ‘illegal and bad information’ to ‘harming the legitimate rights and interests of the people’ and ‘endangering national security and social stability’.

But for Henry Ajder, a prominent adviser on generative AI and synthetic media, the wide-ranging scope of the deepfakes law makes sense from the perspective of the Chinese state. “Having a short-term horizon of looking at what is possible now will mean that these laws are going to be rapidly outdated,” says Ajder. “Given how long it takes for these laws to get passed, it makes sense to try and future-proof it by covering as many different kinds of synthetic [media] as possible, particularly [given] how 2022 saw a rapid change in the accessibility of these tools.”

Some aspects of China’s deepfakes law also mirror the areas being discussed in international regulation around synthetic content. These include building secure data pipelines to protect user privacy, fostering algorithmic transparency to understand security vulnerabilities and how bias creeps in, and clearly labelling deepfake content – all of which have been included in the EU’s AI Act and Digital Services Act, as well as in various state and federal legislation in the US, explains Ajder.

“Having a responsibility, either as an end user or as a platform, to label fake content is probably going to be something we’re going to have to start relying on,” he says. After all, Ajder explains, “it’s only a matter of time before more sophisticated technology comes in the form of gamified and low to no-code kinds of applications that anyone can use.”

State-sponsored reality

How exactly the CCP intends to enforce these rules at scale remains unclear. In July last year, Professor Luciano Floridi of Oxford University identified several obstacles to monitoring and clamping down on deepfakes in China. One is that ensuring labels are created and preserved will be extremely difficult, since ‘watermarks can be removed by re-encoding or using another AI system, and metadata or accompanying documentation can be altered or omitted’. The other sticking point is making sure such content stays off the internet once it has been flagged as problematic.
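To illustrate how fragile the simplest form of labelling is, the sketch below (not drawn from the regulations or from Floridi’s paper, and assuming the Pillow imaging library with hypothetical file names) shows a metadata-based ‘synthetic content’ tag disappearing the moment an image’s pixels are re-encoded into a new file:

```python
# Minimal, self-contained sketch: a metadata label marking an image as
# AI-generated is lost as soon as the pixels are re-encoded into a fresh
# file. Requires the Pillow library (pip install Pillow); file names are
# hypothetical.
from PIL import Image

IMAGE_DESCRIPTION = 0x010E  # standard EXIF/TIFF tag for a free-text description

# 1. Stand-in for a deepfake frame, labelled via EXIF metadata.
frame = Image.new("RGB", (640, 360), color="grey")
exif = Image.Exif()
exif[IMAGE_DESCRIPTION] = "synthetic content - AI-generated"
frame.save("labelled.jpg", exif=exif)

# 2. "Re-encode": copy only the pixel data into a new image and save it.
pixels_only = Image.new("RGB", frame.size)
pixels_only.putdata(list(frame.getdata()))
pixels_only.save("re_encoded.jpg")  # no EXIF block is carried across

# 3. The label survives in the first file but not in the second.
print(Image.open("labelled.jpg").getexif().get(IMAGE_DESCRIPTION))    # label text
print(Image.open("re_encoded.jpg").getexif().get(IMAGE_DESCRIPTION))  # None
```

More robust provenance schemes embed watermarks in the pixels themselves, but, as Floridi notes, these too can be degraded by re-encoding or stripped by another AI system.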

Going after such content in a country of 1.4 billion people will prove difficult even for the most sophisticated of censorship regimes, according to Ajder. In 2022, more than 980 million Chinese people were classified as active social media users – more than the total number of users in India and Indonesia combined, according to Statista. “In terms of the tech that does exist to detect and label deepfakes it is, at the moment at least, fairly unreliable,” says Ajder. “Even if it were to get to a stage of over 98% accuracy, which sounds pretty good, 2% of online content getting false positives or negatives is a huge amount of content that basically slips under the radar or gives people false confidence that something is real or false.”
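A rough back-of-envelope calculation shows why Ajder’s 2% matters at this scale. The daily screening volume below is purely an illustrative assumption, not a figure from the article or from Statista:

```python
# Illustrative arithmetic only: the volume figure is an assumption chosen
# for the sake of example, not a reported statistic.
daily_items_screened = 500_000_000   # assumed posts/videos checked per day
detector_accuracy = 0.98             # the "over 98%" accuracy quoted above

misclassified_per_day = daily_items_screened * (1 - detector_accuracy)
print(f"~{misclassified_per_day:,.0f} items mislabelled every day")
# -> ~10,000,000 items either slipping under the radar or wrongly flagged
```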

Technical issues of surveillance aside, the most concerning aspect of China’s deepfakes law is the decision to make the CCP the arbiter of what constitutes real or fake online content, says Ajder. China is, of course, not the only country to have passed laws giving government agencies the final say on social media content deemed harmful or illegal. In Singapore, for example, the Protection from Online Falsehoods and Manipulation Act gives state ministers the power to decide whether a piece of online content is ‘false’, drawing criticism that the law has unfairly targeted opposition parties and activists. The South Korean government was also reportedly planning a similar law until policymakers backed down after an international outcry.

Ajder argues that a more troubling dynamic, known as the ‘liar’s dividend’, has emerged around deepfakes in recent years, wherein states might try to use the proliferation of fake content as a “cloak of plausible deniability to escape responsibility for real content” that might threaten a government. “I could very easily see laws of this kind being used to get protester videos taken down or incriminating footage of politicians taken down that is real under the guise that it has been synthetically generated,” he says.

By the same token, fake videos that might serve the CCP’s interests can be classified as ‘real’ and ordered not to be taken down, Ajder explains: “As soon as you have the technology which can basically simulate reality, and you have the laws in place that essentially puts the government front and centre in arbitrating what is real and what is fake content, that could potentially be a perfect storm for making any kind of online content subject to removal or publishers being prosecuted – even if the content is real or, indeed, fake.”
