
China’s new generative AI rules are ‘about state control’ not user safety

The rules say providers of generative AI tools must ensure output is compatible with socialist values.

By Ryan Morrison

China has published a set of draft regulations to govern the use of generative artificial intelligence technologies such as ChatGPT and image generators like Midjourney. The rules place greater responsibility for accuracy on the developer of the AI model than similar rules proposed in the EU, the US or the UK. One expert told Tech Monitor the rules are more about ensuring state control than protecting users.

China’s generative AI guidelines place a greater burden on the provider of the technology. (Photo by Koshiro K/Shutterstock)

Published by the Cyberspace Administration of China (CAC), the draft measures set out ground rules for the use of generative AI and the content such tools can and cannot generate. This includes ensuring any output aligns with the “core values of socialism” and does not subvert state power.

Chinese companies responded quickly following the surprise success of OpenAI’s large language model-based natural language tool ChatGPT, which was released in November 2022. Alibaba, Tencent, Baidu and others have all announced plans to open access to their own large language models and incorporate chat technology into their apps.

The Chinese government has also shown an interest in generative AI, declaring a need for it to be at the heart of the country’s economy. Officials from the Science and Technology Ministry said the ministry attaches “great importance” to the development of AI, and that the technology “has wide application potential in many industries”.

Western-built AI tools like ChatGPT are banned in China, leading to a flurry of home-grown alternatives, but the new rules are designed to ensure what comes out of those tools reflects the views and position of the Communist Party.

China’s algorithmic transparency rules

This isn’t the first time the CAC has published guidelines for the use of AI or algorithms. The regulator has previously required social media companies to publish details of their algorithms, including how they decide which videos to show or which products to recommend.

These new rules place the burden on the developer or provider of the AI model rather than the end user. They include ensuring that any data used to train the model does not discriminate on the basis of ethnicity, race or gender, and that the model does not produce false information.


Any new generative AI product will also need to go through a security assessment and publish the same algorithmic transparency information required of social media services. The rules make no distinction between the safety and security requirements for direct-to-consumer and direct-to-enterprise tools.

Moderation rules in the guidelines place a requirement on providers to ensure content is consistent with “social order and societal morals”, doesn’t endanger national security, avoids discrimination, is accurate and “respects intellectual property”.

The assessment provisions govern internet services including public forums, streaming and search. Service providers must self-assess or engage a third-party agency to verify the real identities of users, and to confirm how personal information is protected and how content is reviewed internally.

Data submitted to the system by end users, as well as activity logs, has to be protected, and providers are not allowed to use that data for user profiling or to share it with third parties. The CAC says end users can report a provider if the content being generated doesn’t comply with the draft measures.

This opens that provider up to a series of potential penalties under the Personal Information Protection Law, the Cybersecurity Law and the Data Security Law, which could include fines, suspension of the service and criminal investigations into executives. If the tools do generate content that goes against the guidelines, companies are given three months to update the model, retrain it and ensure the violation doesn’t happen again. Failing to do so could see their services removed and large, but unspecified, fines imposed.

China AI rules: financial penalties can be incurred

Louisa Chambers, partner at law firm Travers Smith, told Tech Monitor the new regulations share some foundations with those elsewhere in the world, in that they respond to concerns about the increasing proliferation and sophistication of AI. “For example, we are all concerned that, if not used with safeguards and checks, AI can entrench and legitimise bias and discrimination – and all the draft legislation that we are starting to see published worldwide seeks to address this,” she says.

Chambers says the other similarity is the need for transparency, as all governments want some degree of openness from businesses over how they use and train AI, but the approach in China is different to that of the UK and the EU.

“The EU draft AI Act and the UK’s recent white paper both show a desire to use AI to support innovation whilst at the same time protecting individuals from unfair or unduly invasive AI processes. By comparison, the focus set out in the recent draft measures in China is to ensure that generative AI does not create content which is inaccurate, or which is inconsistent with social order,” Chambers adds.

However, Lilian Edwards, professor of law, innovation and society at Newcastle University, believes the policy is about control. She says China is interested in reining in its private tech industry over “well-founded fears” it could outstrip the capacity of the state to control and monitor citizens.

“This legislation echoes previous laws such as the one on recommender algorithms in naming vague social goals that providers must comply with on pain of penalties,” Edwards says. “These goals are clearly not fully operationable; but there have already been enforcement actions under the recommendation algorithms laws so they are not to be disregarded either.”

The West and China have different approaches: where China wants to shackle its tech industry, “the West is largely scared” of doing the same, Edwards argues. “At least the EU is protecting the fundamental rights of citizens,” she says. “In China, arguably neither of these motivations apply and the main aim is to protect state power.”

Read more: China’s generative AI revolution is only just beginning
