
OpenAI has removed accounts linked to China and North Korea over concerns that its AI technology was being misused for surveillance, misinformation, and fraudulent operations. The ChatGPT developer disclosed the removals in a recent report but did not specify how many accounts were affected or the period over which the action took place.
According to OpenAI, some accounts generated AI-written news articles in Spanish that were critical of the US. These articles were later published by mainstream news outlets in Latin America under the byline of a Chinese company. In a separate case, individuals suspected of having ties to North Korea allegedly used AI to generate fake resumes and online profiles to fraudulently apply for jobs at Western firms.
Another group, linked to a financial fraud network operating in Cambodia, reportedly used OpenAI’s technology to create and translate content for automated comments on social media and communication platforms, including X and Facebook.
Alongside these account removals, OpenAI recently changed how ChatGPT handles content moderation. Earlier this month, the company removed the system-generated warning messages that previously flagged content as potentially violating its terms of service. Laurentia Romaniuk, a member of OpenAI’s AI model behaviour team, stated in a post on X that the change was intended to reduce unnecessary denials of user queries. Nick Turley, OpenAI’s head of product for ChatGPT, added that users would now have more flexibility in using the chatbot, provided they adhere to legal and safety guidelines.
US concerns over AI and foreign influence
The US government has previously raised concerns about the role of AI in geopolitical influence operations. Officials have accused China of using AI to suppress domestic dissent and spread misinformation abroad while warning that AI-driven cyber activities could pose security risks to the US and its allies.
The crackdown on AI misuse comes amid broader efforts by the US government to address the role of AI in cyber and influence operations. Last month, the Treasury Department’s Office of Foreign Assets Control (OFAC) imposed sanctions on entities in Iran and Russia for deploying AI-driven cyber tools in an attempt to interfere in the 2024 US presidential election. The sanctions targeted the Cognitive Design Production Center (CDPC), which is linked to Iran’s Islamic Revolutionary Guard Corps (IRGC), and Russia’s Centre for Geopolitical Expertise (CGE), an organisation with ties to Russian military intelligence.
In a separate policy shift, US President Donald Trump has repealed a 2023 executive order on AI issued by his predecessor, Joe Biden. The order had required AI developers to submit safety test results for high-risk systems before deployment and had tasked federal agencies with setting security standards related to AI’s role in cybersecurity, as well as chemical, biological, radiological, and nuclear threats. The National Institute of Standards and Technology (NIST) had been assigned to develop guidelines for identifying and addressing AI risks, including algorithmic bias.
Despite regulatory and security concerns, OpenAI’s ChatGPT remains one of the most widely used AI chatbots, with more than 400 million weekly active users. The company is reportedly in discussions to raise up to $40bn in funding, which could bring its valuation to approximately $300bn. In January 2025, OpenAI also launched a specialised version of its ChatGPT platform, dubbed ChatGPT Gov, tailored for US government agencies. The platform enables federal, state, and local entities to use OpenAI’s advanced AI models within a controlled environment hosted on Microsoft Azure’s commercial and government cloud infrastructure.