US-based AI research organisation OpenAI has released a new report claiming that ‘malicious actors’ are increasingly using its tools to spread misinformation and influence elections. The ChatGPT maker said that this year alone it has already disrupted more than 20 networks that attempted to use its AI models for deceptive campaigns on social media and other internet platforms.
The report, titled ‘Influence and cyber operations: an update’, added that OpenAI had neutralised actors attempting to use AI to generate content about elections in the US, Rwanda, India and the European Union. However, its authors said the threat actors’ activities failed to draw viral engagement or build sustained audiences.
OpenAI report explains the evolution of threat actor use of AI
According to the report, the AI tools were used to facilitate several types of activity, including debugging malware, writing articles for websites, and generating content that was posted by fake social media accounts. The sophistication of these uses varied, from simple content-generation requests to more involved efforts to assess and respond to social media posts.
The report noted that the threat actors primarily used the models to carry out tasks in a specific, intermediate phase of activity: after they had acquired social media accounts but before deploying ‘finished’ products across various channels.
OpenAI said that investigating threat actors’ behaviour during this intermediate period can provide key insights that upstream internet service providers and downstream distribution platforms are not positioned to see.
The report also noted that AI companies themselves are vulnerable targets of hostile activity, citing an example in which a threat actor attempted to spear-phish OpenAI employees’ personal and corporate email addresses. OpenAI added that, since releasing its threat report last May, it has been working to develop new AI-powered tools to detect and dissect potentially harmful activity.
“As we look to the future, we will continue to work across our intelligence, investigations, security research, and policy teams to anticipate how malicious actors may use advanced models for dangerous ends and to plan enforcement steps appropriately,” the report said. “We will continue to share our findings with our internal safety and security teams, communicate lessons to key stakeholders, and partner with our industry peers and the broader research community to stay ahead of risks and strengthen our collective safety and security,” it added.