Generative AI is increasing the scale and effectiveness of cyberattacks against the private sector and threatens to overwhelm small and medium-sized businesses (SMEs), a group of cybersecurity experts told a US Congressional committee yesterday. Testifying at a hearing into the role of the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Homeland Security in securing artificial intelligence, figures from IBM, Hitachi, Protect AI and SentinelOne said that the plethora of threats facing businesses has grown thanks, in part, to the rapid popularisation of AI applications – not only among companies in the private sector, but also among cybercriminal organisations.
SMEs are “not doing so hot” when it comes to protecting themselves from these types of attacks, said SentinelOne’s chief trust officer, Alex Stamos. “We’re kind of losing” the battle, he told lawmakers.
In wide-ranging testimony, Stamos argued that smaller companies were struggling to defend themselves against hackers from gangs such as BlackCat and LockBit, claiming that these groups now boast specialised capabilities he had previously seen used only by Russian intelligence agencies. Future attacks, he warned, could involve AI-enabled malware that is dropped into an unfamiliar system and intuitively identifies and exploits vulnerabilities in critical national infrastructure. “My real fear,” said Stamos, is that “it will be able to intelligently figure out, ‘Oh, this bug here, this bug here,’ and take down the power grid – even if you have an air gap.”
Stamos also decried the incident reporting requirements recently imposed on US companies by the Securities and Exchange Commission (SEC), arguing that they complicate effective cyber-defence by forcing hacked firms to disclose breaches within days of discovering them (“Usually at 48 hours you’re still in a knife fight with these guys,” said Stamos). Indeed, the reporting process has already been weaponised: last month the cybercriminal gang BlackCat announced that it had reported one of its own victims to the SEC for failing to disclose the breach in good time.
New standards needed to thwart cyberattacks on AI
Protect AI’s chief executive, Ian Swanson, also warned committee members that systemic security issues in AI and machine learning (ML) services need to be addressed through collective action by the companies leading in that space. “Manufacturers and consumers of AI systems must put in place systems to provide the visibility they need to see threats deep inside their ML systems and AI applications quickly and easily,” said Swanson.
The Protect AI founder also warned that a “SolarWinds moment” may be imminent for ML applications, referring to the massive software supply chain attack uncovered in 2020 that stemmed from a single breach of the eponymous software developer. As such, Swanson recommended the creation of a new standard for a “machine learning bill of materials” to help spot the security flaws unique to ML products and services, in addition to increased federal investment in standardised security protocols and best practices for open-source AI/ML software.
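By analogy with a conventional software bill of materials, such a manifest would enumerate everything that goes into a model – its framework, training data and dependencies – so that downstream users can audit what they are running. The sketch below is purely illustrative: the field names are hypothetical assumptions rather than any published standard, rendered as a short Python script that emits the manifest as JSON.

import json

# Hypothetical sketch of a machine learning bill of materials (ML-BOM).
# Field names here are illustrative assumptions, not a published standard.
ml_bom = {
    "model": {
        "name": "fraud-classifier",      # hypothetical model name
        "version": "2.1.0",
        "framework": "scikit-learn==1.4.2",
    },
    "training_data": [
        # Hashing each dataset lets consumers verify provenance.
        {"source": "internal-transactions-2023", "sha256": "<dataset hash>"},
    ],
    "dependencies": [
        # Third-party packages pulled in at training time.
        {"package": "numpy", "version": "1.26.4"},
    ],
    "provenance": {
        "built_by": "ml-platform-ci",    # hypothetical build pipeline
        "built_at": "2024-01-15T09:30:00Z",
    },
}

# Serialise the manifest so it can ship alongside the model artefact.
print(json.dumps(ml_bom, indent=2))

The point of such a manifest, on this sketch, is the same as an SBOM’s: when a poisoned dataset or vulnerable package comes to light, affected models can be identified by searching manifests rather than by forensic reconstruction.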
IBM Consulting’s vice-president for global cybersecurity, Debbie Taylor Moore, also urged politicians and CIOs to focus on shoring up cybersecurity education, as well as the resilience of businesses after they have been targeted by hackers.
“Bad things are going to happen,” said Moore, adding that what matters is how a firm rebuilds after the devastation wrought by a data breach. “When you look at the solutions that are in the marketplace in general, the majority of them are on the front end of that loop. The back end is where we really need to look toward how we prepare for the onslaught of how creatively attackers might use AI.”