
IBM Watson turns Sherlock Holmes in beta cyber crime programme

40 organisations will use IBM Watson to fight cybercrime as part of a new beta programme.

Operating across fields including banking, healthcare, insurance and education, the organisations will use Watson for Cyber Security to pilot new use-cases related to their industries.

The cognitive technology will be used to bring context to the customers’ cyber security data. This will include intelligence about whether attacks are associated with known cyber crime campaigns and guidance on whether particular activity is malicious.

Organisations initially participating in the programme are Sun Life Financial, University of Rochester Medical Center, Avnet, SCANA Corporation, Sumitomo Mitsui Banking Corporation, California Polytechnic State University, University of New Brunswick and Smarttech. The total will rise to 40 in coming weeks.


“Customers are in the early stages of implementing cognitive security technologies,” said Sandy Bird, Chief Technology Officer, IBM Security.

“Our research suggests this adoption will increase threefold over the next three years, as tools like Watson for Cyber Security mature and become pervasive in security operations centres. Currently, only seven percent of security professionals claim to be using cognitive solutions.”

A recent survey by the IBM Institute for Business Value found that 60 percent of respondents believed cognitive technologies will mature quickly enough to slow down cyber criminals in the near future.

Seven percent said their organisations were currently implementing cognitive security solutions, while 21 percent said they would do so over the next two to three years.

Rebekah Brown, Threat Intelligence Lead at Rapid7, said:

“This will likely result in the identification of attack trends and patterns that would not be easily identifiable through individual intelligence analysis alone.

“We have to be careful, however, not to rely exclusively on automation and machine operations to combat a thinking, changing adversary. While machine-learning algorithms are effective at identifying and predicting attack patterns based on what has previously been observed, it is always possible that an attacker will take actions that are not predictable or that do not fit with previous behaviour patterns.

“Automated analysis tools should be viewed as just that, tools, not as a complete replacement for human analysis.”
This article is from the CBROnline archive: some formatting and images may not be present.