With so much in the news about breaches and hacking, it is easy to overlook the risks involved in testing endpoint security products. In recent years, testing methods in the endpoint security market have evolved dramatically. That’s because the techniques adversaries and defenders use to design and detect advanced malware are becoming increasingly diverse and sophisticated.
Defenders have to protect endpoints that run multiple operating systems, each available in different releases. They have to defend endpoints on premises as well as in the cloud. And they have to take full account of the unique ways in which many endpoint security products interact with the cloud-based elements of their solution architecture.
The good news is that the Anti-Malware Testing Standards Organization (AMTSO) adopted its first testing protocol enabling transparency in advanced malware testing methods at the end of May this year.
But what does AMTSO mean for IT security professionals? From my perspective, it is certainly a step forward in bringing additional clarity to the endpoint security market. But buyers will still need to keep an eye on independent testing frameworks, peer-group recommendations, their own internal requirements and their endpoint security partnering choices, rather than simply relying on AMTSO.
To help, I’ve put together a few simple guidelines buyers can follow to ensure the integrity of a security product test.
Be a Skeptic
Be wary of testing advice from vendors.
That’s right. We’re a vendor, and we recommend you proceed with caution when taking testing advice from any vendor. Always ask why they are making a particular recommendation. Most importantly, determine if the recommendation maps accurately to your organisation’s specific requirements.
Malware Samples Provided by a Vendor Aren’t Always Legitimate
This goes back to the point above about being a skeptic. Samples that come from a vendor are sometimes manipulated to favour their product over others.
In the worst-case scenarios (which I’ve seen) the samples are non-functional or not even malicious. If you get samples from a vendor, ask for a smaller set and press the vendor to explain what makes each sample malicious. (If you aren’t sure how to verify the samples, see my “visibility” recommendations below.)
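One practical way to keep a vendor honest is to fingerprint every sample you receive and cross-reference the hashes against public repositories and analysis reports yourself. The sketch below is a minimal, hypothetical helper (the function names are my own, not from any product) that computes SHA-256 digests for a directory of samples so you can look them up independently:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks
    so large samples don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_sample_set(directory):
    """Return {filename: sha256} for every file in a sample directory."""
    return {p.name: sha256_of(p)
            for p in Path(directory).iterdir() if p.is_file()}
```

A hash that appears nowhere in public malware repositories or sandbox reports is not proof of manipulation, but it is a prompt to ask the vendor harder questions about where the sample came from.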
If you don’t want to use malware samples from vendors, use public sandboxes and analysis blogs, such as hybrid-analysis.com, virusshare.com, malshare.com and malware-traffic-analysis.net.
These sites have repositories of authentic malware and can provide a deep technical analysis so you know how the malware should function. Focus on quality samples that demonstrate attacks you’re interested in. And remember quantity is not a good indicator of sample set quality or relevance.
Go Beyond Samples and Test How the Product Handles Real World Attacks
Malware samples alone demonstrate one thing: how well the product can stop the particular samples in your sample set. You’re interested in stopping attacks, not just malware. Real-world attackers don’t rely on packed executables. They use documents, PowerShell, Python, Java, built-in OS tools: anything they can leverage to get the job done.
To test the solution against real-world attack techniques, use a penetration testing framework such as Metasploit. Construct payloads with Veil-Evasion and use the techniques seen in real attacks. PowerShell Empire is also a great way to build PowerShell command lines and macro-enabled documents that go beyond executable malware samples. Also, turn prevention off and watch what the samples do. If you can’t see what the samples do when prevention is turned off, what will you do when a sample gets through in the real world?
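When you do detonate samples with prevention off, do it in an isolated lab machine and record what actually changed. A full sandbox (such as the public ones mentioned above) gives far richer telemetry, but even a crude before/after snapshot of the file system tells you whether a sample did anything at all. This is a minimal sketch under that assumption, with function names of my own invention:

```python
import hashlib
from pathlib import Path

def snapshot(root):
    """Map every file under root to the SHA-256 of its contents."""
    state = {}
    for p in Path(root).rglob("*"):
        if p.is_file():
            state[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return state

def diff(before, after):
    """Report files created, deleted, or modified between two snapshots."""
    created = sorted(set(after) - set(before))
    deleted = sorted(set(before) - set(after))
    modified = sorted(p for p in before.keys() & after.keys()
                      if before[p] != after[p])
    return {"created": created, "deleted": deleted, "modified": modified}
```

Take one snapshot before running the sample and one after; an empty diff from a supposedly destructive sample is a strong hint the sample is non-functional, which loops back to the point about vendor-supplied samples.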
Other criteria to evaluate include seeing how well the product fits in with your existing people, processes and technologies and the product’s ability to reduce attacker dwell time in your environment.
Remember the Most Effective Security Product is the One Your Team Actually Uses
Product A might score 98% while Product B scores 95%. The obvious choice is Product A, right? Not necessarily. A 3% delta suggests there is a difference between the products, but the difference is so small that it could be reversed in the very next test.
Don’t make this difference the deciding factor. You want to deploy a product that’s usable by your team and fits into your existing security stack. Even if that’s Product B in this scenario, you’ve made the better decision.
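You can sanity-check this yourself with a back-of-envelope confidence interval. Assuming a hypothetical test of 200 samples per product (the sample size here is my assumption, not from any published test), a simple normal-approximation interval shows how wide the uncertainty around each score really is:

```python
import math

def detection_rate_ci(detected, total, z=1.96):
    """95% confidence interval (normal approximation) for a detection rate."""
    p = detected / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return (p - margin, p + margin)

# Hypothetical sample set of 200 per product.
a_low, a_high = detection_rate_ci(196, 200)  # Product A: 98% observed
b_low, b_high = detection_rate_ci(190, 200)  # Product B: 95% observed
```

At this sample size the two intervals overlap (Product A’s lower bound sits below Product B’s upper bound), so the ranking could plausibly flip on a re-test. That is exactly why a small headline delta shouldn’t be the deciding factor.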
Don’t be ashamed to pick the product that makes the most sense for your team or fills a gap in your tech stack. The independent testing authorities try to test efficacy; only you can test applicability. If the product becomes shelfware, you are wasting money and doing nothing to make your organisation safer.
The Importance of Visibility, Detection and Response
You need visibility, detection and response to reduce attacker dwell time in your environment, and I can’t emphasise this point enough.
Endpoint security products should prevent, detect, and help you respond to breach scenarios. They should be tested that way as well. There’s much more to stopping today’s attacks and empowering your team than “blocking X-thousand malware samples.”
Every prevention approach is liable to fail at some point. When it does, how will you know? Your security solution should give you information you can act on, beyond a simple malware block. When testing the solution, think about how it can be used in actual defensive scenarios when the attacker has succeeded. Does it make your life easier as a responder? Does it provide you with the visibility you need to determine the scope and impact? Does it offer insight into the tools, techniques and procedures your real-world adversaries are using?
To see the product’s visibility, detection, and response features, don’t just rely on finding a way around the prevention – turn the prevention off. If your team doesn’t find value when the prevention is off, take my word for it, it isn’t a good product.