Is the End the Beginning: Where Are We in The Endpoint Journey?

Dec 29, 2015
8 minutes

Over the last 18 months, there has been much discussion of the “new endpoint”; and, whilst no one wants to be the first to move, I suspect 2016 will be the year many start to make some level of endpoint change. The question is: which of the many new concepts will become the new endpoint standard, and will it really be new or just a twist on existing techniques and concepts?

Today’s threats are complex, often composed of multiple facets, and easily tuned so each instance looks unique. It’s not surprising, then, that people talk about the death of antivirus. Yet the reality is that most antivirus relies on a multitude of techniques to discover attacks, in addition to the founding method.

In 1991 I started working for Dr. Solomon’s antivirus, which aimed to detect and block attacks based on the concept of getting a sample and writing a pattern-matching rule to block further instances. I remember the founder saying that you can create as many variants as you like but, fundamentally, we have solved the problem; he subsequently sold the company, seeing no further future in it.

As I walked around RSA 2015, there was a plethora of new endpoint solutions introducing ever-more-creative techniques, be they various iterations of sandboxing, mathematical analysis, statistical anomaly detection, or new forms of behavioral detection. The question today is: which techniques are effective, and is there a clear winner, or is it a blend of the right concepts?

Here are my top tips to consider as you assess what is the right solution for you:

1. Behavior vs. pattern – Traditional pattern approaches have high accuracy ratios but, with increasingly unique attacks, can be too slow to keep up. Many new solutions look for behaviors at some level: it could be the exploit techniques used, the changes the attack makes to the system, or the communications techniques used. The challenge is finding behaviors that are common to the attack and rarely, if ever, seen in other circumstances. What is the acceptable ratio for your business between detections and false alerts? How much data will the solution generate, and how long would it take you to analyze it?
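
As a minimal illustration of the difference, the sketch below (all names, hashes, and thresholds are hypothetical, not taken from any product) contrasts a classic pattern match, which only catches samples already seen, with a behavioral check whose threshold embodies the detection-versus-false-alert trade-off:

import hashlib

# Hypothetical signature database: a placeholder hash, illustrative only.
KNOWN_BAD_HASHES = {"0123456789abcdef0123456789abcdef"}

# Hypothetical behaviors an endpoint sensor might report.
SUSPICIOUS_BEHAVIORS = {
    "writes_to_startup_folder",
    "disables_security_tooling",
    "injects_into_other_process",
}

def pattern_match(sample_bytes):
    """Classic signature approach: exact match against previously seen samples."""
    return hashlib.md5(sample_bytes).hexdigest() in KNOWN_BAD_HASHES

def behavioral_match(observed_behaviors, threshold=2):
    """Behavioral approach: flag when enough suspicious actions are observed.
    The threshold is the detection-versus-false-alert trade-off described above."""
    return len(set(observed_behaviors) & SUSPICIOUS_BEHAVIORS) >= threshold

print(pattern_match(b"a never-seen-before variant"))     # False: no signature exists yet
print(behavioral_match({"writes_to_startup_folder",
                        "injects_into_other_process"}))  # True: the behavior still matches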

2. Where & how is the analysis done? – With behavioral analysis, there is a balance between real-time and offline processing. Simple behavioral matches take limited resources; but complex statistical analysis (e.g., looking for behavioral anomalies) takes more computational resources, so it will likely have more impact on the system or be throttled, which may mean it is only near real-time. Sandboxing, typically, is also near real-time, but this will depend on the complexity of the environment it needs to emulate and the resources available to dynamically instantiate the virtual session, gather the indicators of compromise, and convert them into a blocking control. You will need to decide whether you want this load on the client or outsourced to a central, dedicated system, be that in the cloud or on-premises. One advantage of a dedicated sandboxing system is that, unlike with normal endpoints, you don’t have the same concerns about results being tainted if the end system is compromised.
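
As a rough sketch of that split (purely illustrative names and thresholds), the snippet below keeps cheap checks in real time on the endpoint and defers ambiguous samples to a dedicated sandbox, where the heavy emulation, and the trust in its verdict, live off the client:

import queue

# Stand-in for a central cloud or on-premises sandbox service (hypothetical).
sandbox_queue = queue.Queue()

def analyze_on_endpoint(sample):
    """Real-time path on the client: decide what is clear, defer the rest."""
    if sample["matches_known_pattern"]:
        return "block"
    if sample["behavior_score"] < 0.2:   # illustrative threshold
        return "allow"
    sandbox_queue.put(sample)            # near real-time: verdict arrives later
    return "pending-sandbox-verdict"

def drain_sandbox_queue():
    """On the dedicated system: detonate each sample, gather indicators of
    compromise, and convert them into blocking controls (stubbed here)."""
    verdicts = []
    while not sandbox_queue.empty():
        sample = sandbox_queue.get()
        verdicts.append({"sample": sample["name"], "iocs": ["placeholder-domain.example"]})
    return verdicts

print(analyze_on_endpoint({"name": "invoice.exe",
                           "matches_known_pattern": False,
                           "behavior_score": 0.7}))      # pending-sandbox-verdict
print(drain_sandbox_queue())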

3. What changes and what stays the same (attack attributes) – Each time an attack is launched, there is typically a blend of constants in the attack and parts that are altered to avoid detection. For example, the most commonly changed elements are the actual attack binary (in an effort to avoid pattern matches) and, where used, the structure of the email it’s delivered in. What changes less frequently are the exploits used, the communications with the attacker, and the underlying infrastructure behind these. If you are going to look for behavior, it makes sense to look for what remains constant rather than for those attributes that are typically dynamic.
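
One hedged way to picture this is to weight indicators by how often an attacker tends to rotate them; the values below are purely illustrative, but they show why a hit on stable infrastructure or an exploit technique should count for more than yet another unique file hash:

# Illustrative stability weights: how often an attacker tends to change each attribute.
INDICATOR_STABILITY = {
    "file_hash": 0.1,          # rebuilt per campaign, weak evidence on its own
    "email_subject": 0.1,      # changed per wave of delivery
    "exploit_technique": 0.7,  # reused far longer
    "c2_domain": 0.5,
    "c2_infrastructure": 0.8,  # hosting and tooling change least often
}

def detection_value(observed_indicators):
    """Sum the stability weights of whatever was actually observed."""
    return sum(INDICATOR_STABILITY.get(name, 0.0)
               for name, present in observed_indicators.items() if present)

print(detection_value({"file_hash": True, "email_subject": True}))              # 0.2
print(detection_value({"exploit_technique": True, "c2_infrastructure": True}))  # 1.5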

4. Attack lifecycle – isolated component or big picture – When antivirus started, an attack was just a single binary file. Today’s attacks can include a multitude of components, which together make up the lifecycle of the attack. All too often we look for specific attributes in isolation, which leads to high false positives (imagine trying to uniquely identify someone using only the fact that the person has brown hair). Much like a photofit, the more aspects we can join together, the more accurate the result typically is. As such, when looking for your next endpoint, you need to consider not only whether it looks for the different aspects of the lifecycle but, critically, whether it looks at these in a correlated, single-pass, automated process; otherwise, all you create is a huge volume of false positives.
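
As a small, hypothetical sketch of that single-pass correlation, the snippet below treats any one event as a weak “brown hair” observation and only raises an alert when several lifecycle phases line up on the same host (a production system would also bound this by a time window):

from collections import defaultdict

LIFECYCLE_PHASES = {"delivery", "exploit", "install", "command_and_control"}

def correlate(events, minimum_phases=3):
    """events: dicts with 'host' and 'phase'. Alert only when several distinct
    lifecycle phases are observed on the same host."""
    phases_by_host = defaultdict(set)
    for event in events:
        if event["phase"] in LIFECYCLE_PHASES:
            phases_by_host[event["host"]].add(event["phase"])
    return [host for host, phases in phases_by_host.items()
            if len(phases) >= minimum_phases]

events = [
    {"host": "pc-42", "phase": "delivery"},
    {"host": "pc-42", "phase": "exploit"},
    {"host": "pc-42", "phase": "command_and_control"},
    {"host": "pc-07", "phase": "delivery"},   # on its own, not enough to alert
]
print(correlate(events))   # ['pc-42']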

5. Usability – automated or heavy human input (setup and ongoing) – This is the often-overlooked but most critical aspect. The skills shortage is the biggest limiting factor that puts us behind the attacker. Attacks are typically limited only by CPU and network speed; if our security requires human input, we will always be too slow. When considering any new endpoint, there are three aspects to weigh:

  • How many man-hours of effort are required to deploy the solution and tune it to a usable state?
  • What are the typical man-hours required to get actionable events from the solution?
  • How much human effort is required to take the resulting action and apply it across your business to every solution where it is relevant?

Lab tests all too often fail to identify this; and, whilst the product may do “as stated on the tin,” in practical terms, when deployed en masse, it can become unusable.

6. Stand-alone versus platform – The more we automate, the better we keep pace with the attacker. Any new endpoint needs to integrate natively with your other security solutions: as one component discovers a threat, as many of your existing solutions as possible should be able to dynamically benefit from this, and vice versa. Intelligence is the glue linking consolidated actions, but it must be timely, machine readable, and actionable with confidence; otherwise we reinsert a human process that doesn’t scale. How many solutions will be able to interoperate with your new endpoint as a common, automated platform?
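
As a hedged illustration (the record format below is invented for this example, not a reference to any particular standard), this is the kind of timely, machine-readable output one component could publish so that firewalls, proxies, and other endpoints can act on it without a human in the loop:

import json
import time

def publish_indicator(ioc_type, value, confidence):
    """Build a record other controls (firewall, proxy, other endpoints) can act on
    automatically. The schema here is invented for illustration."""
    record = {
        "type": ioc_type,                 # e.g., "domain" or "sha256"
        "value": value,
        "confidence": confidence,         # lets consumers auto-block only above a bar
        "first_seen": int(time.time()),
        "source": "endpoint-sensor",      # illustrative component name
        "suggested_action": "block" if confidence >= 0.8 else "monitor",
    }
    return json.dumps(record)

print(publish_indicator("domain", "bad.example", 0.9))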

7. Measure of success – Before you consider any change, you need to recognize there is a gap in your current capabilities. How do you qualify this? You could look at the number of detections you missed, but that depends on your ability to find everything your existing controls did miss. For me, one of the truest measures is time to detect. Today we too often find attacks too slowly. Time to detect is a measure we can both test and monitor on an ongoing basis. It also allows us to qualify what resources are required to meet such an ongoing objective. Ultimately, this will dictate when the right time is for you to evolve your endpoint strategy.
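
As a small sketch with made-up incident data, time to detect can be tracked as the gap between when a compromise started and when it was first detected, then summarized on an ongoing basis:

from datetime import datetime
from statistics import mean, median

# Made-up incidents: when the compromise began versus when it was first detected.
incidents = [
    {"compromised": datetime(2015, 11, 2, 9, 0),  "detected": datetime(2015, 11, 2, 14, 30)},
    {"compromised": datetime(2015, 11, 20, 8, 0), "detected": datetime(2015, 11, 23, 10, 0)},
]

gaps_hours = [(i["detected"] - i["compromised"]).total_seconds() / 3600 for i in incidents]
print(f"mean time to detect: {mean(gaps_hours):.1f}h, median: {median(gaps_hours):.1f}h")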

8. Replacement or complement? – It’s easy to say antivirus is no longer viable, but the reality is that, typically, we don’t use just antivirus. We leverage consolidated endpoint security suites that can include, for example, DLP, firewall, and encryption with antivirus at the core, making it harder to remove just the antivirus component. Likewise, there are millions of known threats still in the wild today that antivirus does block every day, either through known patterns or the behavioral capabilities most include. As much as we want to reduce the load security adds to the endpoint, the decision we each must make is whether new endpoint solutions are a replacement or a complement. I suspect, for many, the goal is to replace; but this will be a phased transition where, initially, both may run in parallel whilst confidence in the new solution is built, with the long-term aim being to remove capabilities that don’t meet your measures of success.

Today there are pressures both from our own endpoint security deficiencies and from impending new legislation in the EU pointing towards state-of-the-art capabilities. Wikipedia defines this as "the highest level of general development, as of a device, technique, or scientific field achieved at a particular time."

Antivirus has been around for close to three decades, in which time there has been a lot of innovation in new techniques to detect and block attacks, some of which started as add-ons to antivirus. As attacks have evolved, many of these concepts have spun out to become solutions in their own right. Each organization must consider just how much weight it needs to give to leveraging state-of-the-art endpoint capabilities. The scope of endpoint solutions available has never been broader. The decision each business must make in 2016 is: which endpoint techniques provide the capabilities to protect against today’s and future attacks?
