Defense Is Hard

So here we are in a new office with a new view (it has a window!) with some thoughts to share. There always seem to be random discussions related to offensive vs. defensive tools and techniques. These often invoke clichéd truisms such as “the attacker only has to be right once”, which is actually a terrible defense (pun intended): it’s become quite obvious over the past few years that attackers need to chain together several wins in order to achieve total compromise… but I digress. A good attacker knows he or she can always win given enough time and resources. This is because attackers understand their game a lot better than most defenders. People in the role of defender don’t always seem to understand that their goal is ultimately undecidable. You can never answer with absolute certainty that your systems are invulnerable. Most people would never attempt to make this argument for fear of being cast out into the unkind world of public mail spools. Yet time and time again we see those in charge of defense rolling out solutions like network-based browser exploit detection. Seriously? You think you can detect and stop an exploit written in a Turing-complete language, consumed by an application that performs arbitrary computation as described by an untrusted source, just by reassembling and parsing a couple of network packets? Furthermore, that device assumes it understands how every browser it protects will interpret that potential exploit. The very premise is laughable.

So the problem is clear: defenders are stuck solving an infinite set of problems while their attackers enjoy the luxury of solving just a handful. One solution for defenders is to increase the number of problems the attacker has to solve while decreasing their own.

“But you just said my problems are infinite!” – Yes, I did, but in the real world we can begin to tackle this with common-sense measures such as pen testing regularly, implementing proper authentication mechanisms and, above all else, reducing attack surface. Why defend more than you have to? So now you’re 80% of the way there, but unfortunately that last 20% is the part you can’t really solve with off-the-shelf solutions, and it’s where most of your pain is coming from.

So how do we begin to increase attacker problems? Sure, you can run up-to-date software and firewalls and that network appliance in your data center that apparently solves the halting problem… or you could begin to think differently and push undecidability onto your attacker at the moment he tries to exploit your application. The best problems we can throw at an attacker have an element of unpredictability at their core. A good example of this is the protection provided by ASLR (Address Space Layout Randomization). It forces the attacker to solve additional problems, mainly “where is my data” and “where is their code”. We’ve got to think beyond this if we are to raise attacker costs and lower our own.

Ask yourself where and how you can introduce unpredictability along the path your attacker is most likely to take. Most enterprises are forced to run a homogeneous network; this is just the cost of business, and it’s not going to change anytime soon. Even if you could switch up those applications, they can be fingerprinted and attacked individually. But there’s still room to introduce unpredictability in this scenario. There are lower-level components you can modify that are harder to fingerprint remotely. For years I’ve recommended running applications with high attack surfaces (web servers, IM clients, browsers, etc.) with a different memory allocator than the default system allocator. This introduces an element of unpredictability much like ASLR and breaks a number of assumptions most exploit writers are forced to make. It’s not perfect, but it’s a lot harder for an attacker to fingerprint, let alone predict. I’m not alone in doing this, of course. Ask the best vulnerability/exploit researcher you know; he or she probably does the same thing out of a healthy fear of predictability. This isn’t just advice for end users; it’s the same advice I give to developers. Study the exploits attackers are writing for the vulnerabilities in your application and then reduce the predictability they rely on.

Each problem your attacker solves gives him more insight into how to solve the next. This is old news to you if you’ve ever declared game-over on a pentest that started with nothing more than a good information leak. As defenders we’ve got to keep weighting the scale with unpredictability until it tips in our favor. For each additional problem we push onto the attacker, our cost goes down and his goes up. Right now the scale is heavily weighted in his favor because we rely on snake-oil solutions. So before you make that quarterly presentation to your CISO that includes exported data from that network appliance, first try to answer the question: “What new problems have we pushed at our attackers recently?”
