We talk to David “moose” Wolpoff, Chief Technology Officer (CTO) and co-founder of Randori – who has successfully broken into every company he’s been asked to target – about black hats’ processes for finding and exploiting weaknesses in software.

 

Security Magazine: What is your title, and your background?

moose: My name is David Wolpoff, but I go by moose. I’m the CTO and co-founder of Randori, and we’re building a platform that automates red-teaming for businesses - organizations pay me to hack their systems so that we can expose vulnerabilities in their security perimeter, as well as help their in-house teams to handle cyberattacks when they happen. 

My background is in digital forensics, vulnerability research, reverse engineering and embedded electronic design. Before Randori, I ran "Hacker on Retainer" where we conducted determined adversary attacks for clients (which turned into Randori once I decided to automate what my team was doing).  

Prior to that, I held executive positions at Kyrus Tech, a government defense contractor, and ManTech, where I oversaw teams conducting vulnerability research, forensics and offensive security efforts on behalf of government and commercial clients.

 

Security Magazine: What is the typical process a black hat takes to find and exploit weaknesses in software? How successful are they in exploiting companies with this process?

moose: Just like a company building a product, a hacker wants something that is inexpensive, doesn’t take too much time, AND can likely be reused over and over again. A hacker weighs each of these factors when figuring out which vulnerabilities to exploit (a toy sketch of that weighing follows the list):

  • How hard will this vulnerability be to exploit?
  • What access will I get if I build this exploit?
  • How widely used is the system that this exploit applies to?
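To make that trade-off concrete, here is a minimal sketch of how such a weighing might be scored. The three factors come straight from the list above; the candidate bugs, scores and weights are invented purely for illustration and are not Randori's actual model:

    # Toy scoring of candidate vulnerabilities along the three factors
    # above. All candidates, scores and weights are hypothetical.

    # (name, effort to exploit 0-10 where lower is easier,
    #  access gained 0-10, prevalence of the target system 0-10)
    candidates = [
        ("VPN appliance pre-auth bug", 7, 9, 8),
        ("Obscure CMS plugin flaw",    2, 3, 1),
        ("Mail server memory bug",     8, 8, 6),
    ]

    def attractiveness(effort, access, prevalence):
        # Cheap to build, high access, widely reusable -> attractive.
        return (10 - effort) + 2 * access + prevalence

    for name, effort, access, prevalence in sorted(
            candidates, key=lambda c: attractiveness(*c[1:]), reverse=True):
        print(f"{attractiveness(effort, access, prevalence):3d}  {name}")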

A big misconception about vulnerabilities is that hackers care about the 60+ Common Vulnerabilities and Exposures (CVEs) reported every single day. These are all real bugs, but the truth is that many of them can’t be exploited to do anything meaningful, or have already been patched.

As an attacker, I have to cut through the noise and decide what matters. When I have a target, I keep a list of assets used in their environment and cross-check it against known vulnerabilities that may apply. In some cases, these older bugs will have already been patched, but I can still go digging for the same flaw elsewhere in the code - because it’s more likely that bug isn’t patched and represents a more covert way in.
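As a rough illustration of that cross-check, here is a minimal sketch. The inventory, the CVE records and the matching logic are simplified assumptions for illustration; in practice the asset list would come from reconnaissance and the vulnerability data from a feed such as NVD:

    # Sketch of cross-referencing a target's asset list against known
    # CVEs. Both data sets below are tiny illustrative stand-ins.

    # Assets observed in the target environment: (product, version)
    inventory = [
        ("openssh", "7.4"),
        ("apache-httpd", "2.4.49"),
        ("exim", "4.94"),
    ]

    # Simplified CVE records: id, affected product, affected versions
    known_cves = [
        {"id": "CVE-2021-41773", "product": "apache-httpd",
         "versions": {"2.4.49"}},
        {"id": "CVE-2020-28018", "product": "exim",
         "versions": {"4.90", "4.92", "4.94"}},
    ]

    def candidate_targets(inventory, cves):
        # Yield (asset, cve_id) pairs worth a closer look.
        for product, version in inventory:
            for cve in cves:
                if cve["product"] == product and version in cve["versions"]:
                    yield f"{product} {version}", cve["id"]

    for asset, cve_id in candidate_targets(inventory, known_cves):
        print(f"{asset}: check {cve_id}, and nearby code for unpatched variants")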

A simple tactic is to audit the open source code. Software is pushed out very rapidly in continuous development cycles, and it doesn’t take too keen an eye to pinpoint the tags developers frequently leave behind in their code, such as “FIXME” or “RBF” (remove before flight).
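A minimal sketch of that kind of audit, assuming a locally checked-out copy of the project; the marker list and file extensions here are illustrative choices:

    # Walk a checked-out source tree and flag developer markers that
    # often signal unfinished or known-fragile code.
    import os

    MARKERS = ("FIXME", "TODO", "XXX", "RBF")  # RBF = "remove before flight"
    EXTENSIONS = (".c", ".h", ".py", ".js", ".go")  # adjust per project

    def scan_tree(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith(EXTENSIONS):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as f:
                        for lineno, line in enumerate(f, 1):
                            if any(marker in line for marker in MARKERS):
                                print(f"{path}:{lineno}: {line.strip()}")
                except OSError:
                    continue  # unreadable file; skip it

    scan_tree("path/to/cloned-project")  # e.g. a git clone of the target's stack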

Your last best hope is called fuzzing: this is when you have a piece of software bombard a system with random inputs until something makes the system react strangely. When it does, you’ve hopefully identified a bug and you know you’re onto something.
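In its simplest “dumb” form, fuzzing can be as crude as the sketch below. Here parse_record is a deliberately buggy toy standing in for whatever software is under test; real-world fuzzing usually relies on coverage-guided tools such as AFL or libFuzzer:

    # Bare-bones random fuzzer: bombard a parser with random inputs
    # until something makes it react strangely.
    import random

    def parse_record(data):
        # Toy length-prefixed format: the first byte declares the body
        # length. Deliberately buggy: it trusts that length blindly.
        if len(data) < 2:
            return b""
        length = data[0]
        if length == 0:
            return b""
        body = data[1:1 + length]
        return body[length - 1]  # IndexError when body is shorter than claimed

    def fuzz(trials=10_000, max_len=64):
        for i in range(trials):
            size = random.randint(0, max_len)
            data = bytes(random.getrandbits(8) for _ in range(size))
            try:
                parse_record(data)
            except Exception as exc:
                # A crash or unexpected exception means we're onto something.
                print(f"trial {i}: {type(exc).__name__} on input {data!r}")
                return data
        print("no crashes found")

    fuzz()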

 

Security Magazine: How important is it for companies to “pentest” their product/services/or program?

moose: Pentesting means a lot of different things to different people - the term often conflates application testing with network pentesting and, in some cases, a more thorough red-team engagement. My definition of a pentest aligns more closely with network/systems pentesting, and in that case, yes, pentesting is a very important security practice, one that should in many cases be performed routinely.

A pentest will tell you if a security system and/or control is working as it was designed to work.  Questions a pentest will answer:

  • Does my program function as it was designed to function? 
  • Is my security program doing the things I expect it to do? 
  • Does this security control I put in place work as expected?

But a pentest has its shortcomings. Pentesting aims to find flaws across a broad range of things; it has the breadth, but not the depth. It does a good job of proving that a protection is working, but not that the program behind it is working. Pentesters go broad, drawing on a comprehensive public corpus of techniques, but they won’t stress a program, and they typically don’t go very deep because of time, budget and scope.

Sitting on the frontlines as a red-teamer, I’m regularly asked, “Should I do a pentest or hire a red team?”

But that’s not the question they should be asking. Security leaders should be asking, “What can I do to make it more expensive for an attacker to 'pwn' me?” Because a hacker will get in.  You want to make it harder and more expensive, giving you the opportunity to find them before they get to your crown jewels. 

Expense for an attacker is defined by many factors: time to break in, cost for an exploit, complexity, time spent sitting in a network waiting, etc. Expense is increased by forcing an attacker to go through many “hoops” to get to the crown jewels and meet their objective. 

Red-teaming tells you if you’ve adequately secured the most important things to protect. (Note the word adequately, nothing is completely secure.)

 

Security Magazine: In an interview with Venture Fizz, you mentioned that attackers and defenders think much differently when solving a specific problem or vulnerability. Can you explain more?

moose: As it stands, a defender of a system will always start from a place of trying to design a secure system. They do an audit of potential weaknesses, and then attempt to go from where they are to being “secure.” In contrast, a hacker knows you cannot be secure. They will always start from the assumption that there is a way in, and that accessing it is a question of how, not if. So the defender is trying to beat the hacker, but at what?

The blue-teamer and red-teamer aren’t even speaking the same language. The only common language they speak is money: how much to spend, when, and whether it’s worth it. What you can do as a defender is know when your attackers get in, and make it more expensive for them to do so.

As a defender, once you’re keyed into how an attacker thinks, you’re much better equipped to combat them. You’ll always be behind if you’re trying to plug every vulnerability you find; it’s more important to identify the elements of your attack surface most likely to be targeted by an adversary. In order to do that, you need to flip your perspective to understand what your valuable IP is (aka your crown jewels) and know what’s going on in your system when someone gets in. 

Just as water flows to the lowest point, hackers usually look to take the path of least resistance when targeting a victim. They want to break into a system as quietly and with as little effort as possible - and with the fewest exploits. It follows then, that in order to keep the bad guys out, you as the chief information security officer (CISO) want the path to your crown jewels to be buried behind a lot of tedious work and alarm bells.

Businesses often try to solve the noise problem with threat intelligence. Simply knowing Russia is coming after your business is beyond useless. Knowing the constituent components of your infrastructure and what they look like to a Russian hacker is invaluable. 

Organizations also assume the publicly known vulnerabilities are the ones hackers are using. Maybe, but probably not. As discussed earlier, black hats will use known vulnerabilities as guides to find similar, unpatched vulnerabilities in the same systems that are more likely to go unnoticed.