When discussing cybersecurity, a color can make all the difference. I recently spoke with Christopher Camejo, Director of Threat and Vulnerability Analysis for NTT Com Security, about the differences between White, Black, and Blue Hat hackers, and about how a Red Team can improve a penetration test.


What is the difference between a White, Black, and Blue “Hat”?

Black Hat hackers are what most people outside of security think of when they hear the unqualified word “hacker.” These are the “bad guys” who are trying to find vulnerabilities and breach networks in order to steal or alter data or just crash systems, all for malicious purposes, usually financial gain.

White Hat hackers are essentially the good guys; they use the same techniques as the Black Hats to find vulnerabilities in software and networks, except instead of exploiting them for financial gain or other malicious purposes, they report their findings to the affected parties in order to get these flaws fixed. Some of these White Hats work as security researchers or penetration testers for governments or companies while others are just independent security researchers looking to make the world a safer place in their free time (often they are both).

Unfortunately, there are often misunderstandings when a White Hat tries to report vulnerabilities: companies outside the tech sector seem to have a hard time comprehending that some random guy or girl on the Internet is capable of finding vulnerabilities in their products and is willing to help them fix those problems out of the kindness of their heart. The knee-jerk reaction is often to dismiss these reports or treat the individual as if they were a criminal.

“Blue Hat” refers to White Hat consulting firms that can be brought in to test for security vulnerabilities in products. This is something I personally wish there was more of, as software is spreading into more products every day (the much-used buzzword “Internet of Things”), and the companies outside the tech sector trying to build these products don’t seem to understand what they need to do to harden them against hackers.

As with many things in life, there is a large grey area between White Hat and Black Hat hackers, the area we call Grey Hats. These are hackers whose activities fall somewhere between “helping” and “hurting.” They may break into systems simply out of personal curiosity, neither seeking personal gain nor reporting the vulnerabilities they find so they can be fixed, or they may brag publicly about their exploits to gain credibility and embarrass organizations with weak security. Many hackers started in this mode before maturing into White Hats or succumbing to Black Hat motives, and some maintain a healthy streak of Grey Hat curiosity even after becoming what would widely be considered a White Hat.

The word “hacker” has many definitions, and there’s one more worth mentioning because it is the original meaning of the word (in a technology context, at least): someone who likes tinkering with technology, making it do new and interesting things it may not have been designed to do, and pushing the limits of what is possible. This is what we mean when we talk about the hackers in Silicon Valley changing the world. It’s a shame that the word “hacker” has taken on such negative connotations in the public mind as a result of its association with the criminals we call Black Hats.

 

How can penetration testing improve with a “Red Team”?

Red Team assessments are essentially a special case of penetration testing conducted over a much wider scope and timescale. Where a classic penetration test would typically focus on direct technical attacks against a few critical systems over the course of a few hours to a few days, a Red Team assessment would open the door to a much broader range of target systems and attack vectors, like social engineering attacks, and would be conducted over the course of weeks or months.

The use of alternative attack vectors in a Red Team context helps eliminate the blind spots inherent in classic penetration tests. Often, the easiest way to breach a critical system isn’t to attack it directly from the Internet; rather, it is to compromise some other, less-well-protected system, either directly or via social engineering, so that the critical system can be attacked from inside the network perimeter.

Spreading the simulated attacks out over time also allows the Red Team to research its target more thoroughly, hunting for obscure vulnerabilities and alternate ways into the network, and gives social engineering attacks or password cracking the time they need to be effective. The expanded timeframe also allows the Red Team to be stealthy once it gains access to the network, providing a real test of the target organization’s ability to detect and respond to an ongoing attack.

It’s worth pointing out that the Target breach is exactly the type of attack that a Red Team assessment would simulate and help an organization prepare for: they were breached through a third-party contractor rather than directly from the Internet. The attackers spent weeks preparing to steal customer data, and Target’s own monitoring systems detected the attacker’s malware without eliciting any effective response from its security personnel. Perhaps a Red Team assessment would have alerted them to the failures in their processes so they could have been addressed before the real attack.

 

What’s the most surprising penetration testing case that you’ve worked on?

When you deal with everyone’s security failures day in and day out it makes you a bit cynical. It’s at the point now where I expect that we will be able to break everything that we get our hands on, and the only thing that surprises me anymore is when someone actually does something right and we can’t break it.

Many years ago I did a job at a hospital, what we would call a Red Team assessment today. I simply walked in the front door, sat down in the lobby across from the security officer and plugged into a spare network jack in the back of a VoIP phone. It turns out that everything in the hospital was attached to one big “flat” network so I could scan anything and everything for vulnerabilities from the jack on the back of that phone. It didn’t take long to find a database with no administrator password, so I connected to it.
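The danger of a “flat” network is that a single foothold can reach every host. Purely as an illustration (not the actual tooling used in this engagement), the kind of sweep that foothold enables can be sketched as a minimal TCP connect scan in Python; the demo target here is a throwaway listener on localhost so the result is deterministic:

```python
import socket
from contextlib import closing

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds (port open)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demo against a listener we control: open a socket on localhost so the
# scan has a known-open port to find.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # OS assigns a free port
listener.listen(1)
open_port = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [open_port])
print(found)
listener.close()
```

On a segmented network, a firewall or VLAN boundary would drop these connection attempts from an untrusted jack; on a flat network, every service answers.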

Inside the database I found an old copy of a patient database from the mammography department, complete with medical records and radiology images: score one for a HIPAA violation. More interestingly, I found the live database that controlled all of the electronic door locks in the building. By changing a few characters in the database I could lock and unlock doors at will, or program them to unlock at certain times. Also in there was the live database that tracked security officer check-ins as they made their rounds of the hospital at night.

Not content with being tethered to a phone in the lobby, I looked up one of their doctors on the public website and called the IT helpdesk and pretended to be him, complaining about not being able to get on the wireless network. Whatever security questions the helpdesk asked me were in his profile on the website, and they very helpfully walked me through the Wi-Fi connection process over the phone, including providing me with the password. The Wi-Fi network provided access to everything that the phone jack had access to, including the unprotected database, so now we were mobile.

The endgame came a few hours later, when all the hospital pharmacy personnel had gone home for the night. We were able to wander up to the pharmacy with our laptop and, with no security officers in sight, unlock the door for a few seconds to let ourselves in. I have no idea what the street value of all the narcotics in that room was, but we took our pictures for the “evidence” file and left.

At no time did we have to exploit any sort of real technical vulnerability, crack any passwords, or anything like that. It was all just a matter of looking for the holes and using them to our advantage.

In another case, at a hedge fund’s offices, I was able to get from the parking garage to the server room after hours simply by yanking or shoving on locked doors (four of them between the garage and the goods). They all used electromagnets to hold the doors shut when “locked,” and none were actually strong enough to do the job. It’s usually not THAT easy to get in.

 

Is there one area or sector that you see experiencing the most or fewest cyber breaches, and why?

Retail has been getting hit hard lately, simply because they handle payment cards, and those card numbers are valuable for fraud purposes. Retail also tends to operate on slim margins, which means not a lot of money or focus on security. That makes them an easy target.

As health records have become more valuable for insurance fraud, we are starting to see an increase in attacks on hospitals and other healthcare providers. We can expect these attacks to increase dramatically over the next few years, and it should be especially concerning as healthcare is another sector that is often publicly funded and has little money for security.

A new and growing development over the past few years has been attacks on small and medium businesses. These attacks capture banking credentials in order to clean out bank accounts via wire transfer, use forged emails to trick accounting personnel into initiating wire transfers, or encrypt files and hold the decryption key for ransom (“ransomware”). Yet again, small and medium businesses tend not to have a lot of money set aside for security, and they lack the mature business processes or backups that would provide a defense against wire fraud or ransomware.

The exception to this is that the finance industry itself always seems to be on top of the security game, even as its customers take a beating. This may be partly because banks have always been a target of thieves, so maintaining security is more ingrained in the culture, but financial institutions also have the ability to move liability around. When a small business’s account gets emptied via wire fraud, it is usually the small business left with the loss, not the bank, and the liability shift associated with the move to EMV (chip) cards in the U.S. is pushing payment card fraud losses down to the merchants as well.

The simple fact is that organized crime is a business; in order to be successful they must seek to gather the most valuable information for the least cost. That means stealing things that can quickly be turned into money (bank credentials, payment card numbers and health records) from those least equipped to defend against them: anyone with a small security budget relative to the value of the data they’re trying to protect.