Why do organizations find it challenging to respond to social engineering incidents, and how can they better defend against them?

We talk to Daniel Wood, CISSP, GPEN, Associate Vice President of Consulting at Bishop Fox, to find out more.

Wood leads all service lines, develops strategic initiatives, and has established the Applied Research and Development program at Bishop Fox. He has instituted service line advisory boards of in-house subject matter experts to enhance testing processes and methodologies across the firm and to ensure consistent results across both core services and emerging services. He has more than 15 years of experience in cybersecurity and is a subject matter expert in red teaming, insider threat, and counterintelligence.

Wood was previously the manager of security engineering and technology at Bridgewater Associates, where he shaped the strategic direction of technology for the firm and oversaw technical security assessments of Bridgewater’s international office expansions. He has also served in roles supporting the U.S. government in security architecture, engineering, and offensive operations as a Security Engineer and Red Team Leader. He supported the U.S. Special Operations Command (USSOCOM) on red teaming and digital warfare operations, and the U.S. Army on the Wargaming Cyber Effects on Soldiers’ Decision-Making project. Wood is currently a member of the Ithaca College Cybersecurity Advisory Board. 

He holds a Bachelor of Science in Administration of Justice from George Mason University. 

 

Security Magazine: What are some of the challenges of responding to social engineering incidents?

Daniel Wood: Many breaches occur due to social engineering; one only has to look at the recent breach of SANS, an organization known for its security training. Any organization can fall victim to a social engineering attack – organizations need to understand this.

Additionally, social engineering can be an extremely personal issue for the victims of an attack. Victims may not want to admit they were successfully compromised for fear of reprisal for their failure; there is a stigma attached, and the social and psychological factors at play need to be considered. Traditional security organizations typically don’t have this level of understanding and don’t know where to begin when building a comprehensive security program, so this aspect usually gets left out.

Organizations that do employ some level of social engineering protection usually stop at step one, which is simply adding social engineering to the scope of a security assessment. They rely on this annual or semi-regular exercise without thinking about training and education programs, security controls, processes for identifying, containing and eradicating a threat, and many other things.

 

Security Magazine: Why should enterprise security teams revise their incident response plans to include social engineering?

Wood: Compromises can occur through a variety of means, and attacking an organization through its people targets the ‘weakest link.’ Most organizations invest heavily in technical controls and fail to think about their ‘soft’ targets. Humans are malleable and usually genuinely want to help others; attackers take advantage of this. Almost a quarter (22 percent) of all attacks included a social engineering component, with 40 percent of malware installed via malicious email links and 20 percent via email attachments (according to the 2020 Verizon DBIR). Additionally, financially motivated social engineering is on the rise: as organized criminal groups become more technically capable and proficient, so do their attacks.

 

Security Magazine: What are organizations doing when (or, more commonly, if) they detect an attack (via phishing, phone or physical)? Is this strategy successful? Why or why not?

Wood: The majority of organizations I’ve seen are purely reactive. After detecting a potential compromise, they will engage their incident response functions to contain and eradicate any potential attacker within their network. What I don’t see them doing is taking a proactive approach and conducting an after-action review (AAR) of what happened, why it happened and how to prevent it in the future. I also don’t see many organizations building robust social engineering assessment programs to proactively test their email, network and endpoint controls as well as their employees, which is troublesome.

 

Security Magazine: What are some steps organizations can take to strengthen their social engineering defensive strategy?

Wood: Employee education on social engineering techniques such as phishing, vishing, pretexting and other variants is important. But I would be remiss not to mention that training on the upstream processes, covering when an employee should report a suspected attempt and how they should report it, is also key. You want to remove any doubt from an employee’s mind about whether they should report. Additionally, you’ll want to ensure that your strategy doesn’t penalize users who have fallen victim to a social engineering attack, but educates them.

 

Security Magazine: What are some best practices that can be applied to enforce the strongest possible security posture?

Wood: Every organization will have its own definition of an acceptable level of risk and should make security decisions and investments backed by that risk appetite. Beyond employee training and education, organizations will want to focus on getting the basics right, ensuring there are layers of controls in place that make them more resilient even if their users fall victim to social engineering.

Some examples include:

  • Ensure your organization doesn’t expose itself via an open mail relay; open relays enable email spoofing because they allow unauthenticated email to be relayed through the organization’s gateway, which makes phishing harder to defend against since the messages look legitimate to internal users. By implementing strict user authentication and IP authorization at the gateway, you can take this opportunity away from the attacker (a simple relay-check sketch follows this list).
  • Some email security controls provide filtering capabilities that can strip all external attachments and links to prevent execution of malware and clicks on malicious drive-by download links. They can also label external emails on receipt with designators such as [EXTERNAL] in the subject line and/or body, or place a colored warning bar across the email. This helps reduce the chance of an attacker pretexting a victim as an internal user (see the labeling sketch after this list).
  • Security controls such as Cofense PhishMe provide an email client plug-in, PhishMe Reporter, that allows an end user to submit a suspected phishing email for analysis and enables an organization’s SOC to rapidly delete all occurrences of the offending email from user mailboxes, preventing further spread if the phishing campaign casts a wide net. Other security controls have similar capabilities and should be reviewed to see what works best for the organization.
  • If you do fall victim to a social engineering attack, knowing how attackers like to operate and educating your defenders on these tactics will help when they’re tasked with monitoring the network and identifying the exfiltration of data.
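
Relating to the open mail relay point above, here is a minimal Python sketch, not taken from the interview, of how a defender might verify that their own gateway refuses to relay unauthenticated mail between two external addresses. The gateway hostname and both addresses are placeholder assumptions; run checks like this only against infrastructure you own.

    # Hypothetical open-relay check: connect to your own mail gateway and try to
    # relay a message between two external addresses without authenticating.
    # GATEWAY and both addresses below are placeholders.
    import smtplib

    GATEWAY = "mail.example.com"
    EXTERNAL_SENDER = "spoof@attacker.example"
    EXTERNAL_RECIPIENT = "victim@other-company.example"

    def is_open_relay(host: str) -> bool:
        try:
            with smtplib.SMTP(host, 25, timeout=10) as smtp:
                smtp.ehlo()
                # An open relay accepts MAIL FROM / RCPT TO for external
                # parties with no authentication at all.
                code, _ = smtp.mail(EXTERNAL_SENDER)
                if code != 250:
                    return False
                code, _ = smtp.rcpt(EXTERNAL_RECIPIENT)
                return code == 250
        except (smtplib.SMTPException, OSError):
            return False

    if __name__ == "__main__":
        print("Open relay!" if is_open_relay(GATEWAY) else "Relay refused (good).")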
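
Similarly, for the [EXTERNAL] labeling point above, the rough sketch below illustrates the idea, assuming a couple of made-up internal domains. Commercial gateways implement this as a built-in policy rather than custom code.

    # Illustrative filter that tags inbound mail whose sender is not from an
    # internal domain. INTERNAL_DOMAINS and the warning text are assumptions.
    from email import message_from_bytes
    from email.message import Message

    INTERNAL_DOMAINS = {"example.com", "corp.example.com"}
    WARNING = "CAUTION: This email originated outside the organization."

    def tag_external(raw: bytes) -> Message:
        msg = message_from_bytes(raw)
        sender = msg.get("From", "")
        domain = sender.rsplit("@", 1)[-1].strip("> ").lower()
        if domain not in INTERNAL_DOMAINS:
            subject = msg.get("Subject", "")
            if "Subject" in msg:
                del msg["Subject"]
            msg["Subject"] = "[EXTERNAL] " + subject   # designator in the subject line
            msg["X-External-Warning"] = WARNING        # header a client can render as a banner
        return msg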

More advanced examples, depending on the maturity of an organization’s defensive posture, include:

  • Remove privileged and administrative accounts where they are not absolutely needed and leverage a just-in-time secrets management system; if an end user is successfully phished, this limits the access rights an attacker starts with when establishing a foothold.
  • For privileged and administrative accounts, institute a credential check-out process that requires two-party approval with justification review and automatically expires credential access after a set period of time (a check-out sketch follows this list).
  • Establish user baselines with user and entity behavior analytics (UEBA) to serve as an early alert system; if your endpoint controls fail, you may still be able to detect an attack based on deviations from these baselines of usage and access patterns (a baseline sketch follows this list).
  • Similarly, as you generate baselines of activity for users and entities, you can enrich your data with intelligence and begin applying machine learning through technologies and security controls known as Security Orchestration, Automation and Response (SOAR). Instead of relying on a human analyst to review every potential incident, these solutions automate repeatable and mundane tasks, allowing analysts to focus on more complicated security issues and investigations. SOAR technologies provide scalability and speed to organizations that have a hard time manually identifying and responding to threats.
  • Lastly, a no-fault social engineering testing program is a good way to test employees via phishing and other social engineering techniques. Ensure end-user profiles are created that map each user’s access rights to assets and data. Knowing what could be exposed if an end user is compromised may inform which controls you put in place and where – not all controls are equal for every user. Some users may require unique controls based on their business processes and technical aptitude, while others may not be exposed to critically sensitive information or processes.
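
For the credential check-out bullet above, the following is a minimal sketch of the two-party approval and auto-expiry logic, assuming a four-hour window; a real deployment would sit on top of a privileged access management or secrets management product rather than standalone code.

    # Toy check-out flow: two distinct approvers are required, and access
    # expires automatically after CHECKOUT_WINDOW. All names and values are examples.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Optional

    CHECKOUT_WINDOW = timedelta(hours=4)   # assumed expiry period

    @dataclass
    class CheckoutRequest:
        requester: str
        account: str
        justification: str
        approvers: set = field(default_factory=set)
        granted_at: Optional[datetime] = None

        def approve(self, approver: str) -> None:
            if approver == self.requester:
                raise ValueError("Requesters cannot approve their own check-out")
            self.approvers.add(approver)
            if len(self.approvers) >= 2 and self.granted_at is None:
                self.granted_at = datetime.utcnow()   # two-party approval satisfied

        def is_active(self) -> bool:
            # Credential access lapses once the check-out window has elapsed.
            return (self.granted_at is not None
                    and datetime.utcnow() - self.granted_at < CHECKOUT_WINDOW)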
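
And for the UEBA bullet, this toy example shows the underlying idea of a baseline and deviation alert: flag a user whose daily outbound data volume strays far from their historical norm. The numbers and threshold are invented for illustration; real UEBA products model many more signals than a single metric.

    # Flag days whose outbound volume deviates strongly from a user's baseline.
    from statistics import mean, stdev

    def is_anomalous(baseline_mb: list, today_mb: float, z_threshold: float = 3.0) -> bool:
        """True when today's volume is more than z_threshold sigmas from the baseline."""
        mu = mean(baseline_mb)
        sigma = stdev(baseline_mb) or 1.0   # guard against a perfectly flat baseline
        return abs(today_mb - mu) / sigma > z_threshold

    # A user who normally uploads ~50 MB/day suddenly moves 900 MB in one day.
    history = [48.0, 52.0, 47.5, 51.0, 49.0, 50.5, 53.0]
    print(is_anomalous(history, 900.0))   # True: worth an analyst's attention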