Difficult economic times have made security even harder. Maintaining a security program can quickly become overwhelming, and priorities drift from where they need to be. As fears of a recession loom, there's concern that budgets will shrink and security teams will be forced to do more with less.

According to a survey by JumpCloud, 44% of small- and medium-sized enterprise decision-makers believe their organizations will cut cybersecurity spending this year. Few organizations have deep enough pockets to afford a "red team" that constantly surveils the corporate network and assets. Many organizations are turning to security automation, which IBM estimates can reduce the cost of a breach by 65%. To lay a foundation for automation tools, security leaders need to go back to basics and rethink their perspective on security.

Limiting the scope of assets

The relationships between internal assets and external ones supplied by third-party vendors make it harder to gauge how vulnerable an internal infrastructure really is. The scope of what organizations need to protect is typically larger than they realize. If security leaders performing vulnerability assessments look only at known internal assets, they're putting blinders on and ignoring potential entry points that can easily be exploited.

Security leaders need to arm themselves with an understanding of their perimeter so they can categorize it and ensure proper vulnerability scanning, because assets that go unchecked are the ones attackers use to enter a network. The world runs on web applications, and software development has moved closer to security, even though the person responsible for securing an organization may not have a development background. According to CDNetworks, there were 62.89 million web application attacks per day in the first half of 2022.
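One way to make that perimeter concrete is a simple inventory that tags each internet-facing asset with a category and its last scan date, then flags anything that has gone unchecked. A minimal sketch in Python (the hostnames, categories, and 30-day scan window are illustrative assumptions, not a standard):

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Asset:
    hostname: str
    category: str                 # e.g. "web app", "api", "third-party"
    last_scanned: Optional[date]  # None = never scanned

def unscanned(assets, today, max_age=timedelta(days=30)):
    """Return assets never scanned, or scanned longer ago than max_age."""
    return [a for a in assets
            if a.last_scanned is None or today - a.last_scanned > max_age]

inventory = [
    Asset("shop.example.com",   "web app", date(2024, 5, 1)),
    Asset("api.example.com",    "api",     None),
    Asset("legacy.example.com", "web app", date(2023, 1, 15)),
]

for a in unscanned(inventory, today=date(2024, 5, 20)):
    print(f"needs scanning: {a.hostname} ({a.category})")
```

Even this crude list surfaces the asset that matters most here: the one nobody has ever scanned.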

Misinterpreting priority

When a vulnerability is found, consider multiple factors in deciding how quickly to patch it; it's not always about how critical the vulnerability is on paper. Too many organizations miss what seems like common sense because they feed all of these different assets through the same tools and the same teams, which remediate vulnerabilities by the simplest method available without weighing priority.
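Weighting raw severity by asset context can be enough to reorder a patch queue sensibly. The sketch below (the weights and example findings are illustrative assumptions) combines a CVSS base score with whether the affected asset is internet-facing and business-critical:

```python
def priority(cvss: float, internet_facing: bool, business_critical: bool) -> float:
    """Weight raw severity by asset context; higher = patch sooner."""
    score = cvss
    if internet_facing:
        score *= 1.5   # reachable by anyone, so exposure dominates
    if business_critical:
        score *= 1.2
    return score

findings = [
    ("internal wiki RCE", 9.8, False, False),
    ("public login XSS",  6.1, True,  True),
]
ranked = sorted(findings, key=lambda f: priority(*f[1:]), reverse=True)
print([name for name, *_ in ranked])
```

Note the outcome: the lower-severity flaw on a public, business-critical asset outranks the "critical" bug buried on an internal system, which is exactly the judgment a severity-only pipeline misses.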

Forgetting old projects

Large organizations are often shocked when a vendor performs attack surface reconnaissance and finds a litany of projects that were spun up five or ten years ago and simply left there, never taken offline. There's risk associated with any internet-facing asset, whether it's being used or not.

It's difficult to keep an accurate inventory of all active assets or to project when they should be sunset. Limit the risk by keeping an index of who is responsible for each project, so those owners can periodically review whether their assets still need to remain active.
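An ownership index doesn't require tooling to start; even a flat mapping from asset to owner and last review date makes it possible to flag projects overdue for a keep-or-kill decision. A minimal sketch (the asset names, owners, and 180-day review interval are assumptions):

```python
from datetime import date, timedelta

# asset -> (owner, last date the owner confirmed it should stay online)
ownership = {
    "reports.example.com": ("data-team", date(2024, 3, 1)),
    "beta.example.com":    ("alice",     date(2019, 6, 10)),
}

def overdue_reviews(index, today, interval=timedelta(days=180)):
    """Assets whose owner hasn't re-confirmed them within the interval."""
    return [(asset, owner) for asset, (owner, reviewed) in index.items()
            if today - reviewed > interval]

for asset, owner in overdue_reviews(ownership, today=date(2024, 5, 20)):
    print(f"ask {owner}: does {asset} still need to be online?")
```

The five-year-old beta site is exactly the kind of forgotten project this cadence catches before a vendor's reconnaissance does.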

This is a more thoughtful approach to attack surface management, which has traditionally meant simply limiting the number of exposed assets in an organization. The goal isn't to take critical assets away from users, but to keep only what's necessary, which limits risk and makes security easier.