Malicious cyber actors have increasingly leveraged web shells to gain or maintain access to victim networks. According to the U.S. National Security Agency (NSA), web shell malware is software deployed by a hacker, usually on a victim’s web server, that can execute arbitrary system commands, commonly sent over HTTPS. To harden and defend web servers against this threat, NSA and the Australian Signals Directorate have issued a dual-seal Cybersecurity Information Sheet (CSI). The guide explains how to detect and prevent web shell malware from affecting Department of Defense and other government web servers, and the guidance is likely to be useful for any network defender responsible for maintaining web servers.

Web shell malware has been a threat for years and continues to evade detection by most security tools, says the NSA. Malicious cyber actors are increasingly leveraging this type of malware to gain persistent access to compromised networks while using communications that blend in with legitimate traffic. Attackers might send system commands over HTTPS or route commands to other systems, including internal networks, in ways that appear to be normal network traffic, the NSA adds.

The CSI contains detection techniques, along with links to signatures and lists maintained on GitHub. The report also highlights prevention techniques and recovery guidance. NSA encourages network defenders who maintain web servers to review this technical guidance and apply the mitigations as appropriate.

Below are the detection, prevention, and response and recovery strategies from the CSI.

Mitigating Actions (DETECTION)

Web shells are difficult to detect as they are easily modified by attackers and often employ encryption, encoding, and obfuscation. A defense-in-depth approach using multiple detection capabilities is most likely to discover web shell malware. Detection methods for web shells may falsely flag benign files. When a potential web shell is detected, administrators should validate the file’s origin and authenticity. Detection techniques include:

“Known-Good” Comparison

Web shells primarily target existing web applications and rely on creating or modifying files. The best method of detecting these web shells is to compare a verified benign version of the web application (i.e., a “known-good”) against the production version. Discrepancies should be manually reviewed for authenticity. Additional information and scripts to enable known-good comparison are available in Appendix A and are maintained on
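The comparison described above can be sketched as a simple hash-based diff, assuming the known-good and production copies are available as local directories. The function names and layout here are illustrative, not taken from the CSI's appendix scripts:

```python
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def compare_to_known_good(known_good_dir, production_dir):
    """Return files that are new, missing, or modified in production
    relative to the verified benign copy."""
    good = hash_tree(known_good_dir)
    prod = hash_tree(production_dir)
    return {
        "added": sorted(set(prod) - set(good)),
        "missing": sorted(set(good) - set(prod)),
        "modified": sorted(p for p in good.keys() & prod.keys()
                           if good[p] != prod[p]),
    }
```

Any path reported as added or modified warrants the manual review the CSI describes, and, per the timestomping caution below, the file's own timestamps should not be trusted during that review.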

When adjudicating discrepancies with a known-good image, administrators are cautioned against trusting timestamps on suspicious systems. Some attackers use a technique known as “timestomping” [6] to alter created and modified times in order to add legitimacy to web shell files. Administrators should not assume that a modification is authentic simply because it appears to have occurred during a maintenance period. However, as an initial triage method, administrators may choose to prioritize verification of files with unusual timestamps.

Web Traffic Anomaly Detection

While attackers often design web shells to blend in with normal web traffic, some characteristics are difficult to imitate without advanced knowledge. These characteristics include user agent strings and client Internet Protocol (IP) address space. Prior to having a presence on a network, attackers are unlikely to know which user agents or IP addresses are typical for a web server, so web shell requests will appear anomalous. In addition, web shells routing attacker traffic will default to the web server’s user agent and IP address, which should be unusual in network traffic. Uniform Resource Identifiers (URIs) exclusively accessed by anomalous user agents are potentially web shells. Finally, some attackers neglect to disguise web shell request “referer [sic] headers” as normal traffic. Consequently, requests with missing or unusual referer headers could indicate web shell presence. Centralized log-querying capabilities, such as Security Information and Event Management (SIEM) systems, provide a means to implement this analytic. If such a capability is not available, administrators may use scripting to parse web server logs to identify possible web shell URIs. Example Splunk® queries (Appendix B), scripts for analyzing log data (Appendix C), and additional information about detecting web traffic anomalies are maintained at 
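One of these analytics, URIs accessed exclusively by anomalous user agents, can be approximated with a short script over parsed access-log records. This is a sketch, not the CSI's appendix scripts; the record fields and the known-agent baseline are assumptions for the example:

```python
from collections import defaultdict

def uris_with_only_unknown_agents(records, known_agents):
    """Return URIs whose every request came from a user agent outside
    the server's normal set -- one of the CSI's web shell indicators.
    `records` are parsed log entries with "uri" and "user_agent" keys."""
    agents_by_uri = defaultdict(set)
    for r in records:
        agents_by_uri[r["uri"]].add(r["user_agent"])
    return sorted(uri for uri, agents in agents_by_uri.items()
                  if not (agents & known_agents))
```

A URI flagged this way is a triage lead, not proof of compromise; it should be checked against the application's expected content.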

Signature-Based Detection

From the host perspective, signature-based detection is unreliable because web shells may be obfuscated and are easy to modify. However, some cyber actors use popular web shells (e.g., China Chopper, WSO, C99, B374K, R57) with minimal modification. In these cases, fingerprint or expression-based detection may be possible. A collection of Snort® rules to detect common web shell files, scanning instructions, and additional information about signature-based detection are maintained at
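A minimal file-scanning sketch in this spirit is shown below. The patterns are illustrative stand-ins for the much larger, regularly updated rule sets the CSI links to, and they will miss any obfuscated shell:

```python
import re
from pathlib import Path

# Illustrative patterns only: common PHP web shell constructs that pass
# request parameters to code-execution functions.
SHELL_PATTERNS = [
    re.compile(rb"eval\s*\(\s*\$_(POST|GET|REQUEST)"),
    re.compile(rb"(system|passthru|shell_exec)\s*\(\s*\$_(POST|GET|REQUEST)"),
    re.compile(rb"base64_decode\s*\(\s*\$_(POST|GET|REQUEST)"),
]

def scan_web_root(root):
    """Return PHP files whose contents match any pattern above."""
    hits = []
    for path in Path(root).rglob("*.php"):
        data = path.read_bytes()
        if any(p.search(data) for p in SHELL_PATTERNS):
            hits.append(str(path))
    return sorted(hits)
```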

From the network perspective, signature-based detection of web shells is unreliable because web shell communications are frequently obfuscated or encrypted. Additionally, “hard-coded” values like variable names are easily modified to further evade detection. While unlikely to discover unknown web shells, signature-based network detection can help identify additional infections of a known web shell. Appendix D provides a collection of signatures to detect network communication from common, unmodified or slightly modified web shells sometimes deployed by attackers. This list is also maintained at

Unexpected Network Flows

In some cases, attackers use web shells on systems other than web servers (e.g., workstations). These web shells operate on rogue web server applications and can evade file-based detection by running exclusively in memory (i.e., fileless execution). While functionally similar to a traditional Remote Access Tool (RAT), these types of web shells allow attackers to easily chain malicious traffic through a uniform platform. These types of web shells can be detected on well-managed networks because they listen and respond on previously unused ports.

Additionally, if an attacker is using a perimeter web server to tunnel traffic into a network, connections would be made from a perimeter device to an internal node. If administrators know which nodes on their network are acting as web servers, then network analysis can reveal these types of unexpected flows. A variety of tools including vulnerability scanners (e.g., Nessus®), intrusion detection systems (e.g., Snort®), and network security monitors (e.g., Zeek™ [formerly “Bro”]) can reveal the presence of unauthorized web servers in a network. Maintaining a thorough and accurate depiction of expected network activity can enhance defenses against many types of attack. The Snort® rule in Appendix E and maintained at can be tailored for a specific network to identify unexpected network flows.
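Both indicators can be roughed out over flow records, assuming each record is a (source, destination, destination port) tuple and an accurate inventory of authorized web servers is maintained. The internal address prefix and port list are assumptions for the example:

```python
def unexpected_web_flows(flows, known_web_servers, internal_prefix="10."):
    """Flag flows where a known web server initiates a connection to an
    internal node (possible tunneling/pivot) or where a host not in the
    web server inventory answers on a common web port (rogue server)."""
    findings = []
    for src, dst, dst_port in flows:
        if src in known_web_servers and dst.startswith(internal_prefix):
            findings.append(("pivot?", src, dst, dst_port))
        if dst_port in (80, 443, 8080) and dst not in known_web_servers:
            findings.append(("rogue-server?", src, dst, dst_port))
    return findings
```

In practice this logic lives in the network security monitor or SIEM rather than a standalone script, but the inventory-versus-observed comparison is the same.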

Endpoint Detection and Response (EDR) Capabilities

Some EDR and enhanced host logging solutions may be able to detect web shells based on system call or process lineage abnormalities. These security products monitor each process on the endpoint including invoked system calls. Web shells usually cause the web server process to exhibit unusual behavior. For instance, it is uncommon for most benign web servers to launch the ipconfig utility, but this is a common reconnaissance technique enabled by web shells. EDRs have different automated capabilities and querying interfaces, so organizations are encouraged to review documentation or discuss web shell detection with the vendor. Appendix F illustrates how Sysmon’s enhanced process logging data can be used to identify process abnormalities in a Microsoft® Windows® environment. Similarly, Appendix G illustrates how auditd can be used to identify process abnormalities in a Linux® environment. Guidance for identifying process abnormalities in these environments is also maintained at
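The ipconfig example above can be approximated over parsed process-creation events (such as Sysmon process-creation logs on Windows or auditd execve records on Linux). The process and utility lists below are illustrative, not the CSI's appendix content:

```python
# Illustrative lists: reconnaissance utilities rarely launched by a
# benign web server, and common web server process names.
SUSPICIOUS_CHILDREN = {"ipconfig", "whoami", "net", "netstat", "ping"}
WEB_SERVER_PROCS = {"w3wp.exe", "httpd", "nginx", "php-fpm", "tomcat"}

def flag_process_anomalies(events):
    """Given process-creation events with "parent" and "child" image
    names, flag recon utilities spawned by web server processes."""
    return [e for e in events
            if e["parent"] in WEB_SERVER_PROCS
            and e["child"].lower().removesuffix(".exe") in SUSPICIOUS_CHILDREN]
```

A real deployment would express this as an EDR or SIEM query over the product's own event schema rather than a script.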

Other Anomalous Network Traffic Indicators

Web shell traffic may exhibit other detectable abnormal characteristics depending on the attacker. In particular, unusually large responses (possible data exfiltration), recurring off-peak access times (possible non-local work schedule), and geographically disparate requests (possible foreign operator) could indicate URIs of potential web shells. However, these characteristics are highly subjective and likely to flag many benign URIs. Administrators may choose to implement these detection analytics if the baseline characteristic is uniform for their environment.
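The large-response indicator, for instance, could be sketched as a simple z-score filter over logged response sizes. The threshold is an assumption and, as the CSI warns for all of these indicators, would need tuning to each environment's baseline:

```python
from statistics import mean, stdev

def unusually_large_responses(records, z_threshold=3.0):
    """Flag requests whose response size is far above the server's norm,
    a possible sign of data exfiltration. `records` are parsed log
    entries with a "bytes" field."""
    sizes = [r["bytes"] for r in records]
    if len(sizes) < 2:
        return []
    mu, sigma = mean(sizes), stdev(sizes)
    if sigma == 0:
        return []
    return [r for r in records if (r["bytes"] - mu) / sigma > z_threshold]
```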


Mitigating Actions (PREVENTION)

Preventing web shells should be a priority for both internet-facing and internal web servers. Good cyber hygiene and a defense-in-depth approach based on the mitigations below provide significant hardening against web shells.

Prevention techniques include:

Web Application Update Prioritization

Attackers sometimes target vulnerabilities in internet-facing and internal web applications within 24 hours of a patch release. Update these applications as soon as patches are available. Whenever possible, enable automatic updating and configure a frequent update cadence (at least daily). Deploy manual updates on a frequent basis when automatic updating is not possible. Appendix H lists some commonly exploited vulnerabilities.

Web Application Permissions

Web services should follow the least privilege security paradigm. In particular, web applications should not have permission to write directly to a web accessible directory or modify web accessible code. Attackers are unable to upload a web shell to a vulnerable application if the web service cannot write to the web accessible directory. To preserve functionality, some web applications require configuration changes to save uploads to a non-web accessible area. Prior to implementing this mitigation, consult documentation or discuss changes with the web application vendor.
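As a quick audit related to this mitigation, a script can walk the web accessible directory looking for files any local account could modify. This is a minimal sketch assuming a POSIX permission model; it does not capture ACLs or the web service account's specific rights:

```python
from pathlib import Path

def world_writable_web_files(web_root):
    """List files under the web accessible directory whose mode bits
    allow any local user to modify them -- permissions an attacker
    could exploit to plant or alter a web shell."""
    findings = []
    for p in Path(web_root).rglob("*"):
        if p.is_file() and (p.stat().st_mode & 0o002):
            findings.append(str(p))
    return sorted(findings)
```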

File Integrity Monitoring

If administrators are unable to harden web application permissions as described above, file integrity monitoring can achieve a similar effect. File integrity software can block file changes to web accessible directories or alert when changes occur. Additionally, monitoring software has the benefit of allowing certain file changes but blocking others. For example, if an internal web application handles only Portable Document Format (PDF) files, integrity monitoring can block uploads without a “.pdf” extension. Appendix I provides a set of Host Intrusion Prevention System (HIPS) rules for use with McAfee® Host Based Security System (HBSS) to enforce file integrity on web accessible directories. These rules, implementation instructions, and additional information about file integrity monitoring are maintained at
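The allow-PDF-but-flag-other-uploads example can be sketched as a baseline-and-compare check. This is an illustrative detection-only sketch, not the HIPS blocking rules from Appendix I; the allowed suffix follows the CSI's PDF example:

```python
import hashlib
from pathlib import Path

def snapshot(root):
    """Baseline a directory as a map of relative path -> SHA-256 digest."""
    root = Path(root)
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

def integrity_violations(baseline, current, allowed_new_suffixes=(".pdf",)):
    """Report changes to a web directory, permitting only the expected
    upload type (here, PDF) as new files."""
    violations = []
    for path in current:
        if path not in baseline:
            if not path.endswith(allowed_new_suffixes):
                violations.append(("unexpected new file", path))
        elif current[path] != baseline[path]:
            violations.append(("modified", path))
    for path in baseline:
        if path not in current:
            violations.append(("deleted", path))
    return violations
```

In use, `snapshot` is taken when the directory is in a known-good state and re-taken on a schedule, with `integrity_violations` comparing the two.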

Intrusion Prevention

Intrusion Prevention Systems (IPS) and Web Application Firewalls (WAF) each add a layer of defense for web applications by blocking some known attacks. Organizations should implement these appliances to block known malicious uploads. If possible, administrators are encouraged to implement the OWASP™ Core Rule Set, which includes patterns for blocking certain malicious uploads.

As with any signature-based blocking, attackers will find ways to evade detection, so this approach is only one part of a defense-in-depth strategy. Note that IPS and WAF appliances may block the initial compromise but are unlikely to detect web shell traffic. To maximize protection, security appliances should be tailored to individual web applications rather than using a single solution across all web servers. For instance, a security appliance configured for an organization’s content management system can include application-specific rules to harden targeted weaknesses that should not apply to other web applications. Additionally, security appliances should receive updates to enable real-time mitigations for emerging threats.

Network Segregation

Network segregation is a complex architectural challenge that can have significant benefits when done correctly. Network segregation hinders web shell propagation by preventing connections between unrelated network segments. The simplest form of network segregation is isolating a demilitarized zone (DMZ) subnet to quarantine internet-facing servers. Advanced forms of network segregation use software-defined networking (SDN) to enable a Zero Trust architecture, which requires explicit authorization for communication between nodes. While web shells could still affect a targeted server, network segregation prevents attackers from chaining web shells to reach deeper into an organization’s network. For additional information about network segregation, see Segregate Networks and Functions [7] on

Harden Web Servers

Secure configuration of web servers and web applications can prevent web shells and other compromises. Administrators should block access to unused ports or services. Employed services should be restricted to expected clients if possible. Additionally, routine vulnerability scans can help to identify unknown weaknesses in an environment. Some host-based security systems provide advanced features, such as machine learning and file reputation, which provide some protection against web shells. Organizations should take advantage of these advanced security features when possible.


Mitigating Actions (RESPONSE and RECOVERY)

While some web shells do not persist, running entirely from memory, and others exist only as binaries or scripts in a web directory, still others can be deeply rooted with sophisticated persistence mechanisms. Regardless, they may be part of a much larger intrusion campaign. A critical focus once a web shell is discovered should be on how far the attacker penetrated within the network. Packet capture (PCAP) and network flow data can help to determine if the web shell was being used to pivot within the network, and to where. If such a pivot is cleaned up without discovering the full extent of the intrusion and evicting the attacker, that access may be regained through other channels either immediately or at a later time.
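As a sketch of that pivot analysis, flow records can be filtered for internal hosts the compromised server contacted near the times of known web shell requests. The record shape, time window, and internal address prefixes are assumptions for the example:

```python
def pivot_targets(flows, compromised_server, shell_hits,
                  window=300, internal_nets=("10.", "192.168.")):
    """From flow records (src, dst, timestamp), list internal hosts the
    compromised web server contacted within `window` seconds of a known
    web shell request -- candidates for the attacker's next foothold."""
    return sorted({dst for src, dst, ts in flows
                   if src == compromised_server
                   and dst.startswith(internal_nets)
                   and any(abs(ts - hit) <= window for hit in shell_hits)})
```

Every host this surfaces should be examined for its own implants and persistence mechanisms before declaring the incident contained.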

The full guide is available at