Mandated vulnerability assessments and security experts may make facilities less secure.

In the aftermath of September 11 and with the enactment of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (PL 107-188), various water entities (water treatment and wastewater facilities) were designated as critical infrastructure, warranting increased protection.  The Environmental Protection Agency (EPA) was assigned responsibility for developing plans to improve water infrastructure security. 


The first step toward improving security was the mandate to conduct vulnerability assessments.  With over 50,000 critical water entities, the EPA could not conduct these assessments itself and instead required that "self" assessments be completed.  

With the explosive growth of the security market, a plethora of self-proclaimed "experts" emerged to assist larger entities with these vulnerability assessments.  Most of these experts had strong backgrounds in physical security or traditional information technology (IT) security, but no direct experience with, or understanding of, the nuances of daily water facility operations.  Even fewer had any experience with the specific cyber threats and vulnerabilities inherent in SCADA systems, or with the other aspects of operations that affect the availability, reliability, or maintainability (ARM) of the entire system.

The adage "calm seas do not make expert seamen" also applies to security professionals: experience with only one discipline of security (physical, IT, etc.) or against one type of adversary (such as terrorists) does not an expert make.

While proficient at operating the facilities, most of the in-house staff assigned to complete the assessments had no background or experience in security. 


What emerged were two distinct approaches to conducting vulnerability assessments: one completed by external security experts with limited knowledge of the systems they were assessing, the other completed by system experts with limited knowledge of security and protection.  This divide proved problematic: both approaches failed to identify several critical vulnerabilities and threats, incorrectly assumed existing protections were adequate, and offered no specific roadmap for improving security cost-effectively over time. The mere fact that a vulnerability analysis had been completed (albeit poorly) gave the asset owner a false sense of security.

The shortfalls arising from the two types of assessments were highlighted in an April 2007 case study of a small municipal water treatment facility.  The entity under review provided the results of its previous vulnerability assessment and allowed site visits and interviews to assess its current levels of security and make recommendations for improvement.  The utility was chosen because it had used three different self-assessment methods and tools to complete the initial self-assessment, all without the assistance of a security or protection professional.


The case study highlights some of the pitfalls that occur when a vulnerability assessment is conducted by a person knowledgeable about the system but not intimately knowledgeable about security.

The results of the case study revealed that over 70 percent of the self-assessment answers conflicted with what a security professional actually observed. For example, computer/server firewalls were installed but not properly configured (usually left at highly vulnerable factory presets), wireless networks were not properly secured, and password policies and procedures did not exist (for business as well as SCADA terminals). Most of the self-assessments focused on physical security, where significant deviations were also noted: fencing was present but contained significant holes or breaches, and surveillance systems were present but placed in the wrong locations. Additionally, several high-risk vulnerabilities were discovered that had not been addressed in any of the previous vulnerability assessments.  Many of these concerned the ease with which sensitive customer data could be obtained and used for identity theft.
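The gap between claimed and observed security can be tabulated directly. The sketch below is a hypothetical illustration of how a reviewer might reconcile self-assessment answers against on-site observations and compute the conflict rate; the item names and findings are illustrative, not data from the study.

```python
# Hypothetical sketch: reconciling self-assessment claims against
# on-site observations. Item names and values are illustrative only.

claims = {
    "firewall_configured": True,        # self-assessment said firewalls were set up
    "wireless_secured": True,
    "password_policy_exists": True,
    "fencing_intact": True,
    "cameras_positioned_correctly": True,
}

observed = {
    "firewall_configured": False,       # left at factory presets
    "wireless_secured": False,
    "password_policy_exists": False,    # none for business or SCADA terminals
    "fencing_intact": False,            # significant breaches found
    "cameras_positioned_correctly": False,
}

def conflict_rate(claims, observed):
    """Fraction of self-assessment answers contradicted by observation."""
    conflicts = sum(1 for item, claimed in claims.items()
                    if observed.get(item) != claimed)
    return conflicts / len(claims)

rate = conflict_rate(claims, observed)
print(f"{rate:.0%} of self-assessment answers conflicted with observation")
```

Even a crude tally like this makes the divergence visible, which is precisely what the filed-and-forgotten paper assessments failed to do.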

The three main areas requiring attention were deficiencies in physical security, deficiencies in information technology security (for both business and SCADA systems), and the absence of adequate policies and procedures.  Common trends included crediting specific security measures as being in place when they were in fact inadequate, and being unaware of specific risks and thus of the need to implement additional countermeasures.  For example, fencing was credited as in place at all locations when in fact all of it contained significant breaches, and management was unaware that sensitive customer information (names, addresses, Social Security numbers, credit card numbers, etc.) was at risk of compromise and had implemented no countermeasures to safeguard this data.
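One basic countermeasure for the exposed customer data would be to mask sensitive fields before they are displayed or logged. The sketch below is a minimal illustration under assumed field names; a real deployment would also need encryption at rest and access controls.

```python
# Illustrative sketch: masking sensitive customer fields before display
# or logging. Field names ("ssn", "card") are hypothetical examples.

def mask(value, visible=4):
    """Replace all but the last `visible` characters with asterisks."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

record = {"name": "J. Smith", "ssn": "123-45-6789", "card": "4111111111111111"}
safe = {k: (mask(v) if k in ("ssn", "card") else v) for k, v in record.items()}
print(safe)
```

Measures of this kind are cheap relative to the identity-theft exposure the case study identified, but they only get adopted once management is aware the risk exists.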

The self-assessment did not accurately portray the security posture of the organization under review. A common thread in both versions of the assessment was that, once completed, the report was simply filed and no remediation or ongoing protection activities occurred.  The self-assessment exercise was viewed as a regulatory paper drill rather than a security tool. Often, the items identified for enhanced protection during the assessment remained unprotected.  As a final note, after conducting an initial vulnerability assessment during the 2002-2004 timeframe, many water entities never conducted follow-up reviews or updates to the assessments. 

By completing a self-assessment, many managers thought they were finished with the security requirements, and it is this false sense of completion that leaves the facilities more vulnerable than ever. Moreover, assistance from the wrong experts is akin to calling a plumber for an electrical problem. Security is a continuous process, not a quick fix.