Organizations face complex decisions when evaluating which products will improve network security, and many factors feed into the choice. Next-generation firewalls are a critical piece of network security, so they need to be evaluated carefully before purchase. A next-generation firewall represents the latest evolution of the firewall: it takes the traditional firewall functions of packet filtering, network and port address translation, and stateful inspection, and adds further filtering, inspection, and prevention of network traffic. How well a firewall performs while executing these functions is important in determining which product an organization should select. So how do you compare the performance of firewalls?
When comparing firewall performance, there are several places an organization could look for numbers. It could go to the product vendors, ask for the performance of their products directly, and try to compare. One problem arises with this approach: the values the vendors provide may not be an “apples-to-apples” comparison but an “apples-to-oranges” one. For example, vendors might report the number of packets through an interface. One vendor might count packets sent with a small payload, while a second counts packets sent with a 64 KB payload. The results for these two devices would differ wildly based on these testing methods, making comparison of vendor-supplied numbers almost impossible.
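The skew is easy to quantify. A minimal sketch (the packet rate and payload sizes below are hypothetical, not vendor data) shows how two devices reporting the same packets-per-second figure can imply wildly different throughput:

```python
# Illustrative only: how payload size skews "packets per second" comparisons.
# The 10_000 pps rate and both payload sizes are hypothetical values.

def throughput_mbps(packets_per_second: int, payload_bytes: int) -> float:
    """Convert a packet rate and per-packet payload into megabits per second."""
    return packets_per_second * payload_bytes * 8 / 1_000_000

pps = 10_000  # suppose both vendors report the same packet rate

small = throughput_mbps(pps, 64)         # vendor A: 64-byte payloads
large = throughput_mbps(pps, 64 * 1024)  # vendor B: 64 KB payloads

print(f"64-byte payloads: {small:.2f} Mbps")   # 5.12 Mbps
print(f"64 KB payloads:   {large:.2f} Mbps")   # 5242.88 Mbps
print(f"ratio: {large / small:.0f}x")          # 1024x
```

Identical packet counts, a three-orders-of-magnitude gap in actual data moved: exactly the apples-to-oranges problem described above.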
Another option for an organization attempting to compare firewall performance is to run the testing on its own. First, the organization would need to figure out how to benchmark a firewall. Creating test cases from scratch is inefficient, so it would be best to find existing requirements for benchmarking a firewall.
The Benchmarking Methodology Working Group at the Internet Engineering Task Force (IETF) produced RFC 3511, “Benchmarking Methodology for Firewall Performance,” which documents methods for performance testing of a firewall, such as HTTP transaction rate, transfer time, and throughput. These are useful for traditional firewalls but don’t cover next-generation firewall metrics: there are no defined methods for benchmarking the Intrusion Detection or Prevention features a modern firewall needs. Individual organizations would have to create their own tests and make sure they cover every area of performance that might matter. That leaves potential holes in the testing, since a private test plan never gets the wide review an IETF document receives as it goes through the standards process. Self-testing is also an inefficient use of resources, with every IT department repeating the same work for internal use.
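To make the HTTP transaction-rate idea concrete, here is a minimal sketch of such a measurement. It benchmarks a local stand-in server; in a real RFC 3511-style test the requests would traverse the firewall under test, and a real tool would use many concurrent connections. The function names and parameters are illustrative, not part of any standard tool:

```python
# Hedged sketch: sequential HTTP transaction-rate measurement against a
# local stub server standing in for traffic through a device under test.
import http.server
import threading
import time
import urllib.request

def start_stub_server() -> int:
    """Start a trivial HTTP server on an ephemeral port; return the port."""
    server = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

def measure_transaction_rate(url: str, duration_s: float = 1.0) -> float:
    """Issue sequential HTTP GETs for duration_s; return transactions/second."""
    deadline = time.monotonic() + duration_s
    count = 0
    while time.monotonic() < deadline:
        with urllib.request.urlopen(url) as resp:
            resp.read()  # complete the transaction before counting it
        count += 1
    return count / duration_s

port = start_stub_server()
rate = measure_transaction_rate(f"http://127.0.0.1:{port}/", duration_s=1.0)
print(f"{rate:.0f} HTTP transactions/second")
```

Even this toy version shows why methodology matters: the measured rate depends on request concurrency, payload size, and connection reuse, none of which are visible in a single headline number.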
Third-party lab testing is a solution that allows one lab to run the testing and deliver a report to a product’s vendor, who can then distribute the report to customers so organizations can evaluate the results. Using third-party reports that allow comparisons minimizes the amount of testing that needs to be done. These labs create test cases, run them against products from multiple vendors, and produce reports with security performance metrics. Because the labs are typically neutral, organizations have more confidence that the results were obtained fairly. The one drawback to third-party testing is that it is often closed testing, which causes problems for both the vendor and the organization.
Closed testing is when testing methodologies aren’t available either to the vendor of the product being tested or to the organizations that need the results. For product vendors, this leads to a certain amount of surprise when results are revealed: internal testing often produces values that don’t match what the closed third-party testing reports. The gap comes from not being involved in the testing and from not being able to see the methodology that was used. Vendors understand which configurations optimize performance for a given environment and might try to engineer the product to get better results. “Stacking the deck” means a vendor would allow only testing that shows favorable results; even so, it’s still important to get vendor input on how performance testing is executed. To prevent stacking the deck, organizations need access to the testing methodologies as well. That access lets an organization see what is tested and how it’s tested, and confirm the tests cover the performance and security needs of its IT departments. An organization might notice a needed improvement when reviewing the test methodology for Common Vulnerabilities and Exposures (CVE) detection: products easily detect CVEs when only the attack traffic is sent through the box, but what happens in the more realistic case where the box is under load when the CVEs are sent? Does it continue to detect them, or does it just drop the attacks? These are examples of how open testing helps the entire community when making the hard choices for improving network security.
NetSecOPEN is a collection of product vendors, third-party test labs, and other organizations whose mission is to work with industry to create well-defined, open, and transparent standards that reflect real-world security needs. Its first project focuses on open performance testing, allowing vendors, organizations, and third-party testers to collaborate on test methodologies. These methodologies are being brought to the IETF Benchmarking Methodology Working Group to address the lack of benchmarking documents for next-generation firewalls. Open testing programs like this will finally give organizations “apples-to-apples” comparisons.