AMTSO, the cybersecurity industry’s testing standard community, has published its first Guidelines for Testing of IoT Security Products.

Internet of Things (IoT) security product testing is still in its infancy, AMTSO notes, and the guidelines aim to provide a foundation for independent benchmarking and certification of IoT security solutions.

Compiled with input from both testers and vendors, the guidelines cover principles for testing IoT security products and provide testers with recommendations on test environments, sample selection, testing of specific security functionality, and performance benchmarking.

The Guidelines for Testing of IoT Security Products include:

  1. General principles: All tests and benchmarks should focus on validating the end result and performance of protection delivered instead of how the product functions on the backend. Thus, the guidelines suggest that no difference in rating should be made between products that use, for example, machine learning or manufacturer usage descriptions as long as the outcome is the same.
  2. Sample selection: The guidelines provide guidance for challenges with choosing the right samples for IoT security solution benchmarking. For a relevant test, testers need to select samples that are still active and target the operating systems smart devices are running on. The guidelines also suggest that the samples could be categorized between industrial and non-industrial, with further separation into operating systems, CPU architectures, and severity scores.
  3. Determination of “detection”: IoT security solutions work very differently from traditional cybersecurity products when it comes to detections and actions taken; for example, some solutions will simply detect and prevent a threat without notifying the user. The guidelines suggest using threats with admin consoles that the tester can control, or using devices where the attack will be visible if it is conducted. Another alternative is observing the device ‘under attack’ via network sniffing.
  4. Test environment: In an ideal case, all tests and benchmarks would be executed in a controllable environment using real devices. However, such a setup can be complex, and if the tester decides against using real devices, it is advised that they validate their approach by running the desired scenario with the device’s security functionality disabled and confirming that the attack executes and succeeds. The guidelines also suggest alternatives to real devices, such as a Raspberry Pi configured to mimic a real IoT device, and creating bespoke IoT malware samples (for example, modified Mirai variants) to test detection of previously unseen malware.
  5. Testing of specific security functionality: The guidelines include advice on different attack stages, including reconnaissance, initial access, and execution. They outline the option of testing each stage individually versus running the whole attack chain end to end; the choice should be documented in the testing methodology. The guidelines also suggest considering platform-agnostic testing, as many threats today target multiple architectures and can be used against IoT and non-IoT devices alike.
  6. Performance benchmarking: The guidelines also offer considerations for performance benchmarking, for example suggesting that testers differentiate between use cases such as consumers vs. businesses, and account for how critical latency or reduced throughput is for each protocol, which depends on its purpose.
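The network-sniffing approach to determining “detection” (item 3) can be illustrated with a minimal sketch. This is not part of the AMTSO guidelines; the packet-record format, indicator ports, and threshold are invented for illustration, standing in for what a tester might extract from a capture with a tool such as tshark.

```python
# Minimal sketch (assumption, not from the AMTSO guidelines): deciding whether
# a device "under attack" shows attack traffic in a packet capture, for cases
# where the security product gives no user-visible notification.

MIRAI_STYLE_PORTS = {23, 2323}   # telnet ports commonly scanned by Mirai-like bots
SCAN_THRESHOLD = 5               # distinct targets before we call it scanning


def device_shows_attack_traffic(packets, device_ip,
                                ports=MIRAI_STYLE_PORTS,
                                threshold=SCAN_THRESHOLD):
    """Return True if `device_ip` initiates connections to `threshold` or more
    distinct hosts on the indicator ports -- a rough sign the attack executed,
    i.e., the security product did NOT prevent it.

    `packets` is an iterable of (src_ip, dst_ip, dst_port) tuples.
    """
    targets = {dst for src, dst, port in packets
               if src == device_ip and port in ports}
    return len(targets) >= threshold


# Example: an infected device scanning telnet across many hosts
capture = [("10.0.0.7", f"10.0.1.{i}", 23) for i in range(1, 9)]
print(device_shows_attack_traffic(capture, "10.0.0.7"))  # True: attack visible

# A clean device doing ordinary traffic
clean = [("10.0.0.7", "10.0.0.1", 443), ("10.0.0.7", "10.0.0.1", 53)]
print(device_shows_attack_traffic(clean, "10.0.0.7"))  # False: no scan pattern
```

In a real test, the same check would run twice — once with the product’s protection disabled (to validate that the attack is visible at all) and once with it enabled — mirroring the validation step the guidelines recommend for the test environment.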

“Guidelines for security and privacy, in general, are what drive industry regulations like PCI, HIPAA, and SOX, recognizing the need to protect access to sensitive data and systems in traditional IT environments,” says Tony Goulding, Cybersecurity Evangelist at Delinea. “Similarly, it’s important to protect access to IoT devices that are used in sensitive environments. With no equivalent set of regulations, the AMTSO guidelines represent a step in the right direction to help IoT vendors test their products’ ability to detect and prevent attacks.”

Many IoT devices are managed by the line of business, which normally lacks the staff, training, or budget to achieve true IoT security, and may also lack the budget to replace obsolete yet functional devices, says Bud Broomhead, CEO at Viakoo. “When a device goes end of life from the manufacturer, there are no new security patches, yet threat actors are constantly adding new exploits against them.”

To address this, Broomhead recommends making budget “available to replace IoT devices that have gone EOL is an important process point and should foster regular communication and coordination between the CISO, IT, and IoT line of business owners so that when a crisis hits, lines of communication are already established and functioning.”

Having metrics to guide the program will also help security leaders focus on what most needs improvement, Broomhead suggests. “For example, every organization should track how long it takes to apply an IoT firmware patch, how many IoT devices fail to have certificates updated on time, and if password policies are being enforced on all devices,” he says.

Guidelines for Testing of IoT Security Products, other guidelines, and standard documents are available for download at: