When security executives look closely at their security systems, they may discover that a major source of risk lies within the very structure of those systems: the server.

The typical architecture for a security system includes a host layer where the primary users interact with the system. These host terminals connect to a server layer in a client/server relationship, with the server serving as the heart of the system.

A controller layer links the inputs, outputs, card readers and other hardware components of the system to the server. The controllers are the managing devices in the field; they carry the database and the intelligence for the decisions made at the device layer.

The Crux of the Situation

Some systems feature a network layer that connects the controller to the wide area network and to the server. Other designs rely on Ethernet connectivity at this level.

The problem with this configuration, and the reason it invites complications, is that the server sits between the users and all the networked devices that make up the security system. As a result, the server can become a single point of failure capable of disrupting the entire system. We can all relate to server failures and the problems they cause: the office network goes down and our computers can no longer reach the files we need, or the email server crashes and messages are lost. When the server operating a security system goes down, at best it halts the operator's ability to view the system; at worst, it can halt the system's functionality and operation altogether.
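
To make that dependency concrete, the short Python sketch below models the client/server path in simplified, hypothetical terms (the class and method names are illustrative, not any vendor's API): every host command must pass through the server object, so taking the server offline cuts the hosts off from every controller.

    # A minimal sketch (hypothetical names, not a specific product) of the
    # client/server topology described above: every host request must pass
    # through the server before it reaches a controller, so a server outage
    # blocks the entire path.

    class ServerDownError(Exception):
        pass

    class Server:
        def __init__(self):
            self.online = True
            self.controllers = {}          # controller_id -> Controller

        def forward(self, controller_id, command):
            if not self.online:
                # The whole system stalls here: hosts cannot see or command
                # any controller while the server is down.
                raise ServerDownError("server offline - no path to controllers")
            return self.controllers[controller_id].execute(command)

    class Controller:
        def execute(self, command):
            return f"controller executed: {command}"

    class HostWorkstation:
        """Operator terminal; in this architecture it is only a client."""
        def __init__(self, server):
            self.server = server

        def unlock_door(self, controller_id):
            return self.server.forward(controller_id, "unlock door 1")

    if __name__ == "__main__":
        server = Server()
        server.controllers["lobby"] = Controller()
        host = HostWorkstation(server)

        print(host.unlock_door("lobby"))   # works while the server is up

        server.online = False              # simulate the single point of failure
        try:
            host.unlock_door("lobby")
        except ServerDownError as err:
            print("operator view and control lost:", err)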

An alternative configuration offering less risk and more flexibility is a non-server-based system: a network of equal, independent workstations sharing the same database. Eliminating the server and moving access and functions to the workstation level reduces the system's vulnerability to attack or failure. With multiple hosts there is no single point of failure, and the malfunction of one host will not affect the other hosting workstations, because each operates fully independently of the others. In an emergency, the independent hosts can be quickly relocated to any point on the network, lending themselves to rapid redeployment.
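
The sketch below, again purely illustrative Python rather than a description of any particular product, shows the idea of equal peers: each workstation holds its own full copy of the shared database and replicates changes to its peers, so losing one host leaves the others fully able to answer.

    # A minimal sketch (hypothetical design) of the non-server architecture:
    # each host workstation keeps its own full copy of the shared database,
    # and losing one host does not affect the others.

    class PeerHost:
        def __init__(self, name):
            self.name = name
            self.online = True
            self.database = {}             # full local copy of the shared data
            self.peers = []

        def join(self, peers):
            self.peers = [p for p in peers if p is not self]

        def update(self, key, value):
            """Apply a change locally, then replicate it to every live peer."""
            self.database[key] = value
            for peer in self.peers:
                if peer.online:
                    peer.database[key] = value

        def lookup(self, key):
            return self.database.get(key)

    if __name__ == "__main__":
        hosts = [PeerHost("lobby-desk"), PeerHost("control-room"), PeerHost("tenant-5")]
        for h in hosts:
            h.join(hosts)

        hosts[0].update("cardholder:1001", "access: all doors")

        hosts[1].online = False                    # one workstation fails...
        print(hosts[2].lookup("cardholder:1001"))  # ...the others still answer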

Remote monitoring from across town, or even across the country, is also possible with this system. In a multi-tenant building, tenants can be provided with their own hosts for self-management, while still being restricted from accessing the entire system.

This architecture provides further protection from system failure. Because peer-to-peer and event-initiated control is distributed to the controller devices, no control functions in the system rely upon the host. Even if the host layer is unavailable, these functions are unaffected.
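
A hypothetical example of event-initiated control at the controller level is sketched below; the rule format and names are assumptions made for illustration. The access decision and the linked output action both execute locally on the controller, with no host or server consulted.

    # A minimal sketch (hypothetical rule format) of event-initiated control
    # living in the controller layer: the access decision and the linked
    # output action run locally, with no host involved.

    class DoorController:
        def __init__(self, authorized_cards):
            self.authorized_cards = set(authorized_cards)   # local database copy
            self.event_links = []                           # (event, action) pairs

        def link(self, event, action):
            """Event-initiated link, e.g. 'denied' -> trigger a local output."""
            self.event_links.append((event, action))

        def card_presented(self, card_id):
            event = "granted" if card_id in self.authorized_cards else "denied"
            for linked_event, action in self.event_links:
                if linked_event == event:
                    action()
            return event

    if __name__ == "__main__":
        controller = DoorController(authorized_cards={"1001", "1002"})
        controller.link("denied", lambda: print("local output: annunciate alarm"))

        # No host or server is consulted at decision time.
        print(controller.card_presented("1001"))   # granted
        print(controller.card_presented("9999"))   # denied -> alarm output fires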

Further protection at the controller level can be achieved by providing comprehensive support for battery backup and power management, with full alarm and self-monitoring capability. During a power loss, for example, system functionality shifts to the battery-backed controller layer. And because the host layer is made up of non-client workstations, those workstations can easily be redeployed to locations where power is available.
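
The brief sketch below illustrates the kind of power self-monitoring described here, with purely illustrative thresholds and field names: the controller keeps operating on battery and reports its own power state as alarm events.

    # A minimal sketch (hypothetical thresholds and field names) of controller
    # power self-monitoring: mains loss and low battery are raised as alarms
    # while the controller keeps running.

    class PowerMonitoredController:
        def __init__(self):
            self.on_battery = False
            self.battery_percent = 100
            self.alarms = []

        def mains_lost(self):
            self.on_battery = True
            self.alarms.append("AC power failure - running on battery")

        def battery_reading(self, percent):
            self.battery_percent = percent
            if self.on_battery and percent < 20:          # illustrative threshold
                self.alarms.append("battery low - %d%% remaining" % percent)

    if __name__ == "__main__":
        ctrl = PowerMonitoredController()
        ctrl.mains_lost()
        ctrl.battery_reading(15)
        print(ctrl.alarms)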

The peer-to-peer structure also lowers installation and maintenance costs. To protect against server failure, client/server systems often use a dual-redundant server configuration. Servers, however, are expensive, so this arrangement drives up installation costs. System expansion poses an additional problem: as a system grows, the server, or servers, must be scaled up accordingly. Once again this adds to the cost, and the cost rises even more if a dual configuration is used.

With a true peer-to-peer system, increasing availability is as simple as adding another workstation. No server upgrades or RAID (redundant array of independent disks) farms are necessary, nor are there the escalating maintenance costs that go with them. Creating redundancy in this type of system is also simple and economical: users simply provide multiple interfaces from different hosts, each backing up the others.
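
As a rough illustration of how redundant host interfaces back each other up, the sketch below (a hypothetical helper, not a product feature) simply walks a list of equivalent hosts and uses the first one that answers; adding a workstation adds one more backup path to that list.

    # A minimal sketch of the redundancy model: several hosts expose the same
    # interface, and a caller tries them in turn until one responds.

    class Host:
        def __init__(self, name, online=True):
            self.name = name
            self.online = online

        def acknowledge_alarm(self, alarm_id):
            if not self.online:
                raise ConnectionError(f"{self.name} unreachable")
            return f"{self.name} acknowledged alarm {alarm_id}"

    def first_available(hosts, alarm_id):
        """Walk the redundant host interfaces until one answers."""
        for host in hosts:
            try:
                return host.acknowledge_alarm(alarm_id)
            except ConnectionError:
                continue
        raise RuntimeError("no host available")

    if __name__ == "__main__":
        hosts = [Host("primary", online=False), Host("backup-1"), Host("backup-2")]
        print(first_available(hosts, alarm_id=42))   # backup-1 answers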

Non-server-based systems still provide the user's information technology department with critical network information. Nor do they necessarily consume massive amounts of bandwidth, as some critics claim. Such systems can be managed, and their bandwidth regulated, with standard network management tools, and may require as little as 186 bytes when performing system downloads.
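
The format behind the 186-byte figure is not specified here, but the sketch below shows, with a purely hypothetical record layout, why a system download can be so small: a cardholder update packs into a compact, fixed-size binary message.

    # A minimal sketch of a compact download message. The record layout is
    # purely hypothetical (the 186-byte format is not described here); it
    # only shows that a cardholder update fits in a small binary record.
    import struct

    # hypothetical layout: card number (4 bytes), facility code (2),
    # access-level bitmap (4), activation and expiry as unix times (4 + 4)
    RECORD = struct.Struct("!IHIII")

    def pack_cardholder(card, facility, levels, start, end):
        return RECORD.pack(card, facility, levels, start, end)

    if __name__ == "__main__":
        msg = pack_cardholder(card=1001, facility=12, levels=0b1011,
                              start=1_700_000_000, end=1_731_536_000)
        print(len(msg), "bytes")    # 18 bytes for this illustrative record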

Eliminating the server frees the system to operate independently, allowing alarms and responses to be routed to their final monitoring station faster and more securely. While the client/server configuration has served the industry well in the past, technology has continued to advance. It is no longer necessary to rely on an older and increasingly marginal architecture while new technologies create whole new possibilities for system architecture, reporting and control, and ultimately for building security.