The American Public Transportation Association (APTA) is a standards development organization that is implementing new safety and security standards in the transit industry.


The American Public Transportation Association (APTA) is a standards development organization, as well as a coordinating body for over 4,000 transportation operators in the U.S. and Canada. One of APTA’s latest initiatives is the development of new standards in the area of safety and security to assist its many operator members who run trains, trams, light rail vehicles, buses and ferries throughout North America.

A major part of any safety and security system is the security video system, and the spread of these systems throughout the various modes of transport has led APTA to establish a technical standards working group (TSWG1) to pull together a baseline standard for operators to use when developing and procuring video systems and associated video analytics add-ons.

"The new standard has been in development now for about 18 months and covers all aspects of the security video systems, including camera resolution, frame rates, compression systems as well as recording systems architectures (mobile and Network-based) and wireless connectivity,” said chairman Dave Gorshkov.

A NEW STANDARD

TSWG1, as it is known in APTA, is chaired by Dave Gorshkov from Digital Grape, a specialist technical and management support consultancy, with many years of experience in developing and implementing various kinds of covert and overt surveillance and recording systems.

“The new standard has been in development now for about 18 months and covers all aspects of the security video systems, including camera resolution, frame rates, compression systems as well as recording systems architectures (mobile and Network-based) and wireless connectivity,” said Gorshkov. “When I started to review current security video standards, all I found were a series of outdated ‘recommendations’ with no hard design recommendations or practical performance measures for modern digital systems.”

Operators implementing a new security video system can now use the document to aid in the development of a systems design, including testing procedures, to ensure that the recorded images meet the design requirements.

Areas covered include minimum camera resolution requirements, which are suggested as being no lower than 4CIF. Older systems have tended to use lower-resolution cameras in order to maximize recording times. The result has been that poor-quality images are often recorded on equally poor recording systems, and the only time this is discovered is when a recorded image needs to be reviewed.

To that end, a good deal of emphasis has been placed on a systems design approach: establishing why the security video system is needed, which areas it must cover and what it must achieve. In other words, a “threat analysis” needs to be done before designing the system in order to understand the areas of vulnerability. This leads to the design stage of a security video system with the appropriate number of cameras, of the correct resolution and frame rate, using a compression system that allows for minimum-resolution recordings.

The new standard also suggests that each camera location be specifically designed to achieve an objective of “monitoring, detecting, recognizing or identifying” a target or situation. Each of these “modes” can then be designed with an appropriate camera resolution, lens and frame rate to meet the design requirements. Frame rates are also identified for transit use, with emphasis on higher frame rates when observing moving scenes, such as looking forward from a train or bus. Similarly, when observing specific areas, higher or lower frame rates may be appropriate depending on the threat. Some situations may call for dual rates, say five and 15 frames per second, triggered by an external event such as an alarm or passenger call button.
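As an illustration only, this “monitor, detect, recognize, identify” approach is often expressed as a target size relative to the picture, with a frame rate chosen per camera location. The short Python sketch below shows how such a per-camera design record might be captured; the target-height percentages and the example cameras are assumptions in the style of common UK guidance, not figures taken from the APTA document.

# Illustrative per-camera design record for the monitor / detect /
# recognize / identify approach. The target-height percentages are
# typical of UK-style guidance and are placeholders only; the APTA
# standard may specify different values.

from dataclasses import dataclass

MODE_TARGET_HEIGHT = {      # target height as % of picture height (assumed)
    "monitor":   5,
    "detect":    10,
    "recognize": 50,
    "identify":  120,
}

@dataclass
class CameraSpec:
    location: str
    mode: str            # one of the keys above
    resolution: str      # e.g. "4CIF" as the suggested minimum
    normal_fps: int      # steady-state recording rate
    alarm_fps: int       # elevated rate on alarm or passenger call

    def frame_rate(self, alarm_active: bool) -> int:
        """Dual-rate recording (e.g. 5 / 15 fps) triggered externally."""
        return self.alarm_fps if alarm_active else self.normal_fps

door_camera = CameraSpec("rear door", "identify", "4CIF", 5, 15)
print(MODE_TARGET_HEIGHT[door_camera.mode], "% of picture height")
print(door_camera.frame_rate(alarm_active=True), "fps while alarm is active")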

COMPRESSION COMPREHENSION

Compression systems are also covered within the new document. To allow the various versions of MPEG-2, MPEG-4 (H.263 and H.264) and MJPEG to be used, reference is made to the appropriate “profile and I-frame latency” for effective use of compression, rather than excessive compression.
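As a rough illustration of what “I-frame latency” means in practice for a temporal codec, the short Python sketch below relates frame rate and a chosen maximum I-frame interval to the group-of-pictures (GOP) length; the latency targets in the examples are assumed values, not figures from the standard.

# Minimal sketch relating frame rate, maximum I-frame latency and
# GOP length for temporal codecs such as MPEG-4 or H.264.
# The latency targets used in the examples are assumptions.

def gop_length(frame_rate_fps: float, max_iframe_latency_s: float) -> int:
    """Longest group of pictures that still guarantees a full
    reference (I) frame within the stated latency."""
    return max(1, int(frame_rate_fps * max_iframe_latency_s))

print(gop_length(15, 1.0))   # 15 fps, I-frame at least once a second -> 15
print(gop_length(5, 2.0))    # 5 fps, I-frame at least every 2 seconds -> 10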

Obviously, transmission systems need to be taken into account, as do the latest IP-based cameras available. The Rotakin test system is recommended for testing the overall performance of the security video system’s recordings. (Rotakin was developed by the UK police to test overall security video system performance.)

Recording retention periods are also suggested for images: seven days for mobile vehicles and 31 days for control rooms. However, some agencies will have their own policies in this area, which may differ, up or down, from these guidelines.
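To see what those retention periods mean for storage, a back-of-the-envelope calculation in Python is sketched below; the camera counts and the per-camera bit rate are assumed figures, since actual rates depend on resolution, frame rate, compression and scene activity.

# Rough storage estimate for the suggested retention periods
# (seven days on vehicles, 31 days in control rooms).
# The per-camera bit rate and camera counts are assumptions.

def storage_tb(cameras: int, bitrate_mbps: float, days: int) -> float:
    """Terabytes needed to retain continuous recordings."""
    seconds = days * 24 * 3600
    total_bits = cameras * bitrate_mbps * 1_000_000 * seconds
    return total_bits / 8 / 1e12   # bits -> bytes -> terabytes

print(f"{storage_tb(8, 2.0, 7):.2f} TB")    # 8 onboard cameras, ~1.21 TB
print(f"{storage_tb(64, 2.0, 31):.1f} TB")  # 64 control-room cameras, ~42.9 TB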

Security of the image and its “chain of evidence” is not forgotten: digital signatures and computer hashing are recommended rather than image watermarking, which interferes with the image itself.
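As one way such a hashing scheme might work (an assumption, not the standard’s prescribed method), the Python sketch below hashes each recorded segment and chains the digests so that any later alteration or removal of a segment is detectable; in a real system the final digest would be digitally signed by the recorder rather than simply printed.

# Sketch of hash-based evidence integrity: hash each recorded segment,
# then chain the digests. File names are hypothetical; the final digest
# would normally be covered by a digital signature.

import hashlib

def segment_digest(path: str) -> str:
    """SHA-256 digest of one recorded video segment, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def chain_digest(segment_paths: list[str]) -> str:
    """Running hash over an ordered list of segment digests."""
    running = hashlib.sha256()
    for path in segment_paths:
        running.update(segment_digest(path).encode())
    return running.hexdigest()

# Example with hypothetical segment files:
# print(chain_digest(["cam01_0800.mp4", "cam01_0805.mp4"]))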

Overall, the new standard is a comprehensive guide to designing, developing and operating a security video system. The transit-specific aspects are not overbearing, so any security video designer preparing installation specifications should consider using this document. Video analytics will be the next area of standards that TSWG1 drafts, and that document will use the security video standard as a baseline for various types of software, including facial recognition, abnormal behavior detection and left-package detection.

SIDEBAR: Compression 101

The method of compression and decompression plays an essential role in the transmission, display, storage and retrieval of security video as well as with video analytics.

Security Magazine asked Jason Spielfogel of IQinVision to present his view. Here is what he shares with you and his own staff.

Temporal compression techniques (MPEG-4, H.263 and H.264) provide advantages over “frame-by-frame” compression techniques (the MJPEG standard) when there is a fixed background and/or not a lot of motion in the scene. In these applications, they can compress video significantly (producing small file sizes) with very little loss of image quality.

Temporal compression also makes it easier to synchronize audio with video.

Temporal compression offers little or no advantage over frame-by-frame compression when there is a lot of motion in the scene, when the background is changing (as with a mechanical PTZ camera), or when anything higher than D1 (0.4 megapixel) resolution is required. People often mistakenly assume frame-by-frame techniques are bandwidth hogs. However, a competent network designer will always take worst-case scenarios into account and must consider the impact of heavy motion on both temporal and frame-by-frame techniques; in that case, both consume about the same bandwidth, so neither has a real bandwidth advantage.
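The worst-case point can be made concrete with a purely illustrative model in Python; the frame size, frame rate and the way inter-frame savings shrink with motion are all assumptions for the sake of the example, not measured figures.

# Purely illustrative bandwidth model: MJPEG bandwidth is roughly
# frame size x frame rate and stays flat, while a temporal codec's
# bandwidth climbs toward a similar figure as motion increases.
# All numbers here are assumed.

def mjpeg_mbps(frame_kb: float, fps: float) -> float:
    """Frame-by-frame: every frame is a full image of similar size."""
    return frame_kb * 8 * fps / 1000

def temporal_mbps(frame_kb: float, fps: float, motion: float) -> float:
    """Temporal: assume inter-frames cost ~10% of a full frame in a
    static scene and approach full size as motion goes to 1.0."""
    return frame_kb * 8 * fps * (0.1 + 0.9 * motion) / 1000

for motion in (0.0, 0.5, 1.0):
    print(f"motion={motion:.1f}  MJPEG={mjpeg_mbps(30, 15):.1f} Mbps  "
          f"temporal~{temporal_mbps(30, 15, motion):.1f} Mbps")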

Frame-by-frame techniques, on the other hand, deliver a consistent file size, making it easier to predict bandwidth. They also provide higher image quality, as they don’t suffer from the “compression artifacts” or “compression blur” found with temporal compression when there is motion. This allows frame-by-frame techniques like MJPEG to deliver much higher quality and more consistent images in scenes with a lot of motion or when a mechanical PTZ camera is being used.

A final issue with MPEG-4 is standardization.

While there is a published standard for MPEG-4, in reality the standard is never 100 percent adhered to, so most MPEG-4 implementations are not alike. Companies that try to integrate with them have to do extra work and, in some cases, pay a licensing fee. Since MJPEG is a single, strictly adhered-to standard, it is much more easily integrated into network video recorders (NVRs) and other systems.