There seems to be a lack of standards in the digital video recorder (DVR) sector. While the marketplace perpetuates some of the confusion, other misunderstandings stem from the terminology itself and from the evolution of video recording.

Most DVR makers talk of systems offering 160, 240 or 480 frames per second (fps). However, most end users who have worked with analog recorders know that broadcast quality does not exceed 30 frames per second per camera.

As security operations continue to grapple with the shift from analog to digital, many will need to understand and compare past technology with current offerings. How will an 8-channel, 480 fps DVR perform compared to a multiplexer and VCR? What is the appropriate quality measurement?

Come to think of it, what is a frame, and why is it 30 fps in the first place?

A frame is a term that has a long history in video and movie production but does not really translate well to the DVR market. Nevertheless, it is a term that does not seem to be going away any time soon.

Calculating frames

To understand how a frame is calculated, it is helpful to understand the term’s roots. It originally referred to an actual frame, or single picture, on a movie reel.

Broadcasting technology was the first development to complicate the matter as early technology could not display an entire frame at once. The nature of the technology required that a picture be displayed in alternating groups of lines, or fields, which were determined by broadcast frequency.

The 60 Hz AC power common to North America would, at that time, create “hum bars” scrolling down the television screen unless the field rate matched the power-line frequency. This meant that televisions needed to display 60 fields per second. Broadcasters treated each field as roughly half of a frame, and that measurement became the standard: video signals were sent out at 30 fps, and a field was thereafter defined as exactly one half of a frame.

This process of displaying alternating lines, or fields, is known as interlacing. But while interlacing fixed the strobe effect that often occurred with early television sets, it created a problem when translating frames from 24 fps motion pictures to 30 fps broadcast television, and the problem snowballed when translating to digital formats.

Back to the original question: If broadcast quality, real-time recording is 24, 25, 29.97 or 30 fps, why talk about a system with 480 fps?

One answer is that this number represents the total number of frames that the system produces when displaying 16 cameras. But can systems really deliver this kind of performance? Most cannot. But to show why, it is necessary to understand how video passes into and through a computer, starting with the capture card.

Capture cards

Digital recorders use two basic types of capture cards: real-time and non real-time. Most DVR systems today use the latter type of card, which is designed to capture video and convert the analog signal to digital. Once captured and converted, the data is sent to the system’s CPU, where it is compressed.

Because these recorders rely on the CPU to process and compress the video, they tend to quickly “max out” the CPU’s processing power and thus limit the number of cameras and frames that they can record. These capture cards have one major benefit: They are cheap.

Non real-time capture cards have many different configurations, or hardware structures. The configurations can be summed up as either shared or independent. A single card with four camera inputs shares a data path among all four cameras. That is, the video signal from each camera passes through a multiplexer, where it is combined into one data stream. Then it passes to the video codec, where it is digitized and sent through the Bus and on to the PC processor, where it gets compressed.

In any DVR, all data passes through the Bus, and motherboard makers go to great lengths to design better ways to manage more data along this path. They compete on their ability to do it faster and more efficiently. But even with today’s lightning-fast Bus speeds, there still are constraints.

For instance, a typical board can support up to 133 MB per second, but it can typically only handle about 80 MB per second with any reliability. Sixteen cameras recording 30 fps at 320 x 240 would produce a data stream greater than 105 MB per second. Add that to the other applications and functions managed by a PC, and it becomes clear that such a massive influx of data would quickly overwhelm the machine. With DVRs recording multiple cameras over shared data streams, the Bus is a constraint that prevents the recorder from capturing high frame rates and/or high-quality video.
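The arithmetic behind that figure can be sketched as follows. This is an estimate, not a vendor specification; it assumes uncompressed 24-bit color (3 bytes per pixel), which reproduces the roughly 105 MB/sec cited above:

```python
# Estimated raw data rate for 16 cameras at 320 x 240, 30 fps,
# assuming uncompressed 24-bit color (3 bytes per pixel).
width, height = 320, 240
bytes_per_pixel = 3
fps = 30
cameras = 16

bytes_per_second = width * height * bytes_per_pixel * fps * cameras
megabytes_per_second = bytes_per_second / (1024 * 1024)
print(round(megabytes_per_second, 1))  # ~105.5 MB/sec -- well above the
                                       # ~80 MB/sec a typical Bus handles
                                       # reliably
```

Different color formats change the constant (2 bytes per pixel for YUV 4:2:2, for example), but the conclusion is the same: sixteen uncompressed streams at full frame rate exceed what the Bus can reliably carry.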

Independent path designs

Obviously, an easy means of working around this problem is to limit the number of camera inputs per video capture card. To do this properly, there are cards that produce independent data paths for each camera. Each card captures one camera and converts the analog signal to digital. After passing through the video codec, the data stream moves through the Bus and on to the processor where the video is compressed.

Using independent data paths, some systems have achieved 30 fps. However, only high-end recorders have Bus speeds and processors fast enough to achieve this capture rate. In fact, when recording at 30 fps, current PC processing technology limits the number of cameras to eight. Even then, system performance can be compromised when either the Bus or processing capacity is reached.
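The same back-of-the-envelope arithmetic (again assuming uncompressed 24-bit color, an assumption rather than a measured figure) suggests why eight cameras fit where sixteen did not: the raw data stays comfortably under a typical Bus's reliable capacity of about 80 MB/sec, leaving the compression load on the processor as the remaining limit.

```python
# Raw data rate for eight cameras at full 30 fps, 320 x 240,
# assuming uncompressed 24-bit color (3 bytes per pixel).
cameras = 8
bytes_per_second = 320 * 240 * 3 * 30 * cameras
mb_per_second = bytes_per_second / (1024 * 1024)
print(round(mb_per_second, 1))  # ~52.7 MB/sec, within Bus limits
```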

A more recent and innovative solution is to split the data path for recording and monitoring. These newer cards take a single input for a camera, convert each analog stream to digital, and then split the path. One data stream goes directly to the Bus and onto the PC processor to be compressed. The other data stream goes first to a video scaler, then to the Bus and onto the PC processor for compression.

The reason for the dual stream is to provide “real-time” video, even if that video is reduced in quality. What these cards do differently from the systems in the examples above is record the first data stream at 320 x 240, but at only 240 fps. The second data stream goes to the scaler, where it is reduced to 160 x 120; however, it maintains the 480 fps frame rate. This allows the DVR to deliver a fast frame rate to the monitor but record at a slower one. Of course, the image quality of the fast stream might be less than acceptable.
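Under the same uncompressed-24-bit assumption used earlier (an illustrative figure, not a card specification), the two streams together land just under the roughly 80 MB/sec a typical Bus handles reliably, which suggests why the split is made at exactly these rates:

```python
bytes_per_pixel = 3   # assumed uncompressed 24-bit color

# Recording stream: full 320 x 240 resolution at 240 fps total
record_stream = 320 * 240 * bytes_per_pixel * 240

# Monitoring stream: scaled down to 160 x 120 at the full 480 fps total
monitor_stream = 160 * 120 * bytes_per_pixel * 480

total_mb = (record_stream + monitor_stream) / (1024 * 1024)
print(round(total_mb, 1))  # ~79.1 MB/sec for both streams combined
```

Note that scaling to 160 x 120 quarters the pixel count, so doubling the frame rate of the monitoring stream still only adds half the data of the recording stream.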

Even at this rate, the data flow from both streams may be a little high for the system, so many cards are really only capturing 400 or 320 fps for monitoring and 160 fps for recording. Today, this type of card is most widely used in low-end DVRs.

Although there are many solutions to getting data through the system, the primary barrier remains the same: the Bus. The processor manages the compression, and it is located on the other side of the Bus. So then why not move the processor?

Simply putting the processor before the Bus within the DVR system is not an option, but it is possible to compress the video before it reaches the Bus. This is done with a specialized chip known as a digital signal processor (DSP).

All true real-time cards use DSPs for compression. DSP technology is commonly found in devices such as mobile phones, handheld video recorders, CD players, hard disc drive controllers and modems. With the arrival of digital broadcasting, DSPs are quickly finding their way into televisions and now into DVRs.

Squeeze the data

The most important function of a DSP is signal compression/decompression, which allows a cellular provider to squeeze more calls into a single cell or a telecom provider to increase the bandwidth of existing communication lines. Quite simply, a DSP is the workhorse of compression. And it is fast, allowing for simultaneous recording and displaying of audio or video.

With this structure, the Bus is no longer a barrier. Nor is processing power. In fact, these systems use a little more than one tenth the processing power of a traditional DVR – even at the higher frame rate. And, because system resources no longer are used to manage the DVR functions, other applications can be added to the system for increased functionality. The major drawback to the DSP, however, is its cost.

However, the DVR specifications that DSPs allow are very attractive and likely will push the market toward higher-performing recorders.

Sidebar: Performance Drivers

There are four primary factors that directly impact digital video recorder system performance:

  • Data rate
  • Hardware structure
  • Thermal generation
  • Software reliability