Much has been written about the significant bandwidth and storage savings H.264 provides compared to MJPEG or MPEG-4 Part 2. A related topic is the many ways H.264 can be configured and the resulting impact on image quality. Resolution, lighting, scene activity, bit rate, bit rate type, I-frame interval and compression all dramatically change how video is captured, transmitted and stored. This creates a “balancing act” between network bandwidth controls and image quality that isn’t easy to execute successfully because of the wide range of parameters in use. Some network camera deployments are overly concerned with bandwidth or storage consumption and consequently limit the cameras’ bit rate, resulting in low image quality. Other deployments are unconcerned with bandwidth and allow an unlimited bit rate, but leave the cameras at their default compression setting, producing the same effect: poor image quality.
Image Size (H)/(V) Versus Image Quality
When asked “What makes up image quality?” most professionals in the industry will say resolution, or the number of megapixels. Resolution is the leading image quality measurement; however, it doesn’t translate directly to image quality. Resolution is the easiest parameter to select and compare between camera manufacturers or models and is thought to be representative of image quality, when it actually determines only image quality potential. What resolution technically determines is the maximum horizontal (H) and vertical (V) pixel count. When someone says they want a five-megapixel camera, they are asking for a camera with a maximum resolution of 2560 (H) x 1920 (V); what they are implying is that this camera delivers very high image quality. Camera manufacturers market resolution in the same way. While this variable is the easiest to select and configure, horizontal and vertical resolution is only one of several important configurable quality parameters on the camera.
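The arithmetic behind the marketing label is simple: the megapixel figure is just the product of the horizontal and vertical pixel counts. A minimal sketch (the function name is illustrative, not from any camera API):

```python
# Illustration: resolution defines a pixel count, not image quality.
# A "5 MP" label is derived purely from the H x V dimensions.

def megapixels(width: int, height: int) -> float:
    """Return the pixel count of a given resolution in megapixels."""
    return width * height / 1_000_000

# The 2560 (H) x 1920 (V) example from the text:
print(megapixels(2560, 1920))  # 4.9152, rounded up and sold as "5 MP"
```

Note that the result is below five megapixels; marketing rounds up, which is one more reason the number alone says little about delivered quality.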
H.264 Bit Rate = “Actual” Image Quality
Most camera manufacturers either require or allow users to configure a limit on the bandwidth the camera uses per second. Sony, for example, uses the bit rate parameter to determine both the image quality and the bandwidth used by its cameras, creating a direct tradeoff between the two. On a two- to three-megapixel camera the bit rate is selectable from 64 Kbps to 8192 Kbps, and Sony recommends setting a higher bit rate to achieve higher image quality.
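That bit rate setting translates directly into storage consumed by the recorder. A back-of-the-envelope sketch of what the endpoints of the 64–8192 Kbps range mean per camera per day (the function is illustrative, not a vendor tool):

```python
def storage_gb_per_day(bit_rate_kbps: float) -> float:
    """Convert a camera's bit rate (Kbps) into storage used per day (GB)."""
    bits_per_day = bit_rate_kbps * 1000 * 86_400   # 86,400 seconds per day
    return bits_per_day / 8 / 1_000_000_000        # bits -> bytes -> GB

# Endpoints of the 64-8192 Kbps range mentioned above:
print(storage_gb_per_day(64))    # ~0.69 GB per day
print(storage_gb_per_day(8192))  # ~88.5 GB per day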
Axis, on the other hand, provides a different configurable parameter on its cameras called compression, which lets you select how much to compress the image. Axis sets this value to 30 by default, and it can be configured within a range of 0 to 100. The image quality impact of this parameter is essentially the same as Sony’s bit rate parameter, but inverted, which can confuse users who switch between cameras: with Sony, the higher the “bit rate,” the better the image quality; with Axis, the lower the “compression,” the better the image quality.
Bit rate type is one of the most misunderstood bandwidth and image quality configuration parameters in the industry. The types available are variable and constant bit rate, or VBR and CBR. CBR fixes the bit rate of the stream regardless of scene activity, complexity and resolution. The caveat is that if you set the rate too high, you waste bandwidth; set it too low, and the camera must increase compression or drop frames. VBR means network bandwidth fluctuates based on what is happening within the camera’s field of view and the changing complexity of the scene. However, competing manufacturers implement these modes differently, with confusing or opposite effects on image quality.
Axis, VIVOTEK and other camera manufacturers use VBR to allow higher-quality images regardless of the amount of bandwidth used. For users concerned with increased bandwidth usage, Axis provides an option to cap the video stream using CBR; if the output rate hits its cap, the user can configure the camera to prioritize frames per second, image quality or neither. Sony uses a feature called adaptive rate control, which automatically lowers image quality or frames per second based on available network bandwidth to create an “optimized” viewing experience. Note that even though VBR bandwidth fluctuates with scene complexity, in some cases a VBR camera will use less bandwidth than a CBR camera, if the fixed bit rate is higher than the actual bit rate demanded by the scene being viewed. Testing is critical to determining end results.
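That last point, that VBR can average below a CBR setting when the scene is mostly quiet, can be sketched with a toy model. All numbers here are hypothetical, chosen only to illustrate the comparison:

```python
# Toy model (hypothetical numbers): per-second bit rates (Kbps) the
# encoder needs as scene complexity changes -- mostly a quiet scene,
# with a brief burst of motion in the middle.
scene_demand_kbps = [500, 520, 480, 3000, 2800, 600, 550]

CBR_RATE = 2000  # a hypothetical fixed CBR setting, in Kbps

# CBR holds the stream at the fixed rate: quiet seconds waste bandwidth,
# and busy seconds force heavier compression or dropped frames.
cbr_usage = [CBR_RATE for _ in scene_demand_kbps]

# VBR lets bandwidth follow the scene, preserving image quality.
vbr_usage = scene_demand_kbps

avg_vbr = sum(vbr_usage) / len(vbr_usage)
avg_cbr = sum(cbr_usage) / len(cbr_usage)
print(avg_vbr, avg_cbr)  # in this scene, VBR averages below the CBR rate
```

With a busier scene the comparison flips, which is exactly why the article insists that testing against the actual field of view is critical.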
Read the Fine Print
What can be done to avoid this confusion? Read the fine print and use the bandwidth calculation tools offered by many manufacturers. Too often, systems are engineered and sold to accommodate high bit rates while the cameras are configured to stream only low ones. Additionally, adjusting the I-frame frequency or applying network Quality of Service can further impact image quality. So, when balancing all the bits on the network, don’t forget that the surveillance mission requires a specific image quality. Don’t limit surveillance capabilities; the network is likely running faster than you think.
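The kind of sanity check those manufacturer calculators perform can be approximated in a few lines. The camera names, bit rates and network budget below are all hypothetical placeholders:

```python
# Hypothetical back-of-envelope check, in the spirit of manufacturers'
# bandwidth calculators: total camera bit rate versus the network budget.
cameras_kbps = {
    "lobby": 4000,    # hypothetical per-camera bit rates, in Kbps
    "parking": 6000,
    "hallway": 2000,
}

NETWORK_BUDGET_KBPS = 100_000  # e.g. reserving 100 Mbps for surveillance

total_kbps = sum(cameras_kbps.values())
headroom_kbps = NETWORK_BUDGET_KBPS - total_kbps
print(f"total {total_kbps} Kbps, headroom {headroom_kbps} Kbps")
```

If the headroom is large, as here, the cameras can likely afford higher bit rates, and therefore higher image quality, than a cautious default would allow.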