Gamut errors: Who cares?

Author: Bob Pank

Published 1st June 2012


Gamut errors are the most common video signal problems. This is because colour television and cinematography depend on being able to represent every pixel on the screen in terms of red, green and blue. We want to deliver perfect RGB signals representing all the possible colours in our pictures, and virtually all display technologies use RGB primary colours to match the RGB colour sensitivities of the human eye. A gamut error occurs when video processed in a non-RGB colour space overloads, or creates negative values in, one of the RGB channels.
If the signals sent along the chain from camera to screen were always in RGB, there would theoretically be little room for any gamut errors. Studio RGB levels leave space at the top and bottom to allow for transient overshoot: in 8-bit digital, black sits at level 16 and 235 is the maximum in red, green or blue.
If bright red goes through the system at 255 that is allowed, but typically the screen will clip it to normal saturated red at 235. Worse, it is possible to have effectively negative values if R, G or B falls below 16, a region which does not represent any real colour other than black. In RGB-connected paths there is only a small range of 14 or 15 levels at either end of each colour signal that is illegal, causing clipping of the viewable image range.
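As a rough illustration of those limits, here is a minimal Python sketch (the function name and numpy usage are my own, not taken from any particular broadcast tool) that clips 8-bit studio-range RGB values to the 16-235 picture range:

import numpy as np

# 8-bit studio-range limits: black at level 16, nominal peak at 235.
BLACK = 16
PEAK = 235

def clip_to_studio_range(rgb):
    # Clip an array of 8-bit R'G'B' values to the nominal picture range.
    # Levels 0-15 and 236-255 can be carried through the system, but a
    # display will typically clip them, as described above.
    rgb = np.asarray(rgb, dtype=np.int32)
    return np.clip(rgb, BLACK, PEAK).astype(np.uint8)

# A 'bright red' pixel at 255 is pulled back to the 235 maximum.
print(clip_to_studio_range([255, 16, 16]))   # -> [235  16  16]
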
Historically, it was realised that it would be more economical to transmit colour images as colour difference components: YUV (or 'YPrPb'). The most important part of a component video signal is the luminance (Y), made by adding the R, G and B signals in the right proportions. This carries the full image detail in black and white – but no colour.
The colour difference signals, U and V, are made by subtracting the luminance from the blue and red signals respectively (B-Y and R-Y). The colour information is thus distilled into an apparently simple format, almost like semaphore when you look at it on a vectorscope: the angle of the combined U and V vector points to the pixel's hue on a circular scale.
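To make that arithmetic concrete, here is a minimal Python sketch of the conversion. It assumes Rec. 601 luma weights and colour-difference scale factors; the article does not tie itself to a particular standard, so treat the exact numbers as an example:

# Luminance plus colour differences from normalised (0..1) R'G'B' values.
# Rec. 601 weights are assumed; the scale factors keep B-Y and R-Y
# within roughly +/-0.5.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # weighted sum: full detail, no colour
    u = (b - y) * 0.564                     # scaled blue colour difference
    v = (r - y) * 0.713                     # scaled red colour difference
    return y, u, v

# A fully saturated red: luminance is low, colour differences swing hard.
print(rgb_to_yuv(1.0, 0.0, 0.0))   # roughly (0.299, -0.169, 0.500)
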
In the component signal world we pass the YUV representation instead of RGB because it is more efficient. This extends to MPEG2, H264 (MP4) and many other formats; RGB only gets reconstituted at the display screen. The big problem with component formats is the range of the colour difference signals. U and V are allowed to go positive or negative, but the allowable range depends on how much luminance there is for any given pixel.
Any processing step such as upconversion, contrast enhancement or a colour balance operation introduces the possibility of the balance of the YUV proportions going wrong. It can be very wrong indeed if, for instance, the luminance is down near black and yet the colour signals are at saturated levels. This could produce strongly negative RGB values that mean nothing to human eyes. These types of gamut error are not necessarily picked up by viewing a standard vectorscope. Other measurement presentations, or a gamut error detector that shows the offending areas for any RGB or YUV excursions as in the Real-Check SoloQC, are necessary in a QC environment if you don’t want your work rejected.
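As a rough sketch of what such a detector has to check (a simplified Python illustration, not the algorithm of any particular product), the code below reconstructs RGB from Y, U and V using the same assumed Rec. 601 style factors as above and flags any channel that lands outside the legal range:

def yuv_to_rgb(y, u, v):
    # Reconstruct normalised R'G'B' from luminance and colour differences
    # (inverse of the conversion sketched earlier).
    r = y + v / 0.713
    b = y + u / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

def gamut_errors(y, u, v, tolerance=0.01):
    # Return (channel, value) pairs falling outside 0..1; a small
    # tolerance allows for legitimate transient overshoot.
    errors = []
    for name, value in zip("RGB", yuv_to_rgb(y, u, v)):
        if value < -tolerance or value > 1.0 + tolerance:
            errors.append((name, round(value, 3)))
    return errors

# Near-black luminance combined with saturated colour differences is legal
# as YUV numbers, but the implied red and blue go well below zero.
print(gamut_errors(0.05, -0.4, -0.4))   # -> [('R', -0.511), ('B', -0.659)]

A real QC tool works on whole frames and highlights the offending picture areas, but the principle is the same: the error only becomes visible once the component values are mapped back to RGB.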
Robin Palmer is Managing Director of Cel-Soft and is currently involved with software solutions for 3D & TV quality control and measurement technology.
