
Color Spaces, Color Encodings, and Color Image Encodings


Sabine Süsstrunk, Ph.D.
Ecole Polytechnique Fédérale de Lausanne

To clearly describe and communicate color information, the color imaging community has defined several color spaces, color encodings, and color image encodings. The following review of the definitions is based on ISO 22028 [ISO04], intended for the imaging community, and might differ from the terminology used in some articles or books. For example, a color encoding or a color image encoding is often simply referred to as a color space.

Color Spaces

According to the CIE [CIE87], a color space is a “geometric representation of colors in space, usually of three dimensions.” A color space can be broadly categorized into three types: colorimetric, color appearance, and device-dependent.

For colorimetric color spaces, the relationship between the color space and CIE colorimetry is clearly defined. Besides CIEXYZ, CIELAB, and CIELUV, additive RGB color spaces also fall into this category. They are defined by a set of additive RGB primaries, a color space white-point, and a color component transfer function. The additive RGB sensors are a linear combination of the XYZ color matching functions (see Figure 35):

$$\begin{bmatrix} \bar{r}(\lambda) \\ \bar{g}(\lambda) \\ \bar{b}(\lambda) \end{bmatrix} = M \begin{bmatrix} \bar{x}(\lambda) \\ \bar{y}(\lambda) \\ \bar{z}(\lambda) \end{bmatrix} \qquad (1)$$

where $M$ is a (3 × 3) non-singular matrix mapping XYZ values to linear RGB values.

The RGB primaries associated with these sensors are the XYZ tristimulus values that correspond to pure red, green, and blue:

$$\begin{bmatrix} X_R \\ Y_R \\ Z_R \end{bmatrix} = M^{-1} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} X_G \\ Y_G \\ Z_G \end{bmatrix} = M^{-1} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} X_B \\ Y_B \\ Z_B \end{bmatrix} = M^{-1} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \qquad (2)$$

A color space white-point is the color stimulus to which the values are normalized, usually a CIE daylight illuminant such as D50 or D65. For example, a color space with a white-point of D65, $(X_{D65}, Y_{D65}, Z_{D65})$, ensures that all achromatic colors, i.e., all scalings of $(X_{D65}, Y_{D65}, Z_{D65})$, are mapped to equal code values.

For the tristimulus values of the white-point itself:

$$\begin{bmatrix} X_{D65} \\ Y_{D65} \\ Z_{D65} \end{bmatrix} = M^{-1} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \qquad (3)$$

For example, the XYZ to sRGB [IEC99, ANS01, IEC03] transform is as follows:

$$M_{sRGB} = \begin{bmatrix} 3.2406 & -1.5372 & -0.4986 \\ -0.9689 & 1.8758 & 0.0415 \\ 0.0557 & -0.2040 & 1.0570 \end{bmatrix} \qquad (4)$$

When substituting Equation 4 for M in Equation 3, the result for $(X_{D65}, Y_{D65}, Z_{D65})$ is (0.9505, 1.0000, 1.0890), which indeed corresponds to the tristimulus values of CIE illuminant D65, normalized to Y = 1. A color component transfer function is a function that accounts for the non-linear response of the human visual system to luminance, or for the non-linearity of a device. The function used to model device non-linearities is usually called a gamma (γ) function. In the case of CRT monitors, this gamma function approximates a power function. Color component transfer functions are thus usually modeled with a logarithmic or power function.¹ A simple gamma function could take the form of:

$$C' = C^{1/\gamma} \qquad (5)$$
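The substitution of Equation 4 into Equation 3 can be checked numerically. Below is a minimal sketch in pure Python (no third-party libraries; the 3 × 3 inverse is computed via the adjugate), which inverts the published sRGB matrix and maps the RGB white (1, 1, 1) back to XYZ:

```python
def inverse_3x3(m):
    """Invert a 3x3 matrix (nested lists) via the adjugate / determinant."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    adj = [
        [e*i - f*h, c*h - b*i, b*f - c*e],
        [f*g - d*i, a*i - c*g, c*d - a*f],
        [d*h - e*g, b*g - a*h, a*e - b*d],
    ]
    return [[x / det for x in row] for row in adj]

# M maps (D65-normalized) XYZ to linear sRGB, per IEC 61966-2-1.
M = [[ 3.2406, -1.5372, -0.4986],
     [-0.9689,  1.8758,  0.0415],
     [ 0.0557, -0.2040,  1.0570]]

M_inv = inverse_3x3(M)

# XYZ of the RGB white (1, 1, 1) is the row-sum of M^-1 (Equation 3).
white = [sum(row) for row in M_inv]
# white is approximately (0.9505, 1.0000, 1.0890), i.e. illuminant D65.
```

Because the published matrix is rounded to four decimals, the recovered white-point matches D65 only to a few units in the third decimal place.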

¹ Note that using these definitions, CIEXYZ calculated under illuminant D65 is a different color space compared to CIEXYZ under illuminant A. Similarly, CIEXYZ under D65 and CIEXYZ under D65 with a logarithmic transfer function are also different.

To continue with the example of sRGB, the non-linear transfer function is a power function with an exponent of 1/2.4 ($C = R, G, B$ and $C' = R', G', B'$):

$$C' = \begin{cases} 1.055\, C^{1/2.4} - 0.055, & C > 0.0031308 \\ 12.92\, C, & C \le 0.0031308 \end{cases} \qquad (6)$$

Luma-chroma color spaces derived from additive RGB spaces are also considered to be colorimetric color spaces. These spaces linearly transform the non-linear RGB values to more de-correlated, opponent representations. Generically, they are often referred to as YCC or YCbCr spaces. The resulting luminance and chrominance values are only loosely related to “true” perceptual luminance and chroma. They depend on the additive primaries, the color component transfer function, and the transform. Such spaces are generally the bases for color image encodings used in compression [RJ02].

The transform from non-linear sRGB ($R', G', B'$) values to sYCC ($Y', C_b, C_r$) is as follows:

$$\begin{bmatrix} Y' \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1687 & -0.3313 & 0.5000 \\ 0.5000 & -0.4187 & -0.0813 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \qquad (7)$$
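This luma-chroma transform is a plain matrix multiplication; a minimal sketch, using the BT.601-derived coefficients employed by sYCC (and by JPEG/JFIF), shows the de-correlation directly — achromatic inputs produce zero chroma:

```python
# BT.601-derived coefficients used by the sYCC transform.
RGB_TO_YCC = [
    [ 0.2990,  0.5870,  0.1140],   # Y'  (luma: rows sum to 1)
    [-0.1687, -0.3313,  0.5000],   # Cb  (chroma: rows sum to 0)
    [ 0.5000, -0.4187, -0.0813],   # Cr
]

def rgb_to_ycc(r_p: float, g_p: float, b_p: float):
    """Non-linear R'G'B' in [0, 1] -> (Y', Cb, Cr); chroma lies in [-0.5, 0.5]."""
    return tuple(row[0] * r_p + row[1] * g_p + row[2] * b_p
                 for row in RGB_TO_YCC)

# A mid-grey maps to Y' = 0.5 with vanishing Cb and Cr, because the
# chroma rows of the matrix sum to zero.
y, cb, cr = rgb_to_ycc(0.5, 0.5, 0.5)
```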

Color appearance color spaces are the output of color appearance models, such as CIECAM97s [CIE98] and CIECAM02 [MFH + 02]. They are generally based on CIE colorimetry and include parameters and non-linear transforms related to the stimulus surround and viewing environment. The color appearance color space values describe perceptual attributes such as hue, lightness, brightness, colorfulness, chroma, and saturation [Fai98].

Device-dependent color spaces do not have a direct relationship to CIE colorimetry, but are defined by the characteristics of an input or output device. For input-device-dependent color spaces, the spectral characteristics and color component transfer function of an actual or idealized input device are required, as well as a white-point. For output-device-dependent color spaces, such as CMYK, the relationship between the control signals of a reference output device and the corresponding output image is specified either using output spectra, output colorimetry, or output density.

An input-referred color image encoding describes the original's colorimetry. In most workflows, however, the image is directly transformed from raw device coordinates into an output-referred image encoding, which describes the color coordinates of some real or virtual output. If the rendered image encoding describes a virtual output, then additional transforms are necessary to convert the image into output device coordinates, which constitute an output-device-specific color image encoding (see Figure 36).

Color space encodings

A color encoding is always based on a specific color space, but additionally includes a digital encoding method. Integer digital encodings linearly map the color space range onto a digital code value range. The color space range defines the minimum and maximum color space values that are represented in the digital encoding. Most RGB color space ranges are typically defined as [0, 1], while CIELAB may range from [0, 100] for L* and from [-150, 150] for a* and b*.

This usually means that all values smaller than the minimum value (i.e., 0 or -150) and larger than the maximum value (i.e., 1, 100, or 150) are clipped to the minimum and maximum values, respectively.

The digital code value range defines the minimum and maximum integer digital code values corresponding to the minimum and maximum color space values. For example, an 8-bit-per-channel encoding for an RGB color space with range [0, 1] will associate the digital code value 0 to the color space value 0, and 255 to 1, respectively. Consequently, by varying the digital code value range and/or color space range, one can derive a family of color space encodings based on a single color space. sRGB [IEC99, ANS01, IEC03] and ROMM/RIMM RGB [ANS02b, ANS02a] are two such examples.

For example, the non-linear sRGB values ($C' = R', G', B'$) are quantized to 8 bits per channel ($C'_{8bit} = R'_{8bit}, G'_{8bit}, B'_{8bit}$) as follows:

$$C'_{8bit} = \mathrm{round}(255 \cdot C') \qquad (8)$$

The non-linear sYCC values are quantized to 8 bits as follows:

$$Y'_{8bit} = \mathrm{round}(255 \cdot Y'), \quad C_{b,8bit} = \mathrm{round}(255 \cdot C_b + 128), \quad C_{r,8bit} = \mathrm{round}(255 \cdot C_r + 128) \qquad (9)$$
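Both quantizations can be sketched with one helper; the offset of 128 re-centres the signed chroma range [-0.5, 0.5] within the unsigned 8-bit range, and out-of-range values are clipped as described above:

```python
def quantize_8bit(value: float, offset: int = 0) -> int:
    """Map a [0, 1] value (or, with offset=128, a [-0.5, 0.5] chroma
    value) to an integer digital code value, clipped to [0, 255]."""
    code = round(255 * value) + offset
    return max(0, min(255, code))

def srgb_to_8bit(r_p, g_p, b_p):
    """Quantize non-linear sRGB R'G'B' to 8 bits per channel."""
    return tuple(quantize_8bit(c) for c in (r_p, g_p, b_p))

def sycc_to_8bit(y_p, cb, cr):
    """Quantize non-linear sYCC to 8 bits; chroma is offset by 128."""
    return (quantize_8bit(y_p),
            quantize_8bit(cb, offset=128),
            quantize_8bit(cr, offset=128))

print(srgb_to_8bit(1.0, 0.5, 0.0))    # (255, 128, 0)
print(sycc_to_8bit(0.5, 0.0, -0.25))  # (128, 128, 64)
```

Varying the scale factor and offset in `quantize_8bit` is precisely how a family of color space encodings can be derived from a single color space.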

Using the color space primaries and the color space range, the gamut of a color encoding can be visually represented in the x, y or the more perceptually uniform u′, v′ chromaticity diagrams. Figure 36 illustrates the color-encoding gamuts of the sRGB and ROMM RGB primaries with a color space range of [0, 1] in x, y and u′, v′ coordinates. Both are additive RGB color spaces based on a linear transformation of the CIE 1931 CMFs. The encoding gamut of sRGB is smaller than that of ROMM RGB, i.e., more visible colors can be encoded in ROMM RGB than in sRGB. The sRGB sensors were optimized to encompass a CRT monitor gamut for an encoding range of [0, 1], while ROMM RGB was intended to cover the gamut of most printing colors. Note that the ROMM RGB gamut extends beyond the spectral locus, i.e., it encompasses chromaticity values that are not visible to the human eye. The digital code values associated with those maximum color range values therefore have no perceptual meaning, and are not "used" when encoding image data derived from visible radiation.

Color image encodings

Color image encodings are based on a specific color space encoding, but additionally define the parameters necessary to properly interpret the color values, such as the image state and the reference viewing environment. Image state refers to the color rendering of the encoded image. Scene-referred color encodings are representations of the estimated color space coordinates of the elements of the original scene. Output-referred color encodings are representations of the color space coordinates of image data that has been rendered for a specific real or virtual output device and viewing conditions.

Reference viewing conditions need to be associated with these image states so that the color appearance can be interpreted. Generally, image surround, adapted white-point, luminance of adapting field, and viewing flare will all be specified. In case of output-referred color encodings, a reference imaging medium—either a real or idealized monitor or print—also needs to be characterized by its medium white-point, medium black-point, and target gamut.

Note that in theory, a color image encoding could be based on any color space encoding. In practice, color space encodings are usually optimized for a given image state by defining an application-specific digital code value range and color space range. For example, the sRGB color image encoding for output-referred image representation is optimized for typical CRT monitor gamut and has a limited dynamic range. It is thus unsuitable for most scene-referred image data.

Digital color image workflow

The color flow of a digital image can be generalized as follows [ISO04]. An image is captured into a sensor or source device space, which is device- and image-specific and contains raw device coordinates. It may then be transformed into an input-referred color image encoding.

Standard RGB color image encodings will always describe either an input-referred or output-referred image state; most existing standard RGB color image encodings fall into the category of output-referred encodings. Source and output encodings are always device-specific.

Raw sensor response coordinates

When a scene or original is captured, either by a scanner or by a digital camera, its first color space representation is device- and scene-specific, defined by illumination, sensor, and filters. In the case of scanners, the illumination should be constant for each image. With digital cameras, the illumination can vary from scene to scene, and even within a scene. A source-specific RGB is not a CIE-based color encoding, but a spectral encoding defined by the spectral sensitivities of the camera or scanner.

When images are archived or communicated in raw device coordinates, camera or scanner characterization data—such as device spectral sensitivities, illumination, and linearization data—have to be maintained so that further color and image processing is possible. Ideally, the image should be saved in a standard file format, such as TIFF/EP, which has defined tags for the necessary information.

It is highly unlikely that there will ever be a “standard” source RGB encoding. With digital cameras, the illumination is scene-dependent. With scanners, manufacturers would have to agree on using the same light source, sensors, and filters—components that are typically selected on the basis of engineering considerations.

Input-referred image state

The transformation from raw device coordinates to an input-referred, device-independent representation is image- and/or device-specific; it includes linearization, pixel reconstruction (if necessary), and white-point selection, followed by a matrix conversion (Figure 37). If the white-point of a scene is not known, as is often the case in digital photography, it has to be estimated. The purpose of an input-referred image color space is to represent an estimate of the scene's or the original's colorimetry. An input-referred space maintains the relative dynamic range and gamut of the scene or original.
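The steps just listed can be sketched as a short pipeline. This is only an illustration: the linearization exponent, white-balance gains, and camera-to-XYZ matrix below are hypothetical placeholder values, not data from any real device.

```python
def raw_to_scene_xyz(raw, gamma=2.2,
                     wb_gains=(2.0, 1.0, 1.5),
                     cam_to_xyz=((0.41, 0.36, 0.18),
                                 (0.21, 0.72, 0.07),
                                 (0.02, 0.12, 0.95))):
    """Raw device RGB in [0, 1] -> estimated scene XYZ.

    All parameters are illustrative assumptions: a power-law
    linearization (undoing a device transfer curve), diagonal
    white-balance gains for the estimated scene white-point, and
    a 3x3 camera-to-XYZ matrix conversion.
    """
    linear = [c ** gamma for c in raw]                        # linearization
    balanced = [g * c for g, c in zip(wb_gains, linear)]      # white-point scaling
    return tuple(sum(m * c for m, c in zip(row, balanced))    # matrix conversion
                 for row in cam_to_xyz)
```

In a real workflow each stage would be derived from device characterization data (spectral sensitivities, linearization tables) and from the estimated scene-adopted white-point.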

Input-referred images will need to go through additional transforms to make them viewable or printable. Appearance modeling can be applied when an equivalent or corresponding reproduction is desired, and the output medium supports the dynamic range and gamut of the original. In most applications, the goal is to create a preferred reproduction, meaning that the image is optimized to look good on a specific medium with a different dynamic range and gamut than the original. In that case, a digital photography reproduction model is applied. Input-referred image spaces can be used for archiving images when it is important that the original colorimetry is preserved so that a facsimile can be created at a later date.

The advantage of input-referred images, especially if the images are encoded in higher bit-depth, is that they can always be tone- and color-processed for all kinds of different rendering intents and output devices at a later date. The quality of the colorimetric estimate depends on the ability to choose the correct scene-adopted white-point and the correct transformations. CIE XYZ, Photo YCC, and CIELAB are all examples of color spaces that describe an estimate of the scene’s or original’s colorimetry and can be used to define an input-referred color image encoding. Two standard, input-referred color image encodings are ISO RGB and RIMM RGB.

Output-referred image state

Output-referred image encodings describe the image colorimetry on a real or virtual output device. Images can be transformed into output-referred encodings from either raw device coordinates or input-referred image encodings. The complexity of these transforms varies: they can range from a simple video-based approach using a linear transform and a simple contrast ("gamma") adjustment to complex, image-dependent algorithms. The transforms are usually non-reversible, as some information from the original scene encoding is discarded or compressed to fit the dynamic range and gamut of the output. The transforms are image-specific, especially if pictorial reproduction modeling is applied. The rendering intent of the image has therefore already been chosen, and cannot easily be reversed. For example, an image that has been pictorially rendered for preferred reproduction cannot be re-transformed into a colorimetric reproduction of the original without knowledge of the rendering transform used.

Output-referred image encodings are usually designed to closely resemble some output device characteristics, which ensures that there will be minimal loss when converting to the output-specific space. Most commercial image applications only support 24-bit image encoding, making it difficult to apply major tone and color corrections at that stage without incurring visual image artifacts. Some rendered RGB color image encodings are designed so that no additional transform is necessary to view the images; in effect, the output-referred RGB color space is the same as the monitor output space. For example, sRGB is an output-referred image encoding that describes a real output device (a CRT monitor) and as such is equivalent to a device encoding.

sRGB is currently the most common output-referred encoding. All consumer cameras output sRGB-encoded files, and many professional cameras also offer that option. Another popular output-referred encoding is Adobe RGB, which allows encoding of more colors than the rather limited gamut of sRGB. ProPhoto RGB (ROMM RGB) is a standard output-referred image encoding optimized for print reproduction.

Output device coordinates

Transforms from output-referred RGB color image encodings to output device coordinates are device- and media-specific. If an output-referred space is equal or close enough to real device characteristics, such as “monitor” RGBs, no additional transformation to device-specific digital values is needed. In many cases, however, there is a need for additional conversions. For most applications, this can be accomplished using the current ICC color management workflow. An “input” profile maps the reproduction description in the output-referred encoding to the profile connection space (PCS), and the “output” profile maps from the PCS to the device- and media-specific values.

Aside from graphic arts applications, images today are rarely archived and communicated using output-device coordinates, such as device- and media-specific RGB, CMY, or CMYK spaces. However, many legacy files, such as CMYK separations and RGB monitor-specific images, need to be color managed so that they can be viewed and printed on other devices.

 

