
Image Formation


Sabine Süsstrunk, Ph.D.
Ecole Polytechnique Fédérale de Lausanne

Introduction

Fundamental to image formation is a sensor’s response to radiation. In most applications, we are concerned only with the part of the radiation spectrum (approximately 380 to 700 nanometers) to which the human visual system is sensitive. Likewise, the sensors considered here are sensitive only to such radiation.

Illuminants and light

Light is the part of the electromagnetic radiation that causes the light-sensitive elements (rods and cones) in the eyes to respond. The range (spectrum) of this “visible” radiation is usually described in wavelengths (λ), given in units of nanometers. For normal human vision, the sensitivity ranges from approximately 380 to 700 nm.

According to quantum theory [WS82], the energy of electromagnetic radiation is given by

Q = hν = hc/λ

where Q is the photon energy in joules J or eV (1 eV = 1.602 × 10⁻¹⁹ J), h is Planck’s constant (h = 6.626 × 10⁻³⁴ J s), ν is the frequency (s⁻¹), c is the speed of light in vacuum (2.998 × 10⁸ m s⁻¹), and λ is the wavelength (m).
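
The photon-energy relation above can be checked numerically; this is a minimal sketch (function and constant names are illustrative, not from the text):

```python
# Photon energy Q = h*nu = h*c/lambda.
H = 6.626e-34      # Planck's constant, J s
C = 2.998e8        # speed of light in vacuum, m/s
EV = 1.602e-19     # joules per electronvolt

def photon_energy(wavelength_m):
    """Return the photon energy in joules for a given wavelength in meters."""
    return H * C / wavelength_m

q = photon_energy(555e-9)   # green light, middle of the visible range
print(q, q / EV)            # ~3.58e-19 J, ~2.23 eV
```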

Radiant power Φ (also called flux) is radiant energy emitted, transferred, or received per unit time interval: Φ = dQ/dt. The unit of Φ is the watt (W = J s⁻¹). By definition, λ = cν⁻¹, and dλ = −cν⁻² dν. Over the spectral interval dλ, the radiant power can thus be expressed as Φ_λ dλ = −Φ_ν dν.

The minus sign accounts for the decrease in frequency with an increase in wavelength. It follows that:

Φ_λ = (ν²/c) Φ_ν = (c/λ²) Φ_ν

The number of photons (or quanta) that are emitted in unit time per unit frequency interval is calculated by:

n_ν = Φ_ν / (hν)

The properties of electromagnetic radiation interacting with matter are synthesized in a set of rules called the radiation laws. These laws apply when the matter is a blackbody. Every object heated to a temperature greater than 0 K (absolute zero) emits energy from its surface. A blackbody (Planckian) radiator is an “ideal” solid object that can absorb and emit electromagnetic radiation in all parts of the electromagnetic spectrum, so that all incident radiation is completely absorbed.

Emission is possible at all wavelengths and in all directions. The spectral energy distributions are continuous functions of wavelength. The exact radiation emitted from an object depends on its absolute temperature.

The three radiation laws most relevant to imaging are Planck’s Formula, Wien’s Displacement Law, and the Stefan-Boltzmann Law. Planck’s Formula states that the spectral radiant excitance M_λ of a blackbody at temperature T per unit wavelength interval is given by:

M_λ = (2πhc²/λ⁵) · 1/(exp(hc/(λkT)) − 1)

where k is Boltzmann’s constant (1.381 × 10⁻²³ J K⁻¹).


The SI unit for spectral radiant excitance is W m⁻³, but a commonly used unit is W m⁻² nm⁻¹.

Wien’s Displacement Law states that the wavelength of the maximum emitted radiation is inversely proportional to the temperature:

λ_max = b/T

where λ_max is the wavelength of the peak emission (in meters), b is Wien’s displacement constant (2.898 × 10⁻³ m K), and T is the temperature in kelvin. The Stefan-Boltzmann Law states that as the temperature increases, the amount of radiation at each wavelength increases. The (total) radiant excitance M is proportional to T⁴:

M = σT⁴

where σ is the Stefan-Boltzmann constant (5.670 × 10⁻⁸ W m⁻² K⁻⁴) and T is the temperature in kelvin.
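
The three radiation laws can be sketched numerically as follows; constant values are standard physical constants, and the function names are illustrative:

```python
import math

H = 6.626e-34      # Planck's constant, J s
C = 2.998e8        # speed of light in vacuum, m/s
K = 1.381e-23      # Boltzmann's constant, J/K
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
B = 2.898e-3       # Wien's displacement constant, m K

def spectral_excitance(wavelength_m, temp_k):
    """Planck's Formula: spectral radiant excitance M_lambda in W m^-3."""
    return (2 * math.pi * H * C**2 / wavelength_m**5 /
            (math.exp(H * C / (wavelength_m * K * temp_k)) - 1))

def peak_wavelength(temp_k):
    """Wien's Displacement Law: wavelength of maximum emission, in meters."""
    return B / temp_k

def total_excitance(temp_k):
    """Stefan-Boltzmann Law: total radiant excitance M in W m^-2."""
    return SIGMA * temp_k**4

# A 6500 K blackbody (roughly daylight) peaks in the short-wavelength
# visible range:
print(peak_wavelength(6500))   # ~4.46e-7 m, i.e. ~446 nm
```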

Radiometry

The radiometric quantities and units are energy, power, radiant excitance, irradiance, radiant intensity, and radiance. The latter four combine power with geometric quantities. Irradiance (flux density) is the power per unit area, dΦ/dA, incident from all directions in a hemisphere onto a surface. It is related to radiant excitance, and the unit for irradiance is also W m⁻². The difference lies in the unit area: radiant excitance refers to the area of a source, and irradiance to the area of a receiver. The symbol for irradiance is E.

Radiant intensity is power per unit solid angle (dΦ/dω), measured in W sr⁻¹. sr is the abbreviation for steradian, the solid angle that, having its vertex at the center of a sphere, cuts off an area on the surface of the sphere equal to that of a square with sides of length equal to the radius of the sphere [NIS95]. Dividing the surface area of a sphere by the square of its radius shows that there are 4π steradians of solid angle in a sphere. The steradian is related to the radian, the plane angle between two radii of a circle that cuts off on the circumference an arc equal in length to the radius. Radiance is power per unit surface area per unit solid angle: dΦ/(dω dA cos θ), where θ is the angle between the surface normal and the specified direction. The symbol for radiance is L.

In imaging, “light” is often characterized by its spectral power distribution E(λ). An illuminant SPD denotes the radiant “power” given at each wavelength per wavelength interval of the visible spectrum. Note that the spectral power distribution, using the symbol E(λ), should correctly be named spectral irradiance distribution (or radiant excitance distribution, depending on the context); in most of the literature, however, E(λ) is called the spectral power distribution, or SPD. The measurements are usually given in W m⁻² nm⁻¹, or they are relative. When modeling human vision or an imaging system, we are often more interested in relative than in absolute responses. It therefore suffices to use relative spectral power distributions that are normalized (either to 1, or to 100 at 555 or 560 nm [WS82]).

Photometry

Radiometry is the measurement of electromagnetic radiation. Photometry is the measurement of light. Photometric quantities are the same as radiometric quantities, except that they are weighted by the spectral response of the eye.

Luminous power (or luminous flux) is calculated from radiant power (or flux) as follows:

Φ_L = 683 ∫ Φ_R(λ) V(λ) dλ

where Φ_L is the luminous power (units: lumens), Φ_R(λ) is the spectral radiant power per wavelength (W nm⁻¹), V(λ) is the luminosity function of the human eye, and 683 lm W⁻¹ is the maximum luminous efficacy. The luminosity function V(λ) is a measure of the sensitivity of the human visual system to radiant power, which differs across the visible spectrum. Figure 20 illustrates V(λ), as defined by the CIE [WS82].
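
A crude rectangle-rule sketch of this integral; the V(λ) samples below are coarse CIE photopic values, and all names and the flat test spectrum are illustrative:

```python
# Luminous power from a sampled spectral radiant power distribution:
# Phi_L = 683 * sum(Phi_R(lambda) * V(lambda)) * dlambda
V = {500: 0.323, 555: 1.000, 600: 0.631, 650: 0.107}  # luminosity samples

def luminous_power(spectral_power, dlambda=50.0):
    """spectral_power: dict {wavelength_nm: W/nm}; returns lumens (coarse)."""
    return 683.0 * sum(p * V[wl] for wl, p in spectral_power.items()) * dlambda

# A hypothetical source emitting 0.001 W/nm at each sampled wavelength:
flat = {wl: 0.001 for wl in V}
print(luminous_power(flat))   # ~70 lm
```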

The radiometric units and their corresponding photometric units are listed in Table 1. Note that like radiance and luminance, irradiance and illuminance have the same symbol.

Table 1. Common radiometric and photometric terms and units

The following comments are helpful for understanding photometric quantities in imaging:

  • A point light source with an intensity of 1 cd emits a power of 1 lumen per steradian (lm/sr) in all directions.
  • A point light source with an intensity of 1 cd produces an illuminance of 1 lux at a distance of 1 m (see Figure 21).
  • Inverse Square Law: the illuminance E decreases with the distance squared, E = I/d², where d is the distance (m) and d > 5 × source diameter.
  • The luminance L does not decrease with (viewing) distance.
  • Cosine Law: the illuminance E falling on any surface varies with the cosine of the incident angle θ: E_θ = E cos θ.
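
The inverse square and cosine laws above can be sketched as follows (function names are illustrative):

```python
import math

def illuminance(intensity_cd, distance_m):
    """Inverse Square Law: E = I / d^2, in lux (valid for d > 5x source diameter)."""
    return intensity_cd / distance_m**2

def surface_illuminance(e_lux, angle_deg):
    """Cosine Law: illuminance on a surface tilted by the incident angle."""
    return e_lux * math.cos(math.radians(angle_deg))

print(illuminance(1.0, 1.0))           # 1 cd at 1 m -> 1 lux
print(surface_illuminance(100.0, 60))  # 100 lux at 60 degrees -> 50 lux
```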

Illuminants

The Commission Internationale de L’Eclairage (CIE) [CIE86] specifies relative spectral power distributions of typical phases of daylight illuminants. These illuminant SPDs are calculated according to a method proposed by Judd et al. [JMW64], based on the mean and first two principal components of a series of daylight measurements.


It is common usage to name these illuminants with the letter “D” for daylight, followed by the first two digits of the corresponding correlated color temperature. The correlated color temperature is defined as the temperature of a blackbody radiator whose perceived color most closely resembles that of the given selective radiator at the same brightness and under specified viewing conditions [WS82]. For example, D55 is an illuminant calculated according to the method proposed in [JMW64], respectively [CIE86], with a correlated color temperature of 5,500 K. Figure 23 illustrates different daylight illuminants, ranging from D45 to D75.

For the purpose of colorimetry, the International Organization for Standardization (ISO) and the CIE have standardized two specific illuminants, D65 and A [ISO98]. D65 is a daylight illuminant with a correlated color temperature of 6,500 K. A is a tungsten-filament illuminant whose relative SPD is that of a Planckian radiator at a temperature of 2,856 K.

Reflectance

The color of a (non-transparent) object is characterized by its surface reflectance S(λ), with S(λ) ∈ [0, 1]. For each wavelength or wavelength interval, the reflectance factor indicates how much of the incoming radiation is reflected: S(λ) = 1 means that all incoming radiation is reflected, while S(λ) = 0 indicates that all incoming radiation is absorbed. Figure 24 illustrates three reflectances of the Macbeth color checker rendition chart [MMD76].

Sensors

A sensor is an entity that reacts to light. The term is also often used to denote the sensitivity function R(λ), which indicates the sensor’s responsiveness to radiation at a given wavelength per wavelength interval.

In general, sensors are considered to be physical entities, such as a CCD or a CMOS photo site, photographic film, or the cones and rods in the human retina, which physically exist and give a positive response when radiation is detected. In imaging, however, sensors need not be physical, and we also allow for sensors with negative sensitivities. From a conceptual point of view, these sensors are the result of some “processing,” either by the human visual system or by an imaging system (see opponent color mechanisms).

Sensors of the Human Visual System

For the purpose of modeling the human visual system (HVS), several different sensor sensitivities can be considered, derived from physiological and/or psychophysical properties of the human visual system. We limit the discussion here to the “physical” sensors of the HVS, namely the photoreceptors. These light-sensitive elements are called the rods and cones. They contain light-sensitive photochemicals that convert light quanta into an electrical potential, which is transmitted to the brain through a chain of neural interactions. They are located in the retina, the curved surface at the back of the eye (see Figure 25). Their names are derived from their typical shapes.

Rods and cones are not uniformly distributed in the retina. Cones are primarily concentrated in the fovea, the area of the retina at the end of the optical axis that covers approximately 2° of visual angle. Beyond 10°, there are almost no cones. Rods, on the other hand, are primarily located outside of the fovea.

There are three types of cones, called L (long), M (middle), and S (short) for the relative spectral positions of their peak sensitivities (see Figure 26). There is only one type of rod. The activities of the rods and cones are driven by the general luminance level of a stimulus. Photopic vision (luminance levels > 10 cd/m²) refers to visual sensations when only the cones are active; scotopic vision (luminance levels < 0.01 cd/m²) to when only the rods are active. Mesopic vision refers to the luminance range where both cones and rods are active. Consequently, differentiating color signals is possible only under photopic or mesopic viewing conditions.

Image sensors

The physical image sensors currently on the market are either CCDs (charge-coupled devices) or CMOS (complementary metal oxide semiconductor) sensors. The photosensitive material is silicon, a semiconductor that converts photon energy into an electrical current relatively efficiently. The photo sites (often called pixels) are arranged either in rows (linear arrays) or in areas (area arrays).

The pixels of a CCD/CMOS capture electrons in proportion to the sensor plane (also called focal plane) exposure. Exposure H is defined as:

H = E t

where E is the sensor plane illuminance in lux and t is the time in s; H is thus given in lux-seconds (lx s).
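
A minimal sketch of this definition, which also shows the reciprocity it implies (names and figures are illustrative):

```python
# Sensor-plane exposure H = E * t, in lux-seconds.
def exposure(illuminance_lux, time_s):
    return illuminance_lux * time_s

# Reciprocity: halving the illuminance while doubling the time
# yields the same exposure.
print(exposure(100.0, 0.01))   # 1.0 lx s
print(exposure(50.0, 0.02))    # 1.0 lx s
```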

The image properties of a sensor (CCD/CMOS) depend on:

  • Number of photo sites
  • Photo site size
  • Photo site capacity
  • Noise
  • Quantization gain
  • Filters
  • Filter arrangement

The photo site capacity (also called well or charge capacity), which is related to the photo site size, determines the saturation level. The charge capacity is the maximum charge level at which the sensor response is still linear (approximately 80 percent of saturation). The more photons a pixel can absorb, the higher the dynamic range (and bit depth) of the sensor. The dynamic range is the ratio of the charge capacity to the read noise; the read noise varies from sensor to sensor (approximately 5–10 electrons for consumer-quality camera CCDs).
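
Dynamic range as the ratio of charge capacity to read noise can be expressed in stops (bits) or decibels; the well-capacity and noise figures below are hypothetical:

```python
import math

def dynamic_range(full_well_electrons, read_noise_electrons):
    """Return the capacity/noise ratio, plus its value in stops and in dB."""
    ratio = full_well_electrons / read_noise_electrons
    return ratio, math.log2(ratio), 20 * math.log10(ratio)

ratio, stops, db = dynamic_range(20000, 8)  # hypothetical consumer CCD
print(ratio, stops, db)                     # 2500, ~11.3 stops, ~68 dB
```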

Silicon converts photon (quanta) energy into an electrical current. The quantum efficiency of a sensor can be measured over the visible spectrum, with or without the color filters. The quantum efficiency indicates the percentage of quanta that is converted to electrons (see Figure 27). For a real physical sensor, such as a CCD or CMOS, R(λ) corresponds to the quantum efficiency function of the sensor combined with the color filters.

Physical image formation

In the case of imaging an object at spatial position x, the spectral irradiance falling on the sensor is proportional to the product of the reflectance S(λ) and the illuminant SPD E(λ) at that spatial position:

C(x, λ) = E(x, λ) S(x, λ)

C(λ) is called the color signal or color stimulus. 3 We ignore here surface characteristics, lighting, and viewing geometry, and assume that we can neglect all these factors that influence the color signal by using a relative SPD E(λ) instead of physical irradiance measures. In many color imaging applications, this approximation is sufficient to model image formation.

3 Note that in the case of imaging “light,” the reflectance factor S(λ) = 1 and C(λ) = E(λ).

Thus, assuming a single illuminant across the scene, we can simplify Eq. 9 and write:

C(x, λ) = E(λ) S(x, λ)

The color response ρ_k of a sensor k with sensitivity R_k(λ) at spatial position x can therefore be expressed as:

ρ_k(x) = ∫_vs C(x, λ) R_k(λ) dλ

where vs indicates the visible spectrum. For many color imaging algorithms, R_k(λ), C(x, λ), E(x, λ), and S(x, λ) can be adequately represented by samples taken at Δλ = 10 nm intervals over the spectral range of 400 to 700 nm [SSS92]. The integral in Eq. (11) can thus be replaced by a summation:

ρ_k(x) = n Σ_{λ=400}^{700} C(x, λ) R_k(λ)

where n is a normalization factor, which usually either normalizes the response ρ_k(x) according to the number of samples used (e.g., n = 1/31) or ensures that max(ρ_k(x)) = 1. We assume that such a normalization factor is used, and only note it explicitly when necessary.
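
A minimal sketch of this discrete response with 31 samples at 10 nm intervals; the flat illuminant, gray reflectance, and uniform sensor below are made-up test spectra:

```python
# Discrete sensor response: rho_k(x) = n * sum over lambda of C * R_k,
# with C = E * S sampled on 31 wavelengths from 400 to 700 nm.
WAVELENGTHS = list(range(400, 701, 10))   # 31 samples

def sensor_response(illuminant, reflectance, sensitivity, n=1.0 / 31):
    """All inputs are 31-element lists sampled on WAVELENGTHS."""
    return n * sum(e * s * r for e, s, r in
                   zip(illuminant, reflectance, sensitivity))

# Equal-energy illuminant, 50% gray reflectance, uniform sensor:
e = [1.0] * 31
s = [0.5] * 31
r = [1.0] * 31
print(sensor_response(e, s, r))   # 0.5
```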

Using algebraic notation, the color signal C(x, λ), reflectance S(x, λ), illumination E(x, λ), and sensor sensitivity R_k(λ) can thus be expressed as 31 × 1 vectors c_x, s_x, e_x, and r_k, respectively:

ρ_k(x) = r_k^T c_x = r_k^T diag(e_x) s_x

where T denotes the transpose and diag is an operator that turns e_x into a diagonal matrix, with the entries of e_x on the main diagonal and zeros elsewhere.

Any physical sensor can have a number of filters (or channels) with different sensitivities R_k(λ). For the human visual system and trichromatic imaging systems, k = 1, 2, 3. Thus, the total color response at position x of a system with three sensors is a vector with three entries: ρ(x) = [ρ₁(x), ρ₂(x), ρ₃(x)]^T. In general, the letters R, G, B are used for color responses of sensors with peak sensitivities in the red (long), green (medium), or blue (short) wavelength part of the visible spectrum, respectively; X, Y, Z when the sensors correspond to the CIE color matching functions; and L, M, S when the sensors correspond to the cone fundamentals.
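
The algebraic form ρ_k = r_k^T diag(e) s can be sketched with plain lists; note that multiplying by diag(e) reduces to element-wise multiplication. The three-sample vectors and identity-like sensitivities below are toy values:

```python
def diag_product(e, s):
    """diag(e) @ s is element-wise multiplication: the color signal c."""
    return [ei * si for ei, si in zip(e, s)]

def response(r, e, s):
    """rho_k = r_k^T c, with c = diag(e) s."""
    c = diag_product(e, s)
    return sum(ri * ci for ri, ci in zip(r, c))

e = [1.0, 2.0, 1.0]                       # toy 3-sample illuminant
s = [0.5, 0.5, 1.0]                       # toy reflectance
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]     # toy sensitivities (3 sensors)
print([response(r, e, s) for r in R])     # [0.5, 1.0, 1.0]
```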

The physical image formation model of Equations 9 through 14 is a simplified model that does not take into account any physical illuminant, surface, or sensor properties. In fact, it is applicable only to a “flat world” with no shadows, a single source illuminant, no surface reflectance interactions, and Lambertian surfaces that reflect incoming light equally in all directions. 4

4 Such scene arrangements are usually called Mondrians, so named by Edwin Land to describe his experimental setup that resembled the paintings of Dutch artist Piet Mondrian.

 
