
Photographic Optics - Photographic Imaging, Photographic Lenses, Photographic Filters


TERRANCE KESSLER
Laboratory for Laser Energetics, University of Rochester

Optics is the branch of physics that deals with the generation, propagation, and detection of light. Electromagnetic (EM) wave and quantum mechanical photon theories coexist to explain the phenomena of light. The wave theory of light is derived from EM fields and is generally sufficient for imaging and photography. Physical optics applies EM theory directly, whereas geometrical optics uses the more intuitive concept of light rays to evaluate and illustrate the performance of an optical system.

A transverse electromagnetic (TEM) wave contains electric and magnetic vectors that are mutually perpendicular to the direction of light travel, or the propagation vector, as shown in Figure 50. The EM spectrum extends from long wavelength (low frequency) radio waves to short wavelength (high frequency) cosmic rays. The visible part of the EM spectrum is generally considered to extend from 400 to 700 nm, although there is some variation in this range among individuals depending on the brightness and viewing conditions. Although the human eye does respond to near-ultraviolet and near-infrared light, useful image formation is rare at these wavelengths. Classically, light travels as rays in straight lines (rectilinear propagation), at a constant velocity in vacuum of approximately 3 × 10⁸ m/second, or 186,000 miles/second. The velocity is reduced in other media, depending on their density. The characteristic distance between two adjacent peaks of the amplitude is called the wavelength and is commonly measured in micrometers (µm), nanometers (nm), or Angstroms (Å). Light may also be characterized by its frequency. For example, light of wavelength 600 nm has a frequency of 5 × 10¹⁴ cycles per second, or Hertz (Hz).

The phase of the waveform represents its position with respect to an arbitrary temporal or spatial reference and is usually expressed in terms of an angle between 0 and 2π radians. A wavefront is a two-dimensional surface of constant phase. A converging wavefront represents a focusing beam whereas a diverging wavefront represents an expanding beam. A field that oscillates in a particular plane is called plane polarized light. Non-polarized light oscillates in all possible directions while circularly polarized light contains an electric field that oscillates within a continuously rotating plane. The component of light with which most materials interact is the electric field. For square-law detectors such as photographic film and electronic sensors, light intensity is proportional to the square of the electric field amplitude, referred to as the electric field intensity.

Light energy is conserved during interaction with optical elements according to the equation,

A + R + T = 1

where A, R, and T are the coefficients of absorption, reflectance, and transmittance, respectively. Light incident at a semi-transparent surface is partially reflected at the surface, partially absorbed, and partially transmitted, undergoing refraction at this interface between two optical media. The transmitted ray in the second medium slows down if the medium is denser. The ratio of the speed of light in a vacuum to that in the medium is the refractive index of the medium. The refractive index varies with wavelength, causing chromatic aberration in optical systems. A birefringent material has a refractive index that is dependent on the polarization of light, while a non-linear material has an irradiance dependent refractive index.

Refraction takes place because the velocity of light varies according to the density of the media through which it passes. The angle of incidence is defined as the angle between a ray of light incident on a surface and the normal to the surface at that point. The angle of reflection is defined as the angle between a reflected ray of light as it leaves a surface and the normal to the surface at that point. The angle of refraction is defined as the angle between a deviated ray of light as it leaves a surface and the normal to the surface at that point. The relationship between the angle of incidence (θi), the angle of refraction (θr), and the refractive indices n1 and n2 of the two media is given by Snell's law of refraction:

n1(λ) sin(θi) = n2(λ) sin(θr)

Refraction is toward the normal to the surface when the rays enter a denser medium. Shorter wavelengths slow down more, and thus refract by a greater amount. This phenomenon is called refractive dispersion. When light travels from a denser material to a less dense material it may fully reflect back into the denser medium. This phenomenon is called total internal reflection and occurs when the incident angle reaches the critical angle given by θc = sin⁻¹(n2/n1), where n1 is the index of the denser incident medium. For a glass-air interface the critical angle is typically greater than 40 degrees (about 41.8 degrees for n = 1.5). Total internal reflection is used instead of coated surfaces in certain prisms as well as in fiber-optic cables.
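The refraction and critical-angle relations lend themselves to a short numeric sketch. The following Python fragment (the function names are ours, chosen for illustration) reproduces the glass-air figures quoted above:

```python
import math

def refraction_angle(n1, n2, theta_i_deg):
    """Angle of refraction from Snell's law: n1*sin(theta_i) = n2*sin(theta_r).
    Returns None when the ray is totally internally reflected."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    if abs(s) > 1.0:
        return None  # beyond the critical angle: total internal reflection
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Critical angle for light going from a denser medium n1 into n2 (n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

# Glass (n = 1.5) to air: critical angle is about 41.8 degrees.
print(critical_angle(1.5, 1.0))        # ~41.8
print(refraction_angle(1.0, 1.5, 30))  # air into glass: ~19.5 degrees, bent toward the normal
print(refraction_angle(1.5, 1.0, 45))  # glass into air at 45 degrees: None (totally reflected)
```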

A glass window with plane parallel sides does not change the direction of an oblique incident ray, but the emergent ray is laterally displaced. When placed in a lens system, this can cause image aberrations or a focus shift. A prism is a block of glass with a triangular cross-section. Due to material dispersion, a prism spectrally separates white light as shown in Figure 51.

Reflection (or refraction) that preserves the directionality of a beam of light is called specular reflection (or refraction). We commonly refer to this as mirror-like reflection of light from a smooth surface such as polished metal, glass, plastic, or fluid. In contrast, the reflection (or refraction) of light into many directions is called diffuse reflection (or refraction). A Lambertian material produces a perfectly diffuse reflection, which has the same luminance regardless of the angle from which it is viewed. Properly prepared surfaces of Teflon are very close to Lambertian, while matte white paper is close enough for calibration in the field.

Certain properties of light are explained only by considering light as a wave motion. When two coherent waves of equal magnitude overlap they interfere, and the resultant intensity may vary from zero, for complete destructive interference when the two waves are exactly 180 degrees out of phase, to four times the intensity of either wave alone, for constructive interference when they are exactly in phase. Coherence exists between two waves when their relative phase is constant.

An uncoated piece of glass reflects a certain percentage of incident light depending on the change in refractive index between the glass and the surrounding medium. The surface reflectivity can be derived from the basic Fresnel equations. Near normal incidence, the reflectivity of uncoated glass is given by:

R = [(ns – 1)/(ns + 1)]²

where the subscript s denotes substrate. For a glass with an index of 1.5, approximately 4 percent of the light is reflected at each interface.

For applications where this light loss is undesirable, an antireflection (AR) coating is deposited on the surface of the glass to prevent reflection (Figure 52). The optimum coating causes the reflections from the two interfaces to be equal in magnitude and opposite in sign. Destructive interference occurs in reflection and thus light fully transmits through the interface. The condition for equal magnitude is given by:

R = [(ns – nc²)/(ns + nc²)]²

The overall reflectivity R is zero when ns = nc². For a glass of index 1.51, a suitable material is magnesium fluoride with index 1.38, near the ideal value of √1.51 ≈ 1.23. The condition for opposite sign is met when the individual reflections are out of phase by 180 degrees, or π radians. This corresponds to an optical path difference of λ/2 between the two reflected components. Since the component reflecting from the coating-to-glass interface propagates through the coating twice, the sign condition is given by:

2(nc t) = λ/2

Hence, the optical thickness (nc t) is λ/4, and this layer is referred to as a quarter-wave coating. This AR coating is optimum for a narrow range of wavelengths. Additional layers can be used to increase the range of wavelengths for which the AR coating is useful. Modern coating equipment, techniques, and materials, plus fast digital computers, allow extension of single coatings to stacks of separate coatings. By suitable choice of the number, order, thickness, and refractive indices of individual coatings, the spectral transmittance may be selectively enhanced to a value greater than 0.99 over a wide region of the visible spectrum.
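The quarter-wave condition reduces to a short worked example. The following Python sketch (function names are illustrative; the 550 nm design wavelength is an assumed value) reproduces the crown glass and magnesium fluoride figures quoted above:

```python
import math

def uncoated_reflectivity(n_s):
    """Fresnel reflectivity of a bare surface at normal incidence: R = ((ns - 1)/(ns + 1))**2."""
    return ((n_s - 1) / (n_s + 1)) ** 2

def coated_reflectivity(n_s, n_c):
    """Residual reflectivity of a single quarter-wave layer: R = ((ns - nc**2)/(ns + nc**2))**2."""
    return ((n_s - n_c**2) / (n_s + n_c**2)) ** 2

def quarter_wave_thickness_nm(n_c, wavelength_nm):
    """Physical thickness t such that the optical thickness nc*t equals wavelength/4."""
    return wavelength_nm / (4 * n_c)

n_s = 1.51                                       # crown glass
print(uncoated_reflectivity(n_s))                # ~0.041 -> about 4 percent per surface
print(coated_reflectivity(n_s, 1.38))            # MgF2 coating: residual ~0.013
print(coated_reflectivity(n_s, math.sqrt(n_s)))  # ideal index sqrt(1.51) ~ 1.23: exactly 0
print(quarter_wave_thickness_nm(1.38, 550))      # ~99.6 nm for a 550 nm design wavelength
```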

Coatings with 20 to 30 layers are routinely deposited for high reflectivity applications such as mirrors, filters, polarizers, and beam splitters. A beam splitter divides incident light or radiation into two or more portions based on wavelength or polarization. The two examples shown below would add aberrations to a normal imaging system (Figure 53a, b). However, thin pellicle beam splitters have virtually no effect on image formation when the transmitted component is used for image recording.

Diffraction is the tendency for light to spread at the edge of an aperture rather than travel in a straight line. The branch of physics that deals with this phenomenon, and that can account for arbitrary perturbations to a plane wavefront, i.e., general aberrations, is called physical optics. A perfect, aberration-free lens still suffers from diffraction, which causes the image of an object point to appear as the Airy pattern of alternating dark and light rings produced by interference effects. The Airy disc diameter (d), measured to the first dark ring of the pattern, is given by:

d = 2.44λ f/D = 2.44λ F#

where f is the focal length, D is the effective aperture diameter, and F# is the f-number of the lens. For light of wavelength λ = 500 nm, d is approximately 10 µm for an f/8 lens (Figure 54).

When the focusing or imaging performance of a lens is limited only by the effects of diffraction, the lens is said to be diffraction-limited. Diffraction-limited lenses of large aperture, corrected for use with monochromatic light, are used in microlithography for the production of integrated circuits and in laser experiments where ultra-high intensity beams are focused to micron-sized spots. The resolving power (RP) or resolution of a lens is directly related to the size of its focal spot. For imaging applications, the Rayleigh criterion is often used to define resolution. This criterion is met when the respective Airy patterns from two closely spaced point sources are separated such that the central peak of one is superimposed on the first dark ring (minimum) of the other. The expression for their separation distance is:

r = 1.22λ f/D = 1.22λ F#

To continue with the example above, a lens with an Airy disc diameter of 10 µm has r = 5 µm, giving a resolution of 1/r = 200 line-pairs per millimeter.
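Both formulas reduce to simple arithmetic; the sketch below, in Python with illustrative function names, reproduces the f/8, 500 nm example:

```python
def airy_disc_diameter_um(wavelength_um, f_number):
    """Diameter of the first dark ring of the Airy pattern: d = 2.44 * lambda * F#."""
    return 2.44 * wavelength_um * f_number

def rayleigh_separation_um(wavelength_um, f_number):
    """Minimum resolvable separation: r = 1.22 * lambda * F#."""
    return 1.22 * wavelength_um * f_number

wavelength = 0.5   # 500 nm expressed in micrometers
f_number = 8
d = airy_disc_diameter_um(wavelength, f_number)   # ~9.8 um, i.e. roughly 10 um
r = rayleigh_separation_um(wavelength, f_number)  # ~4.9 um
print(d, r, 1000 / r)   # 1000/r gives ~200 line-pairs per millimeter
```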

Holography is by its very nature a high-resolution, diffraction-limited imaging technique. A hologram captures complete wavefront information from the object recorded. Whereas photography records only the irradiance of a scene, holography records both wavefront and irradiance information. A three-dimensional scene is recorded on photographic material by interference between a reference light beam and one reflected or transmitted by the subject. Holograms are recorded on materials with resolving powers exceeding 10,000 line-pairs per millimeter. When viewed under the proper conditions, reconstructed images are presented to the left and right eyes, producing the appearance of three dimensions. Holograms can provide imagery in which the viewer can change the viewing angle and see the subject from different perspectives, thus providing significant parallax and sometimes revealing hidden information. Holograms are actually complex diffraction gratings that bend and focus light to form three-dimensional images. The grating equation

d[sin(θd) – sin(θi)] = mλ

describes the relationship between the angle of diffraction (θd), the angle of incidence (θi), the wavelength λ, and the groove spacing d. It is important to note that long wavelengths are diffracted more than short wavelengths, which is opposite to the dispersion from a prism. Talbot imaging is the name given to the periodic self-imaging exhibited by grating arrays. It is a very useful concept in understanding the relationship between diffraction and image formation.
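A minimal sketch of the grating equation in Python (the 600 line/mm groove density is an assumed illustrative value, not from the text) shows red light diffracted farther than blue:

```python
import math

def diffraction_angle_deg(groove_spacing_um, wavelength_um, theta_i_deg, order=1):
    """Solve the grating equation d*(sin(theta_d) - sin(theta_i)) = m*lambda for theta_d.
    Returns None when the requested order does not propagate."""
    s = math.sin(math.radians(theta_i_deg)) + order * wavelength_um / groove_spacing_um
    if abs(s) > 1.0:
        return None  # evanescent order
    return math.degrees(math.asin(s))

# A 600 line/mm grating has a groove spacing of 1000/600 ~ 1.667 um.
d = 1000 / 600
print(diffraction_angle_deg(d, 0.45, 0))  # blue at normal incidence: ~15.7 degrees
print(diffraction_angle_deg(d, 0.65, 0))  # red is diffracted more: ~23.0 degrees
```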

Photographic Imaging

The basic properties of a lens system can be determined using a geometrical construct involving rays of light. This is referred to as geometrical optics. The first order properties of an optical system can be calculated using the small-angle approximation. Paraxial optics involves rays passing through the central zone of a lens at angles of incidence small enough that the sine and tangent of an angle are taken as equal to the angle itself. Angles are measured in radians, a dimensionless measure derived from the ratio of two distances having the same units. Typically, paraxial imaging involves angles less than 15 degrees and provides accurate estimates with errors of only a few percent.

A thin, positive lens causes parallel incident light to converge toward a plane of focus, called its focal plane, which is the image plane for objects at a distance approaching infinity. Although there are two refracting surfaces for a lens, there exists an apparent plane of refraction for a ray of light called a principal plane. The distance from this equivalent refracting plane to the focal plane is called the focal length (f). The lens power is defined as the reciprocal of the focal length and is measured in inverse meters or diopters. Power is useful because it is additive for a series of closely spaced thin lenses. The f-number of a lens is a ratio of its focal length to its diameter as measured at its principal plane. Since image illuminance is inversely proportional to the square of the f-number, it is used as a metric for exposure control. The effective f-number is obtained by dividing the image distance by the effective aperture diameter.

A modern photographic lens consists of a number of separated elements or groups of elements, the physical length of which can be greater than its focal length. This multielement, compound lens is often referred to as a thick lens. The term equivalent focal length ( f ) is often used to denote the composite focal length of such a system. For example, for two closely spaced thin lenses of focal lengths f 1 and f 2 , the equivalent focal length is given by

1/f = 1/f1 + 1/f2 – d/(f1 f2)

where d is the axial separation between lenses. For a thick lens with more than two elements, the analytical equations are more cumbersome. Thin lens formulas can be used with thick lenses if conjugate distances are measured from the first and second principal planes. These are equivalent refraction planes that have unit transverse magnification as shown by the upper ray in Figure 55.
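A minimal sketch of the two-lens formula (the focal lengths and separations below are assumed example values, and the function name is ours):

```python
def equivalent_focal_length(f1, f2, d=0.0):
    """Composite focal length of two thin lenses separated by d:
    1/f = 1/f1 + 1/f2 - d/(f1*f2)."""
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))

print(equivalent_focal_length(100, 100))       # in contact: 50 mm
print(equivalent_focal_length(100, 100, 20))   # separated by 20 mm: ~55.6 mm
print(equivalent_focal_length(100, -50, 60))   # positive front, negative rear: 500 mm, telephoto-like
```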

For thin lenses in air the principal planes are coincident. However, for thick lenses in air the principal planes are separated and coincident with the nodal planes. An oblique and non-refracted ray directed at a point on the first nodal plane appears to emerge on a parallel path from a point on the second nodal plane. A ray of light entering the first plane emerges from the second at the same height above the axis. Although the ray of light can actually change direction only at the lens surfaces, it appears to travel in a straight line toward the object nodal point and to leave in a straight line away from the image nodal point. The entering and departing rays will be parallel but displaced. When there is the same medium such as air in both the object and image spaces, then the nodal and principal planes coincide. Focal length and image conjugate distances are measured from the rear nodal point, and object distances are measured to the front nodal point. A useful property of a nodal point is that rotation of a lens about its vertical axis does not shift the image of a distant object.

The focal plane is the closest plane to the lens in which an image can be formed. As the object moves in from infinity, the image moves away from the lens and its focal plane. The relationship between the conjugate distances and the lens focal length is given by the Gaussian and Newtonian equations as follows:

1/u + 1/v = 1/f (Gaussian)

xy = f² (Newtonian)
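A short Python sketch of the Gaussian equation, assuming the common photographic sign convention in which object distance u, image distance v, and focal length f are all positive for a real object and image (function names are ours):

```python
def image_distance(f, u):
    """Gaussian form 1/u + 1/v = 1/f solved for the image conjugate v."""
    return 1.0 / (1.0 / f - 1.0 / u)

def magnification(u, v):
    """Transverse magnification m = v/u."""
    return v / u

f = 50.0                            # a 50 mm lens
for u in (10000.0, 500.0, 100.0):   # object distances in mm
    v = image_distance(f, u)
    print(u, round(v, 2), round(magnification(u, v), 3))
# 10 m object:  v ~ 50.25 mm (essentially the focal plane), m ~ 0.005
# 500 mm object: v ~ 55.56 mm, m ~ 0.111
# 100 mm object (at 2f): v = 100 mm, m = 1 (unit magnification)
```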

A real image is a reconstitution of an object in which the divergent rays from each object point intersect in a corresponding conjugate point. The pattern of points formed constitutes a real image in space that can be made visible by a screen placed in this plane or be recorded by a square-law detector such as photographic film or an electronic sensor. Real images are not directly observed by the human eye. On the other hand, a virtual image can be viewed directly but cannot be recorded. An object inside the front principal focus will give a virtual image, as formed by a simple magnifier. Virtual images are usually upright, not reversed, and directly observed in microscopes and telescopes.

Image magnification is defined as the size of the image relative to the size of the object. For most photographic lenses, the image is much smaller than the subject and the magnification is less than unity. However, magnification can be greater than unity for applications in photomicrography. Three types of magnification are useful to define: lateral or transverse (m), longitudinal (L), and angular (A), with the relationship AL = m. As defined, m = I/O = v/u, and L = m². Transverse magnification (m) is most commonly used in photography; however, for afocal instruments such as telescopes, angular magnification is more often used.

The aperture stop is the physical diaphragm in an optical instrument that limits the area of the lens through which light can pass to the image plane. A pupil is the virtual image of the aperture stop. When viewing from the front of the lens the entrance pupil is seen, and from the rear the exit pupil is seen. The aperture stop may be within or outside the elements of the optical system. For example, a telecentric lens has an aperture stop located at the focal plane of the lens so that either the entrance or the exit pupil is located at infinity. The image field is then limited to the physical diameter of the lens. This arrangement is used when it is necessary that the image is formed by parallel rays rather than a divergent cone. This provides error reduction when using imaging to accurately measure object size. A field stop is the physical diaphragm in an optical system that determines the extent of the object that is captured in the image. The visual appearance of a darkening of an image toward the edges or periphery is called vignetting. Natural vignetting by a lens is due to the cos⁴θ law of illumination. Mechanical vignetting can occur if a stop is incorrectly positioned between the elements of a thick lens.

A perfect lens cannot have an infinitely small focal spot since it is limited by diffraction. Its resolution increases for smaller f-numbers and shorter wavelengths. For all but the most expensive lenses, a point is not imaged as an Airy pattern but rather is seen as a blurred patch of light. Most photographic lenses are limited by aberrations that lower the lens performance below the diffraction limit. Even a perfectly manufactured lens has aberrations due to the variation in refraction over its spherical surfaces. Diffraction-limited imaging does not occur naturally and is achieved only through tight control of design and fabrication. The magnitude of an aberration usually increases with an increased angle of view, causing a greater obliquity factor. A surface-by-surface ray trace through a lens depends on repeated application of the laws of refraction. An expansion of sin(θ) in a power series yields:

sin(θ) = θ – θ³/3! + θ⁵/5! – …

Keeping only the first term yields the first order or paraxial approximation, where sin(θ) ≈ θ (radians). For instances when the ray angles cannot be assumed small, a real-ray trace is carried out without approximation to obtain a more accurate realization of image performance. However, paraxial ray tracing provides enough accuracy to calculate five monochromatic aberrations, called the Seidel aberrations, and two polychromatic aberrations.
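The size of the error incurred by truncating the series is easy to tabulate. The sketch below compares sin θ with the one-term (paraxial) and three-term truncations:

```python
import math

# Compare sin(theta) with its paraxial (first-order) approximation theta,
# and with the three-term series theta - theta**3/6 + theta**5/120.
for deg in (5, 10, 15, 30):
    t = math.radians(deg)
    exact = math.sin(t)
    paraxial = t
    series = t - t**3 / 6 + t**5 / 120
    print(deg,
          round(100 * (paraxial - exact) / exact, 3),   # percent error, paraxial
          round(100 * (series - exact) / exact, 6))     # percent error, three terms
# At 15 degrees the paraxial approximation errs by a bit over 1 percent,
# consistent with "errors of only a few percent" quoted above.
```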

The Seidel aberrations, caused by refraction at spherical surfaces, are spherical aberration, coma, astigmatism, curvature of field, and distortion. Since aberrations depend on curvatures, thicknesses, refractive index, spacing of elements, and stop position, it is possible to use multi-lens compensation. Aberrations in one element or group of elements are reduced by equal and opposite aberrations in a subsequent group.

Spherical aberration (SA) exists when the focal length of a lens is a function of the radial distance from the axis. Figure 56 shows parallel incident rays progressively brought to a focus closer to the lens as they are refracted by zones farther from the axis. A lens suffering from SA produces an overall softened image and shows a focus shift as it is stopped down. Coma is the counterpart of spherical aberration for off-axis object points. Its name is derived from the comet-like shape of the image point. An aplanatic lens is corrected for both spherical aberration and coma.

Astigmatism is illustrated in Figure 57. The image of an off-axis point from the vertical axis contains two lines separated along the direction of propagation.

The horizontal line nearer to the lens is tangential to the image circle and is called the tangential (meridional) focus. The line farther from the lens is radial to the optical axis and is called the sagittal (radial) focus. Similarly, astigmatism causes an off-axis point from the horizontal axis to focus to a vertically oriented tangential focus. The line separation is a measure of the sign and magnitude of astigmatism, and the position of best focus, or circle of least confusion, lies between them. Such a lens is an astigmat, meaning it does not image a point as a point. An anamorphic lens system also produces two orthogonal line foci; however, this is an on-axis effect caused by cylindrical power along one of the axes. Anamorphic lenses are used to obtain different magnifications along two axes for the purpose of mapping a square image format to a rectangular format.

A lens that yields a distortion-free image is said to be orthoscopic. Variation in the shape and proportions of the image, an aberration referred to as curvilinear distortion, is caused by a two-dimensional variation in image magnification. Distortion does not affect the sharpness of an image, only its shape. When straight lines in the subject are rendered as curving inward (outward), the aberration is called barrel (pincushion) distortion, as shown in Figure 58.

Curvature of field, known as Petzval curvature, exists when off-axis object points are imaged to a curved surface. In practice, the focal plane and image planes are often curved and only approximate a plane close to the optical axis. The amount of Petzval curvature depends on the net power and refractive indices of the individual lens elements. Although high resolution can be maintained over the field by curving the recording plane, this is not a reasonable option with electronic sensors.

Geometric distortion is due to increasing obliquity through a lens and is not the result of lens design limitations (Figure 59). Due solely to viewing angle, the peripheral detail of an object with depth suffers increasing elongation that is oriented along the radial direction. The image elongation is proportional to the secant of the field angle. This effect is not present for a flat object located normal to the optical axis.

Similarly, perspective distortion is not the result of lens aberrations. For correct perspective, the viewing distance of a photograph is related to the focal length of the lens and the print magnification. Viewing the final print at a distance inconsistent with the actual recording conjugates causes this distortion.

Two polychromatic aberrations are also included in paraxial lens evaluation. Refractive dispersion causes the focal length of a simple lens to increase with wavelength. This is shown as a separation of focus along the axis for a positive lens and is called axial or longitudinal chromatic aberration.

Lateral or transverse chromatic aberration is an off-axis aberration, the effect of which increases with field angle. Correction is accomplished by a combination of suitable positive and negative lenses of different glasses. The term achromatic is applied to a lens corrected for longitudinal chromatic aberration to bring light rays of two colors (wavelengths) in the visible spectrum to the same focus. A lens optimized for three wavelengths is called apochromatic.

Whether a lens is diffraction-limited or exhibits some amount of uncorrected aberration, its resolving power (RP) is directly related to its ability to resolve fine detail. The Rayleigh criterion is often used as a basis for lens evaluation. According to this criterion, two adjacent point sources are just resolved if their Airy disc images overlap to the extent of their radii, hence

RP = 1/(1.22λ F#)

given in units of line-pairs per millimeter (lpm). For example, an Airy disc of 10 µm corresponds to an RP of 200 lpm. Only the very best custom lenses ever achieve this level of performance.

A useful means to evaluate image performance is to determine the ability of an imaging system to resolve separation between closely spaced lines in the image. Resolving power is determined using a test target consisting of sets of alternating light and dark bars. The set of bars that can be distinguished in the image, with a specified contrast, is used to estimate the resolving power of the imaging system.

A more sophisticated means to evaluate image performance involves a frequency domain representation of the image. The Fourier transform is a mathematical operation whereby an image is resolved into a set of sinusoidal components referred to as spatial frequencies in units of cycles per millimeter. The image is then completely represented by coefficients indicating the magnitude and phase of each of these components. The modulation (m) of a pattern is defined by

m = (Imax – Imin)/(Imax + Imin)

The change in modulation is given by the modulation transfer factor M = mI/mO for each frequency separately, where the subscripts I and O denote image and object, respectively. The modulation transfer function (MTF) is a curve of M plotted as a function of spatial frequency. MTF theory is applicable to every component in an incoherent imaging chain, and the individual responses can be cascaded together to give the overall response of the system. A single MTF curve does not fully describe lens performance. Many curves should be generated at different apertures, wavelengths, focus settings, conjugates, and orientations for a thorough evaluation. MTF curves can be interpreted to provide both image RP and sharpness. For example, depending on the shape of the MTF curve, a lens may be characterized as high image contrast but limited RP, or moderate contrast but high RP.
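A minimal sketch of the modulation and cascading calculations (the component MTF values below are hypothetical, purely to illustrate the frequency-by-frequency product):

```python
import math

def modulation(i_max, i_min):
    """Modulation m = (Imax - Imin)/(Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def cascade_mtf(*components):
    """System MTF of an incoherent imaging chain: the product of the
    component transfer factors at each spatial frequency."""
    return [math.prod(vals) for vals in zip(*components)]

# Hypothetical transfer factors at 10, 20, 40, and 80 cycles/mm:
lens   = [0.95, 0.85, 0.60, 0.25]
sensor = [0.98, 0.90, 0.70, 0.30]
print(cascade_mtf(lens, sensor))  # [0.931, 0.765, 0.42, 0.075]

# Transfer factor at one frequency: image modulation over object modulation.
m_object = modulation(1.0, 0.0)   # test target at full contrast, m = 1
m_image  = modulation(0.8, 0.2)   # measured in the image, m = 0.6
print(m_image / m_object)         # M = 0.6 at this frequency
```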

Photographic Lenses

A typical lens design begins with a suitable configuration of elements based on over a century of documented experiences. A lens is designed using the variables of glass type, number of elements, element thickness and curvature, element separation, and position of the aperture stop. In addition, the modern lens designer is able to use aspheric surfaces, diffractive lenses, and gradient lenses to obtain superior lens performance with fewer optical elements.

Modern computers can perform rapid and extensive ray tracing to determine the imaging capability during the evolution of a lens design. Lens design software packages, which contain libraries of glass types and useful starting designs, have become very sophisticated now that gigahertz-class processors are commonplace. By repeated application of the refraction equation at each interface, and the transfer equation to reach the next interface, the passage of a ray of light can be traced through a lens. Computers are capable of rapidly tracing millions of rays through many surfaces and can automatically evaluate the MTF and point spread function after each trace. Modern lens design software uses interactive displays to provide the designer with a visual understanding of the layout and performance of the lens system in real time.

Familiarity with simple refractive lenses aids a lens designer in choosing the optimum starting point for a new design. The refractive lens types can be categorized as either positive or negative. The positive lens is thicker in the center and is referred to as a converging lens since it causes a wavefront from an object at infinity to focus. This lens is needed to produce an image that is recordable. The negative lens is thinner in the center and is referred to as a diverging lens since it causes the wavefront of an object at infinity to diverge. This lens produces a non-recordable virtual image. Imaging lenses may contain both positive and negative lenses; however, the overall power of the compound lens must be positive to form a real image.

The lens maker’s formula, for a thin lens in air, determines the focal length from the index of refraction (n) of the substrate material and the radii of curvature (R1, R2) of the two surfaces: 1/f = (n – 1)(1/R1 – 1/R2).

The basic positive (negative) lens has one surface convex (concave) while the other surface is flat. A biconvex (biconcave) lens is a single element with both of its faces curved outward (inward) from the center, so that the lens is thicker (thinner) at the center than at the edges. The meniscus lens is a single element with both spherical surfaces curved in the same direction such that one is concave and the other is convex. Depending on the relative radii of curvature, the net power of a lens will be either positive or negative. Similarly, spherical mirrors may be concave or convex and have different imaging properties than lenses. Convex mirrors have a negative power while concave mirrors focus light. A mirror is assigned a refractive index (n) equal to -1 and the lens maker’s formula becomes f = R/2 for reflective imaging.
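A short sketch of the lens maker’s formula and its mirror limit, assuming the usual convention that a radius is positive when its center of curvature lies to the right of the surface (the example radii are ours):

```python
def thin_lens_focal_length(n, r1, r2):
    """Lens maker's formula for a thin lens in air:
    1/f = (n - 1) * (1/R1 - 1/R2)."""
    return 1.0 / ((n - 1) * (1.0 / r1 - 1.0 / r2))

def mirror_focal_length(r):
    """Spherical mirror: the formula with n = -1 reduces to f = R/2."""
    return r / 2.0

print(thin_lens_focal_length(1.5, 100, -100))          # symmetric biconvex: f = 100 mm
print(thin_lens_focal_length(1.5, 100, float('inf')))  # plano-convex: f = 200 mm
print(thin_lens_focal_length(1.5, -100, 100))          # biconcave: f = -100 mm (diverging)
print(mirror_focal_length(400))                        # mirror with R = 400 mm: f = 200 mm
```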

The schematic in Figure 63 illustrates the relationships of some basic lens configurations including the (a) achromatic doublet, (b) triplet, (c) symmetrical, (d) Petzval, (e) Tessar, (f) zoom variator, (g) telephoto, (h) retrofocus, (i) Double Gauss, (j) rapid rectilinear, (k) fisheye, (l) double anastigmat, (m) Celor (process), (n) Plasmat, and (o) quasi-symmetrical lens. The determination of the focal length for these lens configurations involves equations that are more complex than the lens maker’s formula; however, this is easily accomplished with commercially available lens design software.

The doublet at the top of the chart can be used as a simple magnifier that allows close and detailed inspection of a subject or image. With an object at the front focal plane, the rays of light leave the magnifier, enter the eye traveling parallel, and an enlarged and erect virtual image is seen with normal eye accommodation. The power is commonly given in terms of the size of the image viewed through the magnifier from a distance of 250 mm, which is taken as the average reading distance. The magnification is determined by dividing this distance by the focal length of the magnifier in millimeters. For example, a lens with a 25 mm focal length has a 10x magnification.

The early doublet has evolved along several branches to meet the requirements of a wide range of applications in photographic imaging. A symmetrical lens configuration uses two similar sets of elements arranged on either side of the aperture stop. The aberrations of one group can be almost canceled out by the equal and opposite aberrations of the other, provided the magnification is close to unity. Symmetry is necessarily broken for lenses operating at other magnifications.

Since image size is proportional to focal length, an increase in focal length gives a larger image. Depending on the focal length and maximum aperture required, lens configurations such as the achromatic doublet, Petzval, symmetrical, and Double Gauss lenses have all been used to increase image size.

The telephoto is a compact configuration to achieve long focal length. A negative component is placed behind a front positive component of long focal length, which displaces the principal planes in front of the lens. A teleconverter is a simple optical attachment that is placed between the camera lens and body to increase the focal length. It is a group of elements of net negative power and operates on the principle of the telephoto lens.

The retrofocus lens, or reverse telephoto lens, uses the telephoto concept in reverse to increase the field of view. This is the basic concept behind many wide angle lenses having significantly greater covering power than a normal lens of equal focal length, thereby permitting the use of shorter focal length lenses to obtain a larger angle of view. The fisheye lens offers a very large angle of coverage, and orthoscopic image formation can be maintained for field angles approaching 120 degrees. If barrel distortion is permissible, then the angle can be increased beyond 180 degrees.

The zoom lens has a focal length that can be varied continuously between fixed limits while the image stays in relatively sharp focus by means of compensation. With optical compensation the moving elements are coupled together and travel as one unit, while with mechanical compensation two or more groups move at different rates and in different directions. The latter generally provides better compensation, but high mechanical precision is needed for the control linkages. The zoom lens is arranged to keep the f-number constant as its focal length is altered by controlling the size and location of the exit pupil of the system. The variable focus lens is the alternative approach to changing magnification, in which sharp focus drifts as the focal length is changed, necessitating a focus check after each change. The f-number and image illuminance also change as the focal length is changed.

A macro lens is optimized for shorter conjugates and used for close-up and life-size photography where the magnification approaches or exceeds unity. A close-up attachment is a positive supplementary lens used to reduce the effective focal length of the lens on a camera having a limited focusing range. This allows the camera to be moved closer to a portrait subject to obtain a larger image. A soft focus lens is designed primarily for portrait photography. Spherical aberration can be used to form a diffuse image that is characterized by a sharp core with a halo. The term bokeh refers to the quality of the blur obtained outside the plane of focus. A desirable bokeh lends artistic character to the out-of-focus regions of an image.

Several additional lenses are important to the field of photography. Enlarger lenses are used to produce large prints while reduction lenses are used to generate very small features in photosensitive materials. A field lens is usually a single-element lens that is located at or near the focal plane of another lens. Field lenses have no effect on the focal length of the prime lens in this position. Their functions include flattening
the image plane and redirecting peripheral rays in the image plane to avoid vignetting. Copying, enlarger, and process lenses require very flat fields. An afocal lens contains two groups of elements that are separated by the sum of their focal lengths, placing the principal plane at infinity. The incident and exiting rays are both parallel, but the diameter of the beam is changed. The magnifying power of the system is given by the ratio of the focal lengths of the two components. Keplerian and Galilean telescopes are examples of afocal lens systems.

Modern lenses have evolved significantly with improvements in glass types, anti-reflection coatings, and the development of aspheric lenses. The aspheric lens has one or more non-spherical surfaces that reduce spherical aberration. Advanced developments in computer-controlled generation and molding of aspheric surfaces have resulted in lenses with fewer elements and greater performance. Advancements in lithographically formed optics have provided the lens designer with a hybrid lens that forms an image through both refraction and diffraction of light. These new lenses contain one or more elements that have a patterned micro-structure on one of their surfaces. Chromatic aberration can be significantly reduced with a hybrid lens because the dispersions from the refractive and diffractive surfaces can be opposite in sign and made to compensate.

Photographic Filters

Filters are optical elements that either attenuate or separate light through several means including absorption, obscuration, dispersion, and interference. Filters can also attenuate light by deflection using refraction and diffraction. A filter that is part of an image-forming system must be of high optical quality in terms of the flatness and thickness uniformity. Filters used in illumination systems such as over lamps or in enlargers can be of a lower quality. Filters are available in various forms such as gelatin, cellulose acetate, polyester, plastics or resins, solid glass, gelatin cemented between glass, patterned gratings, and MLD (multilayer dielectric) coated substrates.

The component of incident radiation that is neither reflected nor transmitted is referred to as absorbed light. Absorption occurs when radiation incident on a substance is converted to another form of energy, such as heat, or is transformed into light of another color and re-emitted. Absorption may also produce a chemical or electronic change in the substance, as is the case with photographic emulsions and electronic detectors. The amount of light absorption that occurs in a filter is typically wavelength dependent.

Absorption filters are the most common filters in photography and thus they warrant a physical description. An incident beam of light undergoes multiple reflections as it transmits through a filter as shown in Figure 66. The overall transmittance of the filter is defined as

TF = It/Ii

where Ii is the incident intensity and It is the transmitted intensity. The internal transmittance accounts for absorption and is defined as

TA = Iex/Iin

where Iin is the intensity transmitted by the input surface and Iex is the intensity arriving at the output surface.

For a filter with smooth, uncoated, plane-parallel surfaces, the reflectivity is given by the Fresnel equations. At normal incidence the reflectivity R is polarization independent and given by

R = [(n – 1)/(n + 1)]²

The surface transmittance (Ts) accounts for losses due to reflections at the material interfaces.

The expression for surface transmittance, often used by filter manufacturers, is

Ts(λ) = 2n(λ)/[n(λ)² + 1];

however, this is an approximation, with errors at the fractional percent level for highly absorbing materials. The overall filter transmittance is the product of the internal and surface transmittances.

TF(λ) = TA(λ) × Ts(λ)

Lambert’s law describes the multiplicative operation of filters as

t1/t2 = log[T1A(λ)]/log[T2A(λ)]

For example, doubling the thickness of a filter with 10 percent transmittance yields a more absorbing filter with 1 percent transmittance. The density of a filter is a measure of its ability to attenuate light. Density is defined as the common logarithm of the ratio of the light received by the sample to that transmitted or reflected by the sample; mathematically expressed as

D = log(1/T) = log(Io/I)

where D is the density, Io and I are the incident and output irradiances, respectively, and T is the transmittance. A neutral density filter does not selectively absorb wavelengths in the visible region but absorbs all wavelengths of interest to about the same degree, giving the filter a gray appearance.
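A minimal sketch combining the surface, internal, density, and thickness-scaling relations above (function names are illustrative):

```python
import math

def surface_transmittance(n):
    """Manufacturer's approximation for both surfaces of an uncoated filter:
    Ts = 2n/(n**2 + 1)."""
    return 2 * n / (n**2 + 1)

def overall_transmittance(t_internal, n):
    """TF = TA x Ts: internal (absorption) times surface (reflection) factors."""
    return t_internal * surface_transmittance(n)

def density(t):
    """Density D = log10(1/T)."""
    return math.log10(1.0 / t)

def scale_thickness(t_internal, factor):
    """Lambert's law: internal transmittance at (factor x) thickness is TA**factor."""
    return t_internal ** factor

print(surface_transmittance(1.5))        # ~0.923 for n = 1.5 glass
print(density(0.10))                     # 10 percent transmittance -> D = 1.0
print(scale_thickness(0.10, 2))          # doubled thickness: 1 percent, i.e. D = 2.0
print(overall_transmittance(0.10, 1.5))  # ~0.092 once surface losses are included
```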

A polarizing filter consists of a layer of aligned molecules that transmit light waves polarized in one plane and absorb the orthogonal polarization. Multi-layer dielectric coatings and liquid crystal optics are also used to polarize light. Linearly polarizing filters are used to darken a blue sky by removing the light polarized by scattering and to increase contrast in a scene by removal of surface reflection from glass, wood, and plastics.

Interference filters are MLD coatings designed for constructive and destructive interference at specific wavelength intervals. The transmission band depends on the number of layers, their thicknesses, and chemical composition. A dichroic filter contains an MLD coating to transmit and reflect complementary bandwidths or wavelength ranges. An infrared filter is a visually opaque filter that transmits only a very small percentage of the far red wavelengths, but transmits the near-infrared region up to 1 µm. There are two important examples of MLD filters. A hot, or diathermic, mirror reflects heat and transmits the visible wavelengths, whereas a cold mirror will reflect the visible and transmit the infrared energy.

Certain filters operate specifically on ultraviolet light. For example, the haze filter is a very pale yellow filter used to absorb scattered UV and extreme blue light produced by atmospheric scattering. The barrier filter is used over the camera lens in ultraviolet fluorescence photography to absorb direct and scattered UV radiation so that only the fluorescent wavelengths are recorded.

A color filter is a passive absorption filter that absorbs particular regions of the visible spectrum.

A color filter is named by the wavelength it transmits. Color filters are characterized by their spectral transmittance. In addition, optical density and absorption can be plotted as a function of wavelength, giving a spectral characteristic curve as produced by a spectrophotometer. Beer’s law, functionally related to Lambert’s law, states that the spectral absorption of a substance is proportional to both the concentration of the absorbing material in that substance and the path length through the material. Changes in concentration can be offset by changes in path length through the filter.

Color filters have many applications in photography. For example, safelight filters are meant to provide the darkroom worker with enough light to navigate while absorbing the emitted wavelengths to which photographic material is sensitive. Color compensating (CC) filters are pale filters used to produce small but significant color shifts in the rendering given by color film. CC filters are available in blue, green, red, yellow, magenta, and cyan colors of various optical densities, indicating absorption of the complementary light. They are used for alteration of color balance when exposing color prints, correction of processing shifts, local color effects, reciprocity failure effects, batch differences, and special effects. Color separation filters are a set of three filters—red, green, and blue—each transmitting approximately one-third of the visible spectrum and used when making color separation negatives directly from the scene or from a color transparency.

Contrast filters are used in black and white photography to modify the rendered brightness of selected colors in a scene relative to their surroundings. A tone may be darkened or lightened by using a contrast filter of the complementary or the same color, respectively. Thus, for a red subject on a green background, a red filter would lighten red and darken green, while a green filter would have the reverse effect. Variable contrast filters are used with variable contrast black and white printing paper to vary the image contrast. These filters alter the color of the exposing light to selectively expose the two different emulsions of the paper.

Color conversion filters are blue and orange filters used to modify a light source for which the film is not balanced. For example, to use daylight type color film balanced for 5500K with studio lamps of color temperature 3200K, a deep blue color conversion filter is required. To use artificial light film balanced for 3200K in daylight, an orange conversion filter is required. A light balancing filter is a pale yellow or blue color filter used to give small corrective shifts of color temperature to the light transmitted by a lens when used with color film, especially reversal types. Typical uses are for reducing the blue cast given in open shade on a sunny day or for the variations in color temperature with the season or time of day.

A skylight filter is a pale pink filter used particularly with color reversal film to remove blue color casts in dull weather or subjects in open shade that are illuminated only by blue skylight. Sky filters are used to control the tonal rendering of the sky, especially in the presence of clouds. The contrast between blue sky and white cloud can be progressively increased by the use of yellow, orange, and red filters, which in turn transmit less blue light to which panchromatic film is especially sensitive. A viewing filter, also called the panchromatic vision filter, is a pale purple filter used to show the subject in terms of its approximate brightness values as would be recorded with panchromatic black and white film.

Extending beyond traditional photographic applications, filter technology advanced significantly near the end of the 20th century. The optics community pushed the performance of narrowband MLD filters in the development of dense wavelength division multiplexing (DWDM) for the telecommunications industry. DWDM filters now approach the spectral resolution of standard grating monochromators. Narrowband filters can be used to record light produced by new physical phenomena as scientists explore high-temperature matter in the laboratory and astrophysical events in the universe.
