
The Basis of Remote Sensing


While the results of remote sensing are normally seen as images, it is data that are gathered and displayed. Initially, remote sensing data were captured on photographic film using a special camera. Film, however, was not practical for earth-orbiting satellites, although in a few cases it was used and jettisoned for retrieval on land; in a very few instances the film was recovered in midair. The advent of digital data gathering was a significant development, and it quickly revolutionized the way information was obtained and handled in all branches of remote sensing and beyond.

It is useful to disentangle the stages of data gathering, since at each stage the instrument involved adds its own characteristics to the final output. These characteristics, however, can be quantified and calibrated so that the output is free from instrumental artifacts.

A digital camera is in essence a basic remote sensing system, although it would rarely be considered as such. Light entering the camera is focused by a lens, which absorbs, scatters, and reflects some of the energy, especially in the ultraviolet. The incoming light is filtered at the lens or at the chip to a passband, and the energy passes through a diaphragm, which governs its intensity, and a shutter, which determines the duration and instant of the exposure. The light is focused on a sensor or detector, typically a charge-coupled device (CCD) chip or its equivalent. Incoming photons from the target are converted into electrons by the photoelectric effect, whose efficiency is in turn wavelength-dependent. The photoelectrons are briefly stored and then read out sequentially as a variable current proportional to the number of electrons created in each pixel. The dynamic range of the image, and the non-image noise, are determined in large part by the capacity and nature of the pixels. The resulting stream of digital signals is passed through a processor that reassembles the image in the original spatial arrangement of the array. This is saved as a digital file and can be displayed in a variety of ways. The resulting picture is thus a reconstruction of the luminance and spatial distribution of the elements of the scene, modified by the complete chain of physical, electronic, and data-processing influences from object to image.
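A rough numerical sketch of this photon-to-digital-number chain, using purely illustrative sensor parameters (the quantum efficiency, full-well capacity, and bit depth below are assumed values, not those of any particular instrument), might look like this:

```python
import numpy as np

# Minimal sketch of the chain described above: photons -> photoelectrons ->
# digital numbers. All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

quantum_efficiency = 0.6        # fraction of photons converted to electrons
full_well_capacity = 30_000     # electrons a pixel can hold before saturating
bit_depth = 12                  # resolution of the analogue-to-digital converter

# Simulated photon counts arriving at a 4 x 4 patch of pixels during one exposure
photons = rng.poisson(lam=20_000, size=(4, 4))

# Photoelectric conversion, clipped at the pixel's storage capacity
electrons = np.minimum(photons * quantum_efficiency, full_well_capacity)

# Read out and digitize: scale the stored charge to the converter's range
digital_numbers = np.round(
    electrons / full_well_capacity * (2**bit_depth - 1)
).astype(np.uint16)

print(digital_numbers)
```

The clipping at the full-well capacity is what limits the dynamic range in this sketch: brighter targets simply saturate the pixel.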

In the case of consumer-grade cameras, the detector is overlaid with filters to record blue, green, or red light on separate pixels, which read out as separate channels and are ultimately recombined into a true-color image. The same functions are achieved in remote sensing systems in a wide variety of ways.
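As a hedged illustration of how such filtered pixels become separate channels, the sketch below separates a hypothetical RGGB Bayer mosaic into red, green, and blue arrays; real cameras interpolate (demosaic) back to full resolution rather than simply subsampling as done here.

```python
import numpy as np

# Illustrative only: the mosaic is a stand-in for a raw detector readout
# overlaid with an RGGB Bayer filter pattern.
mosaic = np.arange(16, dtype=float).reshape(4, 4)

red   = mosaic[0::2, 0::2]          # R sites: even rows, even columns
green = (mosaic[0::2, 1::2] +       # G sites occur twice per 2 x 2 block;
         mosaic[1::2, 0::2]) / 2.0  # average them here for simplicity
blue  = mosaic[1::2, 1::2]          # B sites: odd rows, odd columns

# Recombine the channels into a (rows, cols, 3) true-colour array
rgb = np.dstack([red, green, blue])
print(rgb.shape)   # (2, 2, 3)
```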

Most sensors record within a single, narrowly defined band of wavelengths (a passband), which can be located anywhere in the electromagnetic spectrum. In remote sensing this is usually in the visible or infrared part of the spectrum, but the data collected can also be a stream of radar returns. These monochrome “channels” can be combined into a true-color image (if the channels are red, green, and blue) or a false-color image. The monochrome channels themselves may be colorized to emphasize intensity differences (pseudocolor). The individual channels can also be added to, subtracted from, or divided by other data channels to extract subtle differences between them. The image data can be obtained in a single shot (as in a digital camera) or scanned at several discrete wavelengths at once with elaborate mirror and filter systems, at scan rates set to give optimum coverage as the satellite or aircraft moves over the terrain. Thus, unlike traditional photography, the data are initially in digital form, so various calibrations and corrections can be applied in real time, enabling the data to be used quantitatively. The specialized software used for this can provide data that are spatially correct, allowing for optical, geometrical, and translational distortions, and radiometrically corrected so that specific intensities can be measured. This is important for the accurate characterization of soils and rock types and for studying the condition and thickness of vegetation. These concepts are more fully described in the section Multispectral Imaging.
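The channel arithmetic mentioned above can be sketched with two co-registered bands. The arrays below are random stand-ins for calibrated red and near-infrared reflectance; the normalized-difference form is the one used by vegetation indices such as NDVI.

```python
import numpy as np

# Stand-in data for two co-registered, calibrated channels
rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.4, size=(100, 100))   # red reflectance
nir = rng.uniform(0.2, 0.6, size=(100, 100))    # near-infrared reflectance

# Band ratio: dividing one channel by another suppresses overall illumination
# differences and emphasizes spectral contrast between the two bands
ratio = nir / red

# Normalized difference (the form used by indices such as NDVI)
ndvi = (nir - red) / (nir + red)

# A simple false-colour composite: display NIR as red and red as green/blue,
# a common way to make vegetation stand out
false_colour = np.dstack([nir, red, red])

print(ratio.mean(), ndvi.mean())
```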

A basic requirement for remote sensing is an energy source to illuminate or provide electromagnetic energy to the target of interest. For passive instruments, this is usually the sun; for active instruments, the sensor itself emits a pulse of energy. As the energy travels from the source to the target, and from the target to the sensor, it may be attenuated by the intervening atmosphere. Ultimately, the energy recorded by the sensor has to be transmitted to a receiving and processing station, where the data are processed into an image. The processed image is interpreted, visually or electronically, to extract information about the target in ways that depend on the particular problem.
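A rough sketch of the atmospheric attenuation along this path, using a simple Beer-Lambert exponential with assumed optical depths and an assumed target reflectance (none of the values below are measurements), is shown here for a passive instrument, where the energy crosses the atmosphere twice:

```python
import math

def transmitted(radiance, optical_depth, view_angle_deg=0.0):
    """Radiance surviving a slant path through the atmosphere (illustrative)."""
    path_factor = 1.0 / math.cos(math.radians(view_angle_deg))
    return radiance * math.exp(-optical_depth * path_factor)

# Sun -> target, then target -> sensor; 0.3 stands in for target reflectance
at_target = transmitted(100.0, optical_depth=0.2)
at_sensor = transmitted(at_target * 0.3, optical_depth=0.2, view_angle_deg=30)
print(round(at_sensor, 1))
```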

The data channels can also be images taken from slightly different perspectives, so that stereo pairs can be made to reveal surface topography. If the technical details of the imaging system are well characterized (as they usually are), photogrammetric techniques can be used to make accurate measurements of surface relief and the other spatial dimensions. Images taken at different times can be superimposed in software to uncover changes with seasonal or other factors, and can be combined with, or made into, maps coded to show an enormous number of variables.
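A minimal sketch of such change detection between two co-registered images of the same scene, with an artificial change inserted and a purely illustrative threshold, could look like this:

```python
import numpy as np

# Two stand-in images of the same scene taken at different times
rng = np.random.default_rng(2)
image_t1 = rng.uniform(0.0, 1.0, size=(50, 50))
image_t2 = image_t1.copy()
image_t2[10:20, 10:20] += 0.4            # simulate a localized change

# Difference the images and flag pixels that exceed a chosen threshold
difference = image_t2 - image_t1
changed = np.abs(difference) > 0.2

print(changed.sum(), "pixels flagged as changed")
```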
