
Quantization


Definition: Quantization is a technique used in lossy image and video compression algorithms based on DCT, DFT, or DWT.

Quantization can be modeled as

x_q = \lfloor x / q + 0.5 \rfloor        (1)

where q is a constant quantization step size, \lfloor x + 0.5 \rfloor denotes rounding x to the nearest integer, and x_q is the resulting quantized integer.

Dequantization can be modeled as:

x' = x_q \cdot q        (2)

where x' is the regenerated value, which is normally not equal to x. Therefore the quantization process is lossy.
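As a concrete illustration, Equations (1) and (2) can be written in a few lines of Python (a minimal sketch; the function names quantize and dequantize and the sample values x = 37.6 and q = 8 are chosen here only for illustration):

```python
import numpy as np

def quantize(x, q):
    """Equation (1): round x/q to the nearest integer, x_q = floor(x/q + 0.5)."""
    return np.floor(x / q + 0.5).astype(int)

def dequantize(x_q, q):
    """Equation (2): x' = x_q * q, the regenerated (approximate) value."""
    return x_q * q

x = 37.6                      # an arbitrary coefficient
q = 8                         # quantization step size
x_q = quantize(x, q)          # -> 5
x_prime = dequantize(x_q, q)  # -> 40, not equal to x, hence the loss
```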

Most common lossy compression algorithms first transform the original signal into a different domain, such as the Discrete Cosine Transform (DCT), Discrete Fourier Transform (DFT), or Discrete Wavelet Transform (DWT) domain. Each of the resulting coefficients is then quantized independently.
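A sketch of this transform-then-quantize pipeline is shown below, assuming an 8x8 block, SciPy's dctn/idctn for the transform, and a single uniform step size q for every coefficient (practical codecs such as JPEG instead use a per-coefficient quantization table):

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q):
    """Transform a block to the DCT domain, then quantize each
    coefficient independently with step size q (Equation (1))."""
    coeffs = dctn(block, norm="ortho")
    return np.floor(coeffs / q + 0.5).astype(int)

def decompress_block(coeffs_q, q):
    """Dequantize (Equation (2)) and transform back to the pixel domain."""
    return idctn(coeffs_q * q, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
restored = decompress_block(compress_block(block, q=16), q=16)
# restored only approximates block; the difference is the quantization error
```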

Quantization is used by many robust and semi-fragile watermarking algorithms. A robust watermarking algorithm must survive the quantization process, while an ideal semi-fragile watermarking algorithm should provide fragility that is proportional to the quantization step size q.

Quantization has the following property. Let \lfloor x \rfloor_q be the result of quantizing x to an integral multiple of the quantization step size q:

\lfloor x \rfloor_q = \lfloor x / q + 0.5 \rfloor \cdot q        (3)

If a is a real-valued scalar quantity, and q_1 and q_2 are quantization step sizes with q_2 \le q_1, then

\lfloor \lfloor \lfloor a \rfloor_{q_1} \rfloor_{q_2} \rfloor_{q_1} = \lfloor a \rfloor_{q_1}        (4)

since re-quantizing an integral multiple of q_1 with step size q_2 changes it by at most q_2 / 2 \le q_1 / 2.

The property expressed by Equation (4) guarantees that if a quantization-based watermark is embedded using quantization step size q_1, it will remain detectable even after the host signal is re-compressed using a quantization step size q_2 with q_2 \le q_1.
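This behavior is easy to check numerically. The sketch below nests Equation (3) three times, as in Equation (4), using arbitrary step sizes q1 = 12 and q2 = 5 chosen only for illustration:

```python
import numpy as np

def quantize_to_multiple(x, q):
    """Equation (3): round x to the nearest integral multiple of q."""
    return np.floor(x / q + 0.5) * q

rng = np.random.default_rng(1)
a = rng.uniform(-1000, 1000, size=10_000)  # arbitrary real-valued coefficients
q1, q2 = 12.0, 5.0                         # embedding step and re-compression step, q2 <= q1

embedded = quantize_to_multiple(a, q1)             # embedding: force integral multiples of q1
recompressed = quantize_to_multiple(embedded, q2)  # re-compression with the finer step q2
detected = quantize_to_multiple(recompressed, q1)  # detection: re-quantize with q1

assert np.allclose(detected, embedded)  # Equation (4): the embedded multiples of q1 survive
```

Raising q2 above q1 typically makes this assertion fail, which is exactly the fragility a semi-fragile watermark relies on to detect heavier re-compression.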

