
Digital Watermarking Based on Neural Network Technology for Grayscale Images


Jeanne Chen
HungKuang University, Taiwan

Tung-Shou Chen
National Taichung Institute of Technology, Taiwan

Keh-Jian Ma
National Taichung Institute of Technology, Taiwan

Pin-Hsin Wang
National Taichung Institute of Technology, Taiwan

BACKGROUND: WATERMARKING

Great advances in information and network technologies have generated enormous activity on the Internet. Traditional methods of trading and communication have been so revolutionized that nearly everything is now conducted online. Amid this rush to be online emerges an urgent need to protect the massive volumes of data passing through the Internet daily. A highly dependable and secure Internet environment is therefore of utmost importance.

A great deal of research has been devoted to protecting data, and watermarking has become an important field within it. Watermarking is a technique for hiding or embedding watermark data in host data (Cox & Miller, 2001; Martin & Kutter, 2001). The embedded watermark can be retrieved at an appropriate time and used as proof of rightful ownership when ownership is in question. Both the watermark and the host data can be of any media format, such as document, still image, video, audio, and more. This article concentrates on still images. The criteria for a watermarking technique are imperceptibility and robustness (Cox, Miller, & Bloom, 2000; Hwang, Chang, & Hwang, 2000; Silva & Mayer, 2003). The embedded watermark must not be easily detectable (imperceptibility) so as to discourage hacking; once detected, it must not be easy to decrypt. In instances where attacks are carried out with malicious intent, the watermark must withstand (robustness) these attacks as well as normal image manipulations such as resizing, rotation, cropping, and more (Du, Lee, Lee, & Suh, 2002; Lin, Wu, Bloom, Cox, Miller, & Lui, 2001). Robustness here implies that the watermark can still be recovered after suffering attacks of various kinds (Miller, Doerr, & Cox, 2004; Niu, Lu, & Sun, 2000; Silva & Mayer, 2003).

The host image can be manipulated for watermark embedding in either the spatial or the frequency domain (Gonzalez & Woods, 2002). In the spatial domain, an image is perceived as is but is digitally represented in terms of pixels, each reflecting a spectrum of colors perceptible to the human visual system (HVS). In the frequency domain, the image is represented by high, medium, and low frequencies, with the HVS being less sensitive to high frequencies and more sensitive to low frequencies. The watermarking technique proposed in this article embeds the watermark in the frequency domain, where it is less vulnerable to attacks, whether intentional or unintentional. The image is therefore transformed to its frequency domain using the discrete cosine transform (DCT) (Liu et al., 2002; Hwang et al., 2000).
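To make the block-wise transform concrete, here is a minimal sketch of dividing a grayscale image into 8×8 blocks and applying the 2-D DCT and its inverse. The use of Python with NumPy and SciPy, and the 8×8 block size, are assumptions for illustration; the original article does not specify an implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct(image, block=8):
    """Apply a 2-D DCT to each block x block tile of a grayscale image."""
    h, w = image.shape
    coeffs = np.zeros_like(image, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y+block, x:x+block].astype(float)
            coeffs[y:y+block, x:x+block] = dctn(tile, norm='ortho')
    return coeffs

def block_idct(coeffs, block=8):
    """Invert the block-wise DCT back to the spatial domain."""
    h, w = coeffs.shape
    image = np.zeros_like(coeffs, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = coeffs[y:y+block, x:x+block]
            image[y:y+block, x:x+block] = idctn(tile, norm='ortho')
    return image
```

Watermark embedding then amounts to modifying selected coefficients inside these blocks before transforming back to the spatial domain.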

Furthermore, we are also interested in applying neural network technology to embed and extract the watermark. By applying a neural network to embed the watermark, we hope to disperse it so that it is more securely hidden, difficult to decrypt, and imperceptible. A neural network is again applied to extract the embedded watermark bits. Its train-and-retrain characteristic can be used to recover more of the embedded watermark bits correctly, thereby increasing the chances of obtaining a better quality extracted watermark.

NEURAL NETWORK (NN) TO ENHANCE WATERMARKING

Some of the most popular neural networks (NN) include the Back-Propagation Network (BPN), the Probabilistic Neural Network (PNN), and the Counter-Propagation Network (CPN). Although different NNs are suited to different applications, the fundamental question remains the same in all of them: how best to apply the NN's dynamic learning and error-tolerant capacity to obtain the most accurate results (Davis & Najarian, 2001; Zhang, Wang, & Xiong, 2002). This article is only concerned with the BPN. The BPN is notable for its train-and-retrain characteristic, which can produce more precise trained values. This characteristic is useful for improving a watermarking technique, whether for secure embedding (Hwang et al., 2000) or for enhancing the extracted watermark (Tsai, Cheng, & Yu, 2000).

Using NN to Enhance Watermark Embedment

In Hwang et al.’s (2000) method, the watermark is embedded in the frequency domain. As shown in Figure 1, the host image is first divided into blocks, and each block is transformed to its frequency domain with the DCT. The coefficients of each block are scanned in zigzag order and fed as input variables to the BPN, which is trained toward an anticipated output. Figure 2 shows an example with inputs {AC1, AC2, …, AC9} and anticipated output {AC12}. The anticipated outputs become the weighted values used to retrain for a new set of outputs. The final outputs are paired with the watermark bits {0, 1} to complete the embedding process. Finally, the image is transformed back to the spatial domain with the inverse DCT (IDCT).
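As an illustration of this idea (not Hwang et al.’s exact network, learning rate, or layer sizes, which are assumptions here), the sketch below gathers AC1 through AC9 from an 8×8 DCT block in zigzag order and trains a small one-hidden-layer back-propagation network to predict AC12.

```python
import numpy as np

# Zigzag order of an 8x8 block: index 0 is DC, 1..9 are AC1..AC9, 12 is AC12.
ZIGZAG = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
          (2, 1), (3, 0), (4, 0), (3, 1), (2, 2), (1, 3), (0, 4)]

def zigzag_features(block):
    """Return (AC1..AC9 inputs, AC12 target) from one 8x8 DCT coefficient block."""
    coeffs = [block[r, c] for r, c in ZIGZAG]
    return np.array(coeffs[1:10]), coeffs[12]

def train_bpn(X, y, hidden=5, lr=0.01, epochs=500, seed=0):
    """Minimal one-hidden-layer back-propagation regressor (sigmoid hidden layer)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-X @ W1))   # hidden-layer activations
        pred = h @ W2                        # linear output layer
        err = pred - y
        W2 -= lr * h.T @ err / len(X)        # gradient step for the output weights
        dh = (err @ W2.T) * h * (1.0 - h)
        W1 -= lr * X.T @ dh / len(X)         # gradient step for the hidden weights
    return W1, W2
```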

For extraction, the same process is repeated: the image is divided into blocks and each block is transformed to the frequency domain with the DCT. The weighted values recorded during BPN training are used to scan the blocks in zigzag order for the watermark bits. No NN is applied to the extracted watermark itself. After the watermark has been extracted, the image is transformed back to the spatial domain with the IDCT.

Using NN to Enhance the Extracted Watermark

In Tsai et al.’s (2000) proposed method, the watermark is first translated into a group of 0 and 1 bits. As bits from the group are embedded, they are tagged. As shown in Figure 3, a 32×32 black-and-white watermark was embedded into the blue pixels of a 480×512 color host image in the spatial domain. The watermark was first converted into the bit group S, which was randomly scrambled as it was embedded and tagged by the H bits; H is embedded together with the watermark bits.

For the extraction process, the embedded H is used to identify the tagged locations of the watermark; details of H are available only to authorized users. An NN is applied to extract a more accurate H and then S, to obtain a better quality extracted watermark. The same random configuration is used to locate the embedded data H. Once an embedded datum has been located, its adjacent grid values (Figure 4) together with the embedded datum are used to train a network model. Once the NN training is completed, the process is repeated for the embedded watermark data S, and the NN outputs decide whether a 0 or a 1 was embedded.
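A rough sketch of this extraction idea follows; the grid layout, network size, and use of scikit-learn's MLPClassifier are assumptions rather than Tsai et al.'s exact design. The network is trained on the known tag bits H, using each embedded position's neighbouring blue-channel values as the input pattern, and is then reused to classify the watermark bits S.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def neighbour_patterns(blue, positions, radius=1):
    """Collect the pixel values surrounding each embedding position
    (positions are assumed to lie away from the image border)."""
    patterns = []
    for y, x in positions:
        patch = blue[y-radius:y+radius+1, x-radius:x+radius+1].astype(float)
        patterns.append(np.delete(patch.ravel(), patch.size // 2))  # drop the centre pixel
    return np.array(patterns)

def classify_watermark_bits(blue, h_positions, h_bits, s_positions):
    """Train on the known tag bits H, then classify the watermark bits S."""
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(neighbour_patterns(blue, h_positions), h_bits)
    return clf.predict(neighbour_patterns(blue, s_positions))
```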

PROPOSED WATERMARKING TECHNIQUE

The proposed watermarking technique embeds the watermark in the frequency domain and secures the hidden data with a BPN as in Hwang et al. (2000), and it enhances the extracted watermark with a BPN as in Tsai et al. (2000). By combining the best ideas from both groups of researchers, we obtain a robust and imperceptible watermarking algorithm.

As seen in Figure 5, a grayscale image will be embedded with a 44×44 grayscale watermark (Figure 6). The watermark is first divided into 4×4 blocks (Figure 7), and each block is DCT transformed from the spatial to the frequency domain. A random sequencer is then used to disperse the watermark before embedding; this sequencing rule makes it difficult for an attacker to tamper with the embedded data even if its location is known. Once dispersion is completed, the DCT coefficients are in a scrambled order. The coefficients are then quantized into groups: coefficients greater than 320 are mapped to 320, and those below -310 to -310. Coefficients falling between -310 and 320 are divided into groups of width 10, so that values between 5 and 15 map to 10, values between 15 and 25 map to 20, and so on. The resulting group set is a subset of {-310, -300, …, 10, 20, …, 310, 320}. Next, each group value is paired with an index: -310 with 0, -300 with 1, and so on up to 320 with 63. The index is then converted into a six-bit binary code, so that 0 = {000000}, 1 = {000001}, and so on; {000000} is embedded instead of -310, and {111111} instead of 320.
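A minimal sketch of this quantization and six-bit coding step is shown below; the clamping bounds, the step of 10, and the 0-63 index pairing come from the description above, while the rounding of boundary values is an assumed implementation detail.

```python
import numpy as np

def code_coefficient(c):
    """Clamp a DCT coefficient to [-310, 320], round to the nearest multiple
    of 10, and return the six-bit binary code of its group index (0..63)."""
    q = int(np.clip(np.floor(c / 10.0 + 0.5) * 10, -310, 320))
    index = (q + 310) // 10          # -310 -> 0, -300 -> 1, ..., 320 -> 63
    return format(index, '06b')

def decode_bits(bits):
    """Convert a six-bit code back to its group value."""
    return int(bits, 2) * 10 - 310

# Example: a coefficient of 17.3 lies between 15 and 25, so it is grouped to 20.
assert decode_bits(code_coefficient(17.3)) == 20
```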

Once coding of the watermark is completed, embedding into the original 1024×1024 grayscale image begins. Similar to Tsai et al.’s (2000) H grouping information, the watermark grouping information is embedded before the coded watermark and serves as the training pattern for the NN. Figure 8 illustrates the embedding process. First, the original image is divided into 8×8 blocks. Each block is then DCT transformed from the spatial to the frequency domain. Next, if the bit to embed is 0, the anticipated output AC12" is modified to -20; otherwise it is modified to 20. Finally, the image with the embedded data is IDCT transformed back to the spatial domain.
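The coefficient-modification part of this step can be sketched as follows. The ±20 values and the 8×8 blocks come from the description above; the raster-order assignment of bits to blocks is an assumption, and the BPN training that accompanies the embedding is omitted here.

```python
import numpy as np
from scipy.fft import dctn, idctn

AC12 = (2, 2)   # 13th coefficient (index 12) in the 8x8 zigzag scan

def embed_bit(block, bit):
    """Embed one watermark bit into an 8x8 spatial block by forcing AC12 to -20 or 20."""
    coeffs = dctn(block.astype(float), norm='ortho')
    coeffs[AC12] = -20.0 if bit == 0 else 20.0
    return idctn(coeffs, norm='ortho')

def embed_bits(image, bits):
    """Embed a sequence of bits block by block, in raster order, into a grayscale image."""
    out = image.astype(float).copy()
    h, w = image.shape
    positions = [(y, x) for y in range(0, h, 8) for x in range(0, w, 8)]
    for (y, x), bit in zip(positions, bits):
        out[y:y+8, x:x+8] = embed_bit(out[y:y+8, x:x+8], bit)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```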


The watermark is extracted by using the BPN to enhance the quality of the restored watermark, with the training process aiming for the best possible output. As shown in Figure 9, the DCT blocks containing the extra grouping information are designated the {AC1", AC2", …, AC9"} training pattern. The trained output helps identify whether a 0 or a 1 was embedded, and each group of training patterns is paired with an anticipated output. The sigmoid function (Hwang et al., 2000) requires the DC and AC values to fall between zero and one, as shown in Eq. (1):

f(c_j) = 1 / (1 + e^(-c_j))    (1)

where c is {DC, AC1", AC2", AC3", AC4", AC5", AC6", AC7", AC8", AC9", AC12"} and j is the j-th neural node.
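A one-line sketch of this normalization step, using the standard logistic sigmoid of Eq. (1); note that in practice raw DCT coefficients may need scaling before the sigmoid to avoid saturating near 0 or 1:

```python
import numpy as np

def sigmoid_normalize(coeffs):
    """Map a block's {DC, AC1"..AC9", AC12"} values into (0, 1) for BPN training (Eq. 1)."""
    c = np.asarray(coeffs, dtype=float)
    return 1.0 / (1.0 + np.exp(-c))
```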

Once NN training is completed, the blocks that actually contain the watermark, {DC, AC1", AC2", …, AC9"}, are used as input variables to the NN. The average value of the outputs computed through the NN is taken as the threshold value T, as defined in Eq. (2):

T = (1/N) * (o_1 + o_2 + … + o_N)    (2)

where o_i is the NN output for the i-th watermark block and N is the number of such blocks. As in Eq. (3), the embedded bit is 0 when the threshold is greater than or equal to the computed AC12"; otherwise it is 1:

bit = 0 if T >= AC12", otherwise bit = 1    (3)

Once the data in each block have been extracted, each six-bit group is converted to decimal and mapped back to its group value between -310 and 320. The same random sequencing rule is then used to reassemble the dispersed watermark, which is IDCT transformed from the frequency domain back to the spatial domain. This completes the extraction process (see Figure 10).
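Putting the extraction rule together, the following hedged sketch treats the trained BPN as a callable that returns one output per block, reads "the computed AC12"" as that per-block output (an interpretation of the description above), and reuses the six-bit decoding from the earlier sketch.

```python
import numpy as np
from scipy.fft import dctn

def extract_bits(image, n_bits, nn_output):
    """Recover n_bits embedded bits from a grayscale image.

    nn_output(coeff_block) -> float is assumed to be the trained BPN's
    prediction (AC12") for one 8x8 block of DCT coefficients.
    """
    h, w = image.shape
    positions = [(y, x) for y in range(0, h, 8) for x in range(0, w, 8)][:n_bits]
    blocks = [dctn(image[y:y+8, x:x+8].astype(float), norm='ortho')
              for y, x in positions]
    outputs = [nn_output(b) for b in blocks]
    threshold = np.mean(outputs)                           # Eq. (2)
    return [0 if threshold >= o else 1 for o in outputs]   # Eq. (3)

def bits_to_group_values(bits):
    """Convert each six-bit code back to its group value in [-310, 320]."""
    return [int(''.join(str(b) for b in bits[i:i+6]), 2) * 10 - 310
            for i in range(0, len(bits) - len(bits) % 6, 6)]
```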

EXPERIMENTAL RESULTS AND ANALYSIS

A 1024×1024 grayscale host image and a 44×44 grayscale watermark are used in the experiments. The peak signal-to-noise ratio (PSNR) is used to estimate the quality of the watermarked image and of the extracted digital watermark. A PSNR of 30 dB is considered acceptable, provided the image is also visually acceptable.
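For reference, PSNR for 8-bit grayscale images is computed from the mean squared error as PSNR = 10 * log10(255^2 / MSE), in dB; a minimal sketch:

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized grayscale images."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```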

When the image with embedded data in Figure 11(b) is compared with the original in Figure 11(a), very little difference can be detected; Figure 11(b) also has a high PSNR of 40.1 dB. The watermark in Figure 12(b) is extracted from Figure 11(b). Although it has a low PSNR of 19.2 dB and shows some artifacts, it is still considered visually acceptable.

Severe Blur Attacks

The image with embedded data in Figure 13(a) is attacked once with blur, while that in Figure 13(b) is blurred twice. Their respective extracted watermarks show some artifacts but are still considered visually acceptable.

Severe Sharpen Attacks

Figure 14(a) illustrates a sharpen-twice attack, while Figure 14(b) is sharpened three times. Their respective watermarks are successfully extracted but show some artifacts. However, the wording on the watermarks remains clear, so the extracted watermarks are considered visually acceptable.

Lossy Compression Attacks

The images in Figures 15(a), (b), (c), and (d) are attacked with different rates of JPEG lossy compression. The extracted watermark shows the most distortion at the highest compression rate, in Figure 15(d). However, the wording on the watermark is still recognizable.

CONCLUSION

In the above experiments, the host image was attacked with different levels of blur, sharpening, and JPEG lossy compression. In each case the extracted watermark showed some distortion, but the most significant information (the wording) remained intact and visually acceptable; the watermark is therefore robust. Furthermore, all the watermarked images have PSNRs well above 30 dB and are visually similar to the original image. These results show that the proposed watermarking technique is both robust and imperceptible.

As the results have shown, the proposed method can effectively withstand common image-processing attacks. The watermark can be safely extracted and used as proof of rightful ownership. Possible applications include protecting the copyright of artworks displayed on the Internet, protection against illegal distribution, and more. Future work could apply the NN to retrain the extracted watermark itself to reduce distortion, since the proposed method only used it to improve extraction. Another possibility is to embed trace data as watermarks to prevent legitimate users from illegally redistributing data (Memon & Wong, 2001; Mukherjee, Maitra, & Acton, 2004).
