With the rapid development of networked multimedia systems, it has become very easy to duplicate digital data, and the problem of protecting multimedia information has become increasingly important. As a solution to this problem, digital watermarking technology is drawing attention as a new method of protecting copyrights for digital data.
To avoid unauthorized access, digital content is often encrypted and travels
in an encrypted form to the consumer. Although encryption can secure the content
on the way to the consumer, during playback it must be decrypted and this is
the point where it is exposed to illegal copying. In these cases, watermarks can
be used to provide extra protection for the content, since digital watermarking
techniques can embed additional information in the digital content (Polyak
and Feher, 2008).
Watermarks are embedded in the digital content in an imperceptible way. Watermarks
can carry some information about the content: owner of the content, metadata,
etc. This protection does not eliminate the need for encryption, but is a supplemental
technique to secure the content, or store additional data. These watermarks
are called robust watermarks because they must survive transformations of the
underlying content, e.g., lossy compression (Polyak and Feher, 2008).
Most studies to date have focused on the problem of watermarking of still
images. Nonetheless, there have been a number of publications that address the
related problem of video watermarking. The classical video watermarking approach
is to use a spatial-domain (Mobasseri, 2000) or transform-domain
watermarking technique, such as DCT (Wu et al., 2004;
Tsai and Chang, 2004) and DWT (Deguillaume
et al., 1999; Liu et al., 2002; Gaobo
et al., 2006). In Yangs scheme (Gaobo et
al., 2006), the watermark is scrambled and embedded into the mid frequency
DWT coefficients of each frame of the video. The quality of the scheme is enhanced
by using a genetic algorithm.
Various digital watermarking algorithms have been proposed for image and video
by exploiting different Human Visual System (HVS) models. To be imperceptible,
an image or video watermark should consider the characteristics of the HVS masking
effect of the content. Depending on how HVS models are used, watermarking schemes
can be classified into two major categories: image-independent and image-adaptive
watermarking schemes (Wolfgang et al., 1999).
Algorithms in the first class are based only on the Modulation Transfer
Function (MTF) of human visual perception and do not account for the
characteristics of the particular image or video frames. On the other hand, image-adaptive
watermarking schemes depend not only on the frequency response of human eyes,
but also on the properties of the image itself. Consequently, image-adaptive
watermarking schemes can achieve optimal balance between the watermark robustness
and transparency. In other words, an image-adaptive watermark is perceptually adapted
to local characteristics of the host image or video. Swanson
et al. (1998a) proposed scene-based, video-dependent watermarking
using perceptual models. Their method uses spatial masking, frequency masking and temporal
properties of video to embed an invisible and robust watermark. The video frames
from a particular scene are subjected to temporal wavelet transform. The resulting
wavelet coefficient frames are modified by a perceptually shaped pseudorandom
sequence representing the author. Thus the watermark consists of static and
dynamic temporal components. In Swanson's scheme (Swanson
et al., 1998a) two methods have been proposed for detecting the watermark
from a test video sequence and both methods employ hypothesis testing. One method
employs index knowledge during detection, i.e., the alignment of the test video
frames with the original. The second detection method does not require knowledge
of the location of the test frames. In both these detection methods the original
video sequence and the original watermark are required. Chen
et al. (2008) proposed adaptive video watermarking using an HVS model
in the DCT domain, but the method is non-blind. Swanson
et al. (1998b) presented image, audio and video data embedding approaches
and the issues associated with copy and copyright protection. Simitopoulus
et al. (2001) described a new technique for MPEG-1/2 compressed video
streams. Perceptual models are used in the embedding process to preserve the
quality of the video.
In this study, a robust adaptive video-watermarking algorithm based on the Human
Visual System (HVS) in the DWT domain is proposed, together with Arnold scrambling
of the watermark. The visual model is designed to generate a Just Noticeable Difference
(JND) mask by analyzing contrast masking, texture masking and entropy masking.
In the proposed approach, the watermark is embedded into the video data so as
to enhance invisibility and robustness against various signal-processing attacks
and to improve performance. In addition, in order to avoid distorting the
chrominance quality of the video data, the watermark is embedded only into the
luminance component of the host data. The proposed approach consists of preprocessing
the watermark image, watermark embedding and watermark detection. In the embedding process,
each frame is first divided into 8 by 8 blocks, which are transformed
into the DWT domain. Then the HVS model is generated and the watermark is embedded into
mid-frequency coefficients based on the HVS model, i.e., the JND threshold.
HUMAN VISUAL SYSTEM MODEL (HVS)
In order to design an effective robust watermarking, it is necessary to take
into account the visual effect of embedding a watermark in host video or image.
Human eyes have different sensitivity to different luminance levels and are most
sensitive to mid-level luminance; the Weber ratio remains approximately constant
at 0.02 over a large range of mid-level luminance (Yang and Sun, 2007). We use
Eq. 1 with ω(u, v) as the contrast sensitivity factor.
where β denotes the maximum contrast sensitivity, ave(u, v) is the average luminance of sub-block B(u, v) and I1 and I2 are predetermined threshold values.
As for texture masking, because the variance is larger at textures and
edges than in smooth regions, we use the variance of the wavelet sub-blocks,
v(u, v), as the texture mask (Abdulfetah et al., 2010).
Lastly, we use entropy masking:
where λ(u, v) is the entropy of N(x(u,v)), the set of the eight neighbors of x(u,v).
Based on the above considerations, the effect of the HVS masking characteristics is incorporated into the JND threshold value of each sub-block as follows. Let ω(u, v) and v(u, v) be ψ and γ, respectively, and let λ(u, v) be δ. Combining contrast sensitivity, texture masking and entropy masking, the final HVS mask can be expressed as follows:
where α is a parameter used to control texture masking, Γ is the final JND threshold value of the wavelet sub-blocks, and max( ) and min( ) denote the maximum and minimum set values, respectively.
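As a rough illustration (the paper's exact Eq. 1-4 appear only as equation images and are not reproduced here), the texture and entropy masking terms described above can be sketched as follows. The variance and entropy computations follow the text directly; the combination step is hypothetical, with placeholder parameters `alpha`, `lo` and `hi`:

```python
import numpy as np

def texture_masking(block):
    """Texture masking v(u, v): variance of a wavelet sub-block.
    Larger variance (textured/edge regions) tolerates stronger embedding."""
    return float(np.var(block))

def entropy_masking(block):
    """Entropy masking: Shannon entropy of the block's values, a stand-in
    for the 8-neighbourhood entropy lambda(u, v) described in the text."""
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def jnd_threshold(contrast, variance, entropy, alpha=2.5, lo=2.0, hi=40.0):
    """Hypothetical multiplicative combination into a JND value; the paper's
    actual max()/min() clipping constants are not given, so lo/hi are
    placeholders."""
    raw = contrast * (1.0 + alpha * variance) * (1.0 + entropy)
    return float(np.clip(raw, lo, hi))
```

A flat block yields zero variance and zero entropy, so its JND falls to the lower clip bound, i.e., minimal embedding strength in smooth regions.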
PROPOSED WATERMARK SCHEME
The proposed system consists of three steps: preprocessing the watermark image, watermark embedding and watermark detection.
Preprocessing watermark image: The binary watermark image is scrambled in order
to prevent unauthorized access to the watermark and increase its security,
using the Arnold transform (Anqiang and Jing, 2007), defined as follows:
where (x, y) is the pixel position and (x', y') is the new position after the Arnold transform; N is the rank of the image data matrix. After the Arnold transform, it is impossible to recognize the original watermark image directly from the scrambled image. The watermark scrambling process is shown in Fig. 1a, b. Because of the periodicity of the Arnold transform, the original image is recovered after one full period of scrambling.
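Since the transform equation itself appears only as an image here, the standard two-dimensional Arnold (cat) map, which moves pixel (x, y) to ((x + y) mod N, (x + 2y) mod N), can serve as a minimal sketch for a square N x N image, including the periodicity mentioned above:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Apply one or more rounds of the Arnold transform to a square image.
    Pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # the map is a bijection, so every target cell is filled once
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

For an 8 x 8 image the map has period 6 (the matrix [[1,1],[1,2]] satisfies A^6 ≡ I mod 8), so six rounds of scrambling return the original image, which is the periodicity the text relies on for recovery.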
Watermark embedding: The proposed watermarking embedding is based on the DWT and HVS model. The block diagram of embedding algorithm is shown in Fig. 2.
In our method, video frames are taken as the input and the scrambled watermark is embedded in each frame by altering selected middle-frequency wavelet coefficients chosen by a secret key.
||(a, b) Original watermark image and scrambled watermark respectively
|| Block diagram of watermark embedding
The details of watermark embedding are as follows:
||Convert video frames from RGB frames into YUV components
||For each frame, choose the luminance (Y) component and partition it into 8 by 8 non-overlapping blocks
||Apply a one-level DWT to each block of the frame to obtain four multiresolution subbands: LL, LH, HL and HH
||Scramble binary watermark image by Arnold transform
||Compute JND by using Eq. 4
||Let vertical and horizontal detail subbands be X and Y, respectively
||Select N coefficients from the vertical and horizontal detail subbands using random sequences S1 and S2, generated from two secret seeds (keys) K1 and K2; these keys select the coefficients used to embed and extract the watermark image
||Embed the watermark as follows:
If W = 1: if X > Y and X - Y < G, the difference is strengthened; otherwise, if X < Y, X and Y are swapped. The following formulas are then used to embed the watermark:
||Apply the inverse DWT to produce the watermarked luminance component of the frame. Then reconstruct the watermarked frame.
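The coefficient-pair rule in the steps above is partly garbled in the source. A hedged reading, common in swap-based DWT watermarking, is that bit 1 forces the selected vertical-detail coefficient X above its horizontal counterpart Y by at least the JND margin G (and bit 0 the reverse), which makes blind extraction from the sign of the difference possible. A sketch under that assumption (not the paper's exact formulas):

```python
def embed_bit(x, y, bit, g):
    """Embed one watermark bit into a coefficient pair (x, y).
    Bit 1 enforces x - y >= g, bit 0 enforces y - x >= g."""
    if bit == 1:
        if x < y:
            x, y = y, x              # swap so the larger value carries the 1
        if x - y < g:
            d = (g - (x - y)) / 2.0  # widen the gap symmetrically up to g
            x, y = x + d, y - d
    else:
        if y < x:
            x, y = y, x
        if y - x < g:
            d = (g - (y - x)) / 2.0
            x, y = x - d, y + d
    return x, y

def extract_bit(x, y):
    """Blind extraction: the sign of x - y recovers the bit,
    without the original video."""
    return 1 if x > y else 0
```

This is consistent with the extraction step below needing only the secret key (to relocate the coefficient pairs), not the original video sequence.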
Watermark extraction: The watermark extraction process is the inverse of the watermark embedding process. The original video sequence is not required; only the secret key is needed. The watermark extraction procedure is as follows:
||Convert the watermarked (and possibly attacked) video frames from RGB into YUV components
||Choose the luminance (Y) component, partition it into 8 by 8 blocks and apply the DWT to each block to decompose the Y frame into four multiresolution subbands
||Regenerate the two random sequences using the same keys (seeds) as in the embedding process to select the positions in the vertical and horizontal details of the wavelet sub-blocks. Let Xw and Yw be the vertical and horizontal detail sub-bands, respectively
||Select the coefficient positions obtained in step 3
||Extract the watermark as follows:
Finally, we apply the inverse Arnold scrambling to W to obtain W', the retrieved watermark data.
To verify the effectiveness of the proposed watermarking scheme, computer simulations were performed on two standard test video sequences, News and Akiyo, with a resolution of 177x144. There are 80 frames in each video sequence. Only luminance components are considered during the tests. The performance has been evaluated in terms of imperceptibility and robustness against various attacks. A binary watermark image of size 20x36 was used. β, I1 and I2 were set to 0.4, 40 and 80, and α and δ to 2.5 and 0.5, respectively.
Invisibility: The invisibility of the proposed algorithm is examined using the
News video sequence. Figure 3a and b show the original video frame and the
watermarked frame, respectively.
The visual quality of the watermarked frames is measured by the PSNR (Peak
Signal-to-Noise Ratio). The average PSNR values for the 80 watermarked Akiyo
and News video frames are 39.9814 and 41.0765 dB, respectively, which is greater
than the PSNR value reported by Mostafa et al. (2009), which is
almost 39 dB. The good quality of the watermarked frame can be observed
in Fig. 3b.
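The PSNR figures quoted above follow the usual definition for 8-bit frames, which can be computed as:

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized frames."""
    mse = np.mean((original.astype(np.float64)
                   - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```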
Robustness: To measure robustness, several experiments were performed.
The watermarked frame was subjected to different attacks, including JPEG
compression, scaling, added noise, filtering, rotation and cropping. Robustness
was evaluated by the Normalized Correlation (NC) and the results are shown in
Table 1. As can be seen from Table 1, the proposed method is highly robust
against various attacks, with NC values above 0.7; for cropping and for
salt-and-pepper noise with 5% noise density, the NC value is 0.64.
Figure 4a-c show the recovered watermark image
against JPEG compression with different quality factors (ranging from 85 to 100%).
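The NC metric used throughout the robustness tests can be computed as follows; one common normalization, by the original watermark's energy, is assumed here since the paper's formula is not shown:

```python
import numpy as np

def normalized_correlation(w, w_extracted):
    """NC between the original and extracted binary watermarks: their
    inner product normalised by the original watermark's energy."""
    w = w.astype(np.float64).ravel()
    we = w_extracted.astype(np.float64).ravel()
    return float(np.sum(w * we) / np.sum(w * w))
```

An undamaged extraction gives NC = 1.0; each lost watermark bit lowers the value toward 0, matching the 0.64-0.7+ range reported above.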
||(a, b) original video frame and watermarked video frame, respectively
||Extracted watermark (a-c) after JPEG compression with quality factors (a) 85, (b) 90 and (c) 100%, respectively
|| Robustness experiments results
||Extracted watermark (a, b) after Gaussian noise (0.003 and 0.001%), respectively, and (c, d) after 5% salt-and-pepper noise and scaling, respectively
||Extracted watermark (a, b) LPF and histogram equalization
It can be observed that the proposed algorithm is robust against JPEG compression.
The watermarked frame was also tested against salt-and-pepper noise, Gaussian
noise and scaling; the extracted watermark images are shown in Fig. 5a-d, and
it can be seen clearly that the recovered watermark has a good similarity to
the original watermark. As the results show, the proposed method is robust
against salt-and-pepper and Gaussian noise of different densities, and also
against scaling attacks.
The proposed algorithm was also tested against low-pass filtering and histogram
equalization. As can be seen from Fig. 6a and b, the proposed method is robust
against these attacks.
Table 1 displays the NC values between the original watermark and the watermarks extracted from attacked watermarked frames. The experimental results demonstrate that the NC values are high; the robustness of the proposed scheme is evident from this experimental evaluation.
|| Robustness Performance
We compared the proposed method with an existing algorithm; our method is more robust against attacks such as Gaussian noise, histogram equalization and rotation. The results are shown in Table 2, where our method achieves higher NC values than the existing algorithm.
A robust adaptive video watermarking method was proposed for copyright protection. The watermark is embedded in the DWT domain. The incorporation of an HVS model into the proposed scheme has resulted in an efficient watermarking scheme for effective copyright protection of video. The experimental results show the effectiveness of the proposed scheme: it is highly robust against different signal-processing attacks and satisfies both requirements of an effective copyright protection scheme, imperceptibility and robustness, performing better than the compared algorithm on both counts.
This study was supported by National Natural Science Foundation of China (60736016, 60873198, 60973128 and 60973113), Scientific Research Fund of Hunan Provincial Education Department of China (08C018) and National Basic Research Program of China (2006CB303000, 2009CB326202, 2010CB334706).