INTRODUCTION
The effectiveness of a digital watermarking process is appraised according
to the properties of imperceptibility, robustness, computational cost, capacity/strength,
false positive rate, recovery of the watermark and the speed of the embedding and
retrieval processes (Cox et al., 1997; Jin
and Wang, 2007; Koz and Alatan, 2002; Zeki
and Manaf, 2011). These evaluation criteria are application dependent:
diverse applications have different requirements, so there is
no unique set of requirements that all watermarking techniques must satisfy
(Swanson et al., 1998). On the other hand, researchers
have highlighted that the principal requirements for an effective watermark
are imperceptibility, robustness to attacks and watermark strength/capacity.
Hence, a good watermarking algorithm should strike a balance among these requirements
(Jin and Wang, 2007).
Visual quality (imperceptibility) of the watermarked media is the most important
requirement in watermarking (Katzenbeisser and Petitcolas, 2000).
It refers to the perceptual transparency (also known as fidelity) of the watermark
(ElGayyar and Von Zur Gathen, 2006; Wu
et al., 2011), that is, the watermarked media is indistinguishable
from the original signal. A watermark embedding procedure is truly imperceptible
if the Human Visual System (HVS) cannot distinguish the original data from the watermarked
counterpart (Yang et al., 2008; Abdulfetah
et al., 2009; Phadikar et al., 2007).
In this case, a translucent image is overlaid on the primary image in such a way
that the primary image can still be viewed, while the watermark remains invisible
to the human eye yet detectable algorithmically.
Robustness is the ability of the watermark to be resilient to distortion,
that is, for the watermark to remain detectable after the watermarked data has
passed through signal manipulations (Olanrewaju et
al., 2010). The signal processing operations against which a watermarking
scheme should be robust vary from application to application as well. The
exact level of robustness an algorithm must possess cannot be specified without
considering the application scenario (Cox, 2008).
Analysis of watermark embedding strength concerns the limit of watermark information
that a host signal can accommodate while still satisfying the imperceptibility and
robustness requirements (Wong and Au, 2003).
Most previous works on watermark embedding capacity/strength (Barni
et al., 2002; Moulin and O'Sullivan, 2003;
Ramkumar and Akansu, 2001; Servetto
et al., 1998; Priya and Stuwart, 2010; Abdulfetah
et al., 2010) focused on the direct application of the well-known
channel capacity bounds of Shannon (2001) or Costa (1983).
Recently, the use of Artificial Neural Networks (ANN) in estimating the watermark
payload has improved on these earlier studies. Zhang and Zhang
(2005) studied the bounds of embedding capacity in a blind watermarking
algorithm based on a Hopfield neural network.
They found that the basin of attraction of the network and the Hamming
distance can be used to determine the maximum watermark payload. Mei
et al. (2002) modelled the Human Visual System (HVS) using a feed-forward,
ANN-based, image-adaptive method in order to decide the watermark strength of
DCT coefficients. Their technique selects the largest coefficients
to determine the watermark strength. Jin and Wang (2007)
indicated that, using an ANN, the different textural features of each DCT block and
the luminance of an image can be exploited to decide the watermark
embedding strength adaptively. Similarly, Zhi-Ming et al. (2003)
defined an RBF neural network based algorithm that controls and creates the maximum
image-adaptive watermark strength.
In general, ANN-based capacity/strength estimators are suited to either the phase or the magnitude of the image; that is, they are Real-Valued Neural Networks (RVNN), which do not work well with complex values. However, to prevent loss of information during embedding, both the phase and the magnitude of the image must be used, and this requires a Complex-Valued Neural Network (CVNN). Hence, a new CVNN algorithm that determines the embedding strength, in order to improve the imperceptibility of the watermarked image, is developed.
MATERIALS AND METHODS
When a watermark is applied at equal strength throughout an image, it tends
to be more visible in texturally flat regions and less visible in densely
textured regions. For the embedded watermark to be more robust against
different types of attacks, and to avoid the visual artifacts created by
uneven embedding, it is essential to embed the watermark in a Safe Region (SR) (Olanrewaju
et al., 2010). In other words, users would like to insert the watermark
with maximum strength without it becoming conspicuous to the Human Visual System
(HVS). In this study, local frequency content is used to determine the texture
of the image and identify the embedding region, while the CVNN is used to decide
adaptively a different watermark embedding strength according to the textural
features and luminance of each block in the host image. Figure
1 shows the block diagram of the proposed CVNN strength estimator. It consists
of a four stage cascade system.
Local frequency content: Spectral analysis is used to express the correlation
between spatial location and the frequency distribution of the image. From it,
the local variation of the frequency content of each block can be determined, which
in turn makes it possible to identify the changes in frequency across the image as a
whole. A frequency domain representation, such as the Fourier Transform, contains
information from all parts of the image.

Fig. 1: 
Block diagram of the CVNN based strength estimator 
When the image is segmented into non-overlapping blocks, the local frequency
content of each block can be defined by computing its Fourier Transform. The
general model for obtaining the local frequencies from an image I (x, y) of
size M by N (e.g., 8x8), using the fast version of the Discrete Fourier Transform
(DFT), is represented by F (u, v):

F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} I(x, y) e^{-j2π(ux/M + vy/N)}     (1)

for u = 0, 1, …, M-1 and v = 0, 1, …, N-1.
Thus, given F (u, v), I (x, y) can be recovered by means of the inverse 2D DFT (2D IDFT):

I(x, y) = (1/MN) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u, v) e^{j2π(ux/M + vy/N)}     (2)

where u, v are frequency variables and x, y are spatial variables.
Moving from one textured region to another, the frequency content of each block changes. The difference in the frequency content of each block is then used as a means of segmentation, although the Fourier descriptors derived from the frequency content of each block still need to be refined. In this case, the mean of the DC component, or zero-frequency term F (0, 0), is computed using Eq. 3:

F̄(0, 0) = (1/N) Σ_{k=1}^{N} F_k(0, 0)     (3)

where, F_k (0, 0) is the zero-frequency term (DC coefficient) of the kth FFT block and N is the number of blocks in the host image.
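As a concrete illustration, the per-block DC terms and their mean (Eq. 3) can be sketched as below, assuming the host image is a grayscale numpy array; the function name block_dc_mean and the 8x8 block size are illustrative:

```python
import numpy as np

def block_dc_mean(image, block=8):
    """Mean of the DC (zero-frequency) terms F_k(0, 0) over all
    non-overlapping block x block FFT blocks of a grayscale image (Eq. 3)."""
    h, w = image.shape
    dc_terms = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            F = np.fft.fft2(image[r:r + block, c:c + block])
            dc_terms.append(F[0, 0])          # DC coefficient of block k
    return np.mean(dc_terms)

# The DC term of a block equals the sum of its pixels, so for a constant
# image of value 3 every F_k(0, 0) is 8*8*3 = 192.
print(block_dc_mean(np.full((16, 16), 3.0)))
```

Because the DC term varies with block brightness and the AC terms with block texture, comparing these quantities across blocks is what allows the textured and flat regions to be separated.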
Complex valued neural network (CVNN): A CVNN is used to process Complex
Valued Data (CVD). It is made up of a Complex-Valued Feed Forward (CVFF) stage and a
Complex Back-Propagation (CBP) algorithm. The block diagram of the CVFF and CBP is shown
in Fig. 2. CVNNs have been studied and developed by many authors
in solving various problems (Aibinu et al., 2010;
Hanna and Mandic, 2003; Hirose,
1992; Kim and Adali, 2001, 2000;
Leung and Haykin, 1991; Amin and
Murase, 2009).

Fig. 2: 
Complex Valued Feed Forward (CVFF) and Complex Backpropagation
(CBP) algorithm 
The CVFF begins by summing the weighted complex-valued inputs in order to
obtain the threshold value that represents the internal state
of a given input pattern. All the complex inputs are computed using complex
algebra, which results in a complex output through complex weights, while the
CBP algorithm performs an approximation to the global minimization achieved
by the method of steepest descent (Leung and Haykin, 1991).
The net input/output relationship is characterized by the nonlinear recursive difference
equation given by:

y_i = f( Σ_j W_{ji} X_j + b_i )     (4)

where, W_{ji} is the complex synaptic weight connecting complex-valued neuron j in the input layer to the hidden layer, X_j is the complex input signal from the input layer, j is the number of neurons in the input layer, b_i is the (complex-valued) bias of neuron i and f is the complex activation function.
This applies to a fully complex multilayer perceptron consisting of many adaptive
neurons, which are capable of universally approximating any complex mapping with
arbitrary accuracy and which converge almost everywhere in a bounded domain of
interest (Kim and Adali, 2003, 2000).
The CVNN error to be propagated backward is defined as the difference between the desired response d (n) and the actual output y (n):

e(n) = d(n) - y(n) = [d_R(n) - y_R(n)] + i[d_I(n) - y_I(n)]     (5)

where, [d_R (n) + id_I (n)] is the desired complex-valued data and [y_R (n) + iy_I (n)] is the output of the CVNN. The CBP algorithm minimises the error function e (n) by recursively adjusting the weights and threshold values based on gradient search techniques. Therefore, the global instantaneous squared error E (n) is given as:

E(n) = (1/2) e(n) e*(n) = (1/2) [e_R²(n) + e_I²(n)]     (6)

where, e* (n) = e_R (n) - ie_I (n) is the complex conjugate of the error function. If the error between the probe pattern and the trained pattern falls below the goal (defined by the user) or the epoch limit is reached, the CVNN will converge to the trained pattern. Once the pattern is well trained, the CVNN can reconstruct the original pattern from a degraded or incomplete version.
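The forward pass and the error terms of Eq. 5-6 can be sketched as follows; the split tanh activation (applied to the real and imaginary parts separately) is one common CVNN choice and is an assumption here, not necessarily the activation used in this study:

```python
import numpy as np

def complex_layer(x, W, b):
    """One complex-valued layer: y = f(Wx + b), with a split activation
    applied to the real and imaginary parts separately (an assumed choice)."""
    v = W @ x + b                                  # complex net input
    return np.tanh(v.real) + 1j * np.tanh(v.imag)

def squared_error(d, y):
    """Global instantaneous squared error E(n) = (1/2) e(n) e*(n)."""
    e = d - y                                      # complex error e(n)
    return 0.5 * float(np.sum(e * np.conj(e)).real)

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
W = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
b = np.zeros(3, dtype=complex)
y = complex_layer(x, W, b)
print(squared_error(y, y))   # 0.0: the error vanishes when output equals target
```

Since e·e* = e_R² + e_I², E(n) is always real and non-negative, which is what makes it usable as a steepest-descent objective.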
Bounds of the watermark: In order to keep the visual distortion to a minimum
and to optimize the watermarking method, it is essential to consider the HVS
when developing a watermarking system. The HVS can be modelled with three different
properties: frequency sensitivity, luminance sensitivity and contrast masking
(Mei et al., 2002). Frequency sensitivity determines the sensitivity of the
human eye to various spatial frequencies.
These frequencies are modelled by the CVNN to determine the maximum strength of
the Fast Fourier Transform (FFT) coefficients, that is, of the coefficients in which
the watermark is embedded. The strength estimator is CVNN block based. Each block has
a maximum level by which it may be altered; this is accomplished by choosing an appropriate
α value during the training phase of the CVNN. The α is a multiplicative
factor that controls the imperceptibility and PSNR value of the watermarked image;
if a block is altered too much, the imperceptibility of the watermarked image suffers.
Figure 3 shows the architecture of the strength estimator
depicting the watermark strength while Fig. 4 indicates the
steps in the strength estimation.
The CVNN watermarking strength model shown in Fig. 3 is used during embedding to decide, subjectively or objectively, the embedding strength of each block. In the subjective adaptive option, the user chooses the embedding strength for each block. The CVNN estimates the strength of each block individually, using features of the block such as its texture and background, so each block has its own strength indicating how much it may be altered. This technique establishes a watermarking capacity/strength bound and achieves watermark imperceptibility without degradation. If no strength is allocated to a block during embedding, the estimator automatically sets that block's strength to the default α value, in accordance with the predefined threshold setting. Any block beyond the threshold is skipped in favour of the next block, so not every block is watermarked.
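A per-block strength assignment along these lines could be sketched as follows; the use of the block standard deviation as a texture measure, the threshold value and the default α are all illustrative assumptions, standing in for the trained CVNN estimator:

```python
import numpy as np

DEFAULT_ALPHA = 0.1   # default embedding strength (illustrative value)

def block_strengths(blocks, chosen=None, threshold=50.0):
    """Assign an embedding strength to each block: user-chosen
    (subjective) values take precedence, unassigned blocks fall back to
    DEFAULT_ALPHA and blocks beyond the texture threshold are skipped
    (strength 0), so not every block is watermarked."""
    chosen = chosen or {}
    strengths = []
    for k, blk in enumerate(blocks):
        if np.std(blk) > threshold:      # hypothetical texture criterion
            strengths.append(0.0)        # skip this block
        else:
            strengths.append(chosen.get(k, DEFAULT_ALPHA))
    return strengths

blocks = [np.full((8, 8), 10.0),                  # flat block
          np.arange(64).reshape(8, 8) * 3.0]      # high-variation block
print(block_strengths(blocks, chosen={0: 0.25}))  # [0.25, 0.0]
```

In the actual scheme the per-block α comes from the trained network rather than a fixed rule; the sketch only shows how chosen, default and skipped blocks coexist.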

Fig. 3: 
The structure of the CVNN based watermarking strength estimator 

Fig. 4: 
Watermarking strength steps flowchart 
For the objective option, α is a multiplicative factor that controls the PSNR of the watermarked image, just as in the subjective estimate. However, the training is modelled on Eq. 7 to decide the α values:

Any block for which the CVNN output equals 0 is excluded; in this way not all blocks are modified, which also fulfils the idea of selecting blocks according to the model.
The main idea in objective training is that blocks containing more feature
data (e.g., eyes, nose) should receive a larger α, while plain blocks containing only
background or a single colour should receive a smaller α value; that is, modifications
are fewer in regions with fewer features. This type of training is
called adaptive watermarking strength. Adaptive training is also an advantage
in the detector module: the watermark detector is not disturbed
even if some blocks have α = 0, since most blocks have α > 0 and
the watermark can still be detected. In this way the watermark is distributed/spread
according to the CVNN model, so that during embedding similar blocks
acquire similar α values. The α value restricts
the number of points that can be modified in an image; it therefore limits the
capacity of the watermarking and, subsequently, decides the watermark embedding strength.
Watermark embedding: The watermark is embedded into the FFT coefficients of 8x8 blocks of the host image. It is a multiplicative embedding defined as:

M'(m, n) = M(m, n) [1 + αW(m, n)]     (8)

where, M (m, n) and P (m, n) are the sequences of data from the transformed magnitude and phase of the original image, W (m, n) is the watermark sequence, α is the corresponding CVNN factor controlling the embedding strength and M' (m, n) is the watermarked magnitude sequence. The watermark is generated by a pseudorandom number generator using an integer as a seed. This seed serves as a unique secret key for each watermarked image and can be used as a detection key.
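Assuming the standard multiplicative rule M'(m, n) = M(m, n)(1 + αW(m, n)) with the phase left intact, a single-block embedding might look like the following sketch (function name and key handling are illustrative):

```python
import numpy as np

def embed_block(block, seed, alpha):
    """Embed a key-seeded watermark into the FFT magnitude of one block,
    keeping the phase intact, then return to the spatial domain."""
    F = np.fft.fft2(block)
    M, P = np.abs(F), np.angle(F)
    W = np.random.default_rng(seed).standard_normal(block.shape)  # secret-key watermark
    M_marked = M * (1.0 + alpha * W)            # multiplicative embedding
    F_marked = M_marked * np.exp(1j * P)        # recombine magnitude and phase
    # The perturbed spectrum is no longer exactly conjugate-symmetric, so a
    # small imaginary residue is discarded here; the CVNN scheme, by contrast,
    # retains both real and imaginary parts.
    return np.fft.ifft2(F_marked).real

rng = np.random.default_rng(1)
host = rng.random((8, 8))
marked = embed_block(host, seed=42, alpha=0.1)
print(np.max(np.abs(marked - host)))   # small distortion at small alpha
```

Because the watermark is regenerated from the seed alone, the same integer key suffices at detection time, which is exactly what makes blind detection possible.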
The watermark is embedded in the SR by defining the watermark as:

where, α and β are the controlling parameters of the frequency regions in which the watermark is embedded. After embedding using Eq. 8, the inverse DFT is applied in order to obtain the watermarked image.
Watermark detection: The detection is blind and does not require the host image.
The detector model is shown in Fig. 5. The modified blind
optimum decoding and detection of Barni et
al. (2002) and Khelifi et al. (2006)
were adopted in this study. The detector extracts the hidden information without
knowing it in advance.

Fig. 5: 
Watermark detector 
The detector hypotheses are as follows:
dt_{0}: 
No watermark information is embedded into the received image 
dt_{1}: 
Forged watermark information is embedded into the received image 
dt_{2}: 
Correct watermark information is embedded into the received image 
The above detector analysis may be reduced to a binary hypothesis test (Briassouli
and Strintzis, 2004) in which the two hypotheses concern the existence of a
watermark. Given a watermarked image I_{k}, the detector aims at deciding
whether I_{k} contains a certain watermark w_{k} or not. Watermark
detection can thus be expressed as a hypothesis test with two possible
hypotheses:
H_{1}: 
Signal I_{k} hosts the watermark w_{k} 
H_{0}: 
Signal I_{k} does not host the watermark w_{k} 
It should be noted that hypothesis (H_{0}) covers two cases: either the host image I_{k} is not watermarked (hypothesis (dt_{0})), or the signal I_{k} is watermarked with a forged watermark w'_{k} where w'_{k} ≠ w_{k} (hypothesis (dt_{1})). Therefore, (dt_{0}) and (dt_{1}) are mutually exclusive and their union produces the hypothesis (H_{0}).
The detector is validated using Eq. 9:

It is important to set a threshold that minimizes the number of false negative and false positive alarms. In order to set an appropriate threshold, the extracted watermark is correlated with a large number (in this study, 1200) of random watermarks as well as with the embedded watermark.
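This threshold-setting step can be illustrated by correlating a noisy extracted watermark against many wrongly-keyed watermarks plus the true one; the Gaussian watermarks, seed values and noise level below are illustrative assumptions:

```python
import numpy as np

def correlate(extracted, w):
    """Normalised correlation detector response."""
    return float(np.dot(extracted, w) / len(w))

TRUE_SEED = 4242                      # secret key (outside the wrong-key range)
true_w = np.random.default_rng(TRUE_SEED).standard_normal(1024)
# Extraction after attacks is imperfect; model that with additive noise.
extracted = true_w + 0.3 * np.random.default_rng(9999).standard_normal(1024)

# Responses to 1200 wrong keys, then to the true watermark (index 1200):
responses = [correlate(extracted, np.random.default_rng(k).standard_normal(1024))
             for k in range(1200)]
responses.append(correlate(extracted, true_w))

print(int(np.argmax(responses)))   # the true key yields the highest peak
```

The gap between the true-key response and the largest wrong-key response is what determines where the detection threshold can safely be placed.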
RESULTS AND DISCUSSION
The watermark was created by embedding a unique bit-string sequence generated
from each host image as a message in the host image (an image-dependent watermark).

Fig. 6(ab): 
Host image Lena (a) and watermarked Lena (b) using a randomly
generated seed from the host image 
Each host image has a unique watermark; that is, image features were embedded
into the image itself as an authentication stamp. Colour and chrominance based
features of the image were extracted for the generation of the watermark.
The use of an image-dependent watermark provides better security against fraud,
especially in tamper detection systems, compared with traditional approaches (Fridrich
and Goljan, 1999).
In CVNN based embedding, two distinct features are considered: the adaptive
strength of each block and the combination of both real and imaginary components
in the embedding. These features enable a highly imperceptible
and robust watermarking system, especially against conceivable attacks such
as Wiener filtering, Gaussian noise and JPEG compression. The image of Lena is
shown in Fig. 6a and the corresponding watermarked Lena in Fig.
6b. The Peak Signal-to-Noise Ratio (PSNR) is 68.18 dB. It was observed that,
depending on the frequency variation of each block, the system provides suitably
imperceptible alterations according to the frequency distribution of the block
content. This indicates that CVNN based embedding is adaptive, in that the embedding
strength is based on the frequency components of the block; hence a more imperceptible
watermarked image is achieved. Furthermore, retaining both real and imaginary
component information during embedding results in a high quality watermarked
image without visual distortion.
Table 1: 
Comparison of watermark imperceptibility measure 

Table 1 shows the comparison of the proposed CVNN
based strength estimate against other algorithms in terms of the imperceptibility of
the watermarked image. Imperceptibility is measured by the PSNR between the
original and the watermarked image, expressed in dB and indicating the energy of
the inserted watermark.
A higher PSNR value indicates that the two images are more similar.
For the test images Lena and Baboon, the proposed CVNN based algorithm outperforms the other algorithms, with a PSNR of 68.18 dB for Lena while the other algorithms record only between 31-40 dB. For Baboon, the CVNN based algorithm scores 63.45 dB while the others score between 32-42 dB. This performance shows that the CVNN based method is about 40% superior to the other algorithms. It can therefore be deduced that the newly proposed CVNN algorithm offers a significant improvement over the other algorithms in terms of imperceptibility.
Effect of varying watermark strength on imperceptibility: The effect of varying the watermark strength on imperceptibility was also considered. Varying the watermark strength, as well as the host image used, can significantly affect the visual quality of the watermarked image, as the results in Fig. 7 confirm: imperceptibility decreases as the watermark embedding strength is increased.
For example, when α = 0.1, the PSNR for Lena is 68.18 dB; as
α increases to 0.5, the PSNR decreases to 54.23 dB and it finally falls to 47.10
dB when the strength rises to 1. Meanwhile, under the same embedding conditions
but with the host image changed to Fruits, it is likewise observed
that as the strength increases, the PSNR value decreases.
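This PSNR/strength trade-off can be reproduced in miniature. The additive spatial-domain embedding below is a simplified stand-in for the FFT-domain scheme, used only to show the trend; with a unit-variance watermark, α = 0.1 lands near the 68 dB regime reported above:

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((np.asarray(original, float) - np.asarray(distorted, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(64, 64)).astype(float)
watermark = rng.standard_normal((64, 64))     # unit-variance watermark signal

# Stronger embedding -> larger MSE -> lower PSNR, the trend reported
# for both Lena and Fruits.
for a in (0.1, 0.5, 1.0):
    print(a, round(psnr(host, host + a * watermark), 2))
```

Each tenfold increase in α costs 20 dB of PSNR in this simplified model, which is why small strengths such as 0.1 give the best visual quality.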

Fig. 7: 
Various watermarking strengths showing watermarked Lena and
Fruits using randomly generated noise from the host image as watermark 
As shown, when α = 0.1 the PSNR is 62.69 dB; as α increases
from 0.5 to 1, the PSNR decreases from 49.23 to 44.82 dB. The two comparisons
indicate that the PSNR obtained for Lena is higher than that for Fruits at
every level of watermarking strength, although the same trend is observed as
α increases. The higher PSNR of Lena could be due to the composition
and complexity of the images at each block, which may differ from image to image.
In view of this, each block of each image requires a different embedding strength,
and the embedding time varies as well. Lena, as shown in Fig.
6, offers a mixture of characteristics: a smooth background and the composition of
the eyes, while the hat has complex textures and large curves, setting it apart
from an ordinary image consisting entirely of flat regions. These characteristics conceal
the watermark bits better than flat regions do. It was also noticed that for
both Fruits and Lena, as the watermarking strength increased, the PSNR value
decreased. This indicates that a small watermarking strength such
as 0.1 produces the best visual quality of the watermarked images: 68.18 and 62.69 dB
for Lena and Fruits, respectively. Based on the above results, it can be concluded
that 0.1 is the best value for batch-trained strength selection. Furthermore, using a
watermark generated from the host image ensures that each image has a unique watermark
for detection.
Effect of watermark strength on various attacks: Figure
8-10 show the detector responses for the watermarked Pepper image
after Wiener filter, Gaussian noise and JPEG compression attacks. Figure
8 shows the detector response for 1300 watermark keys, of which only one seed
corresponds to the correct watermark.

Fig. 8: 
Wiener attack resistance detector 

Fig. 9: 
Gaussian noise attack resistance detector 

Fig. 10: 
JPEG compression attack resistance detector 
It is obvious that the response to the true watermark is the largest, with the
highest peak at key 1200. For the Gaussian noise attack, shown in Fig.
9, even though the received image may be perceptually corrupted, the decoder
was still able to select the key at 1200 as the match for the embedded watermark
and detect it. From Fig. 10, it is observed that despite
the compression, the central peak obtained is significant enough to conclude that
the identified key is the one that was sought; the decoder is able to detect
the watermark upon reception of the watermarked image. This indicates
that the algorithm is robust to JPEG compression, hence the system is JPEG-resistant
and the robustness requirement is met. It can be concluded that the
detector was able to detect the watermark even after each attack; the
system is therefore robust against Wiener filtering, Gaussian noise and JPEG compression.
This resistance to the conceivable attacks confirms that the strengths used for
embedding are adaptive, and hence an imperceptible and robust system is achieved.
CONCLUSIONS
This study presented a blind watermarking algorithm based on FFT-CVNN and discussed the watermark embedding strength. We argue that the CVNN is an adaptive watermark strength estimator that enables a decisive amount of watermark to be safely embedded in a host image without causing visual distortion, a claim supported by the simulation results. The superiority of the CVNN based strength estimator was verified against other algorithms: simulations showed that the newly proposed CVNN based method yields superior performance. It is also pertinent to note that watermark imperceptibility was strongly influenced by the type of image, the frequency components, the training parameters and the strength of the watermark. The smaller the CVNN controlling value alpha (α), the better the watermark imperceptibility; when the alpha value becomes larger, the CVNN watermark strength goes out of bounds and the network cannot retrieve the original image correctly. The CVNN alpha value therefore restricts the number of points that can be modified in an image. Furthermore, the performance of the algorithm under various conceivable attacks indicated that the proposed algorithm is robust to them.