Research Article
 

Some new Results on Fuzzy Directed Divergence Measures and their Inequalities



M.A.K. Baig and Mohd Javid Dar
 
ABSTRACT

In this study, we propose some new fuzzy directed divergence measures and study their particular cases. We also establish some fuzzy coding theorems. Some known results are particular cases of our proposed divergence measures.


 
  How to cite this article:

M.A.K. Baig and Mohd Javid Dar, 2014. Some new Results on Fuzzy Directed Divergence Measures and their Inequalities. Asian Journal of Mathematics & Statistics, 7: 12-20.

DOI: 10.3923/ajms.2014.12.20

URL: https://scialert.net/abstract/?doi=ajms.2014.12.20
 
Received: November 19, 2013; Accepted: February 20, 2014; Published: March 29, 2014



INTRODUCTION

Classical information-theoretic divergence measures have long been a subject of study. Kullback and Leibler (1951) first studied the measure of divergence. Jaynes (1957) introduced the Principle of Maximum Entropy (PME) and emphasized that one should “choose a distribution which is consistent with the information available and is as uniform as possible”. Implementing this principle required a measure of the nearness of two probability distributions, and such a measure had already been provided by Kullback and Leibler (1951) in the form of:

$$I(P, Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}$$
(1)

If the distribution Q is uniform, this becomes:

$$I(P, Q) = \sum_{i=1}^{n} p_i \log (n p_i) = \log n + \sum_{i=1}^{n} p_i \log p_i$$
(2)

where, P, Q∈Tn and:

$$T_n = \left\{ P = (p_1, p_2, \ldots, p_n) : p_i \geq 0, \ \sum_{i=1}^{n} p_i = 1 \right\}$$

Since Shannon's (1948) entropy:

$$H(P) = -\sum_{i=1}^{n} p_i \log p_i$$
(3)

was already available in the literature, maximizing H is equivalent to minimizing I(P, Q). This is one of the interpretations of the PME.
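
For a small illustration of this equivalence (taking logarithms to base 2), let n = 3, P = (1/2, 1/4, 1/4) and Q uniform. Then:

$$H(P) = \tfrac{1}{2}\log 2 + \tfrac{1}{4}\log 4 + \tfrac{1}{4}\log 4 = 1.5, \qquad I(P, Q) = \log 3 - H(P) \approx 1.585 - 1.5 = 0.085$$

The divergence from the uniform Q vanishes exactly when H(P) attains its maximum value log n, i.e., when P itself is uniform.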

Equation 1 can be analyzed in the following way:

$$I(P, Q) = \sum_{i=1}^{n} p_i \log p_i - \sum_{i=1}^{n} p_i \log q_i$$
(4)

The second term in Eq. 4 is called the Kerridge inaccuracy, which is:

$$H(P, Q) = -\sum_{i=1}^{n} p_i \log q_i$$
(5)

Considering the Kerridge (1961) inaccuracy, we can interpret the Kullback and Leibler (1951) measure of divergence as the difference between the Kerridge inaccuracy and Shannon's entropy, that is:

$$I(P, Q) = H(P, Q) - H(P)$$
(6)
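
Continuing the numerical illustration above (P = (1/2, 1/4, 1/4), Q uniform on three points, logarithms to base 2), the Kerridge inaccuracy is H(P, Q) = log 3 ≈ 1.585, so that:

$$I(P, Q) = H(P, Q) - H(P) \approx 1.585 - 1.5 = 0.085$$

in agreement with the value obtained directly from Eq. 1.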

I(P, Q) provides a measure of the nearness of P to Q. Take the case of reliability theory, where one asks how reliable the available information is. Because the distribution in question is the revised distribution (or strategy) adopted to achieve a given objective under certain constraints, the problem naturally gives rise to optimization theory.

Hence, whenever we come across divergence measures, we are interested in minimizing the divergence so as to make the available information reliable. Every walk of life is governed by the reliability of information under certain constraints.

Analogous to the information-theoretic approach, when we move to fuzzy sets or fuzziness we need to study fuzzy divergence measures. Fuzzy information currently finds vast application in the life and social sciences, interpersonal communication, engineering, fuzzy aircraft control, medicine, management and decision making, computer science, pattern recognition and clustering. These wide applications motivate us to consider divergence measures for fuzzy set theory in order to minimize, maximize or otherwise optimize the fuzziness.

Let A = {xi: μA(xi), ∀i = 1, 2, ..., n} and B = {xi: μB (xi), ∀i = 1, 2, ..., n} where, 0<μA(xi)<1 and 0<μB(xi)<1, be two fuzzy sets. The fuzzy divergence corresponding to Kullback and Leibler (1951) has been defined by Bhandari and Pal (1993) as:

$$D(A \| B) = \sum_{i=1}^{n} \left[ \mu_A(x_i) \log \frac{\mu_A(x_i)}{\mu_B(x_i)} + (1 - \mu_A(x_i)) \log \frac{1 - \mu_A(x_i)}{1 - \mu_B(x_i)} \right]$$
(7)
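
For a small numerical illustration of Eq. 7 (using natural logarithms), take n = 2 with μA(x1) = 0.7, μA(x2) = 0.4 and μB(x1) = μB(x2) = 0.5. Then:

$$D(A \| B) = \left[ 0.7 \ln \tfrac{0.7}{0.5} + 0.3 \ln \tfrac{0.3}{0.5} \right] + \left[ 0.4 \ln \tfrac{0.4}{0.5} + 0.6 \ln \tfrac{0.6}{0.5} \right] \approx 0.082 + 0.020 = 0.102$$

a positive value, consistent with the non-negativity property listed below; the measure vanishes when μA(xi) = μB(xi) for every i.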

The fundamental properties of fuzzy divergence are as follows:

Non-negativity, i.e., D (A||B)≥0
D (A||B) = 0, if A = B
D (A||B) is a convex function in (0, 1)
D (A||B) should not change when μA(xi) is changed to 1-μA(xi) and μB(xi) to 1-μB(xi) (a short check of this property is given below)
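
The last property can be checked directly for the measure in Eq. 7: replacing μA(xi) by 1-μA(xi) and μB(xi) by 1-μB(xi) merely interchanges the two summands inside the brackets, so that:

$$\sum_{i=1}^{n} \left[ (1 - \mu_A(x_i)) \log \frac{1 - \mu_A(x_i)}{1 - \mu_B(x_i)} + \mu_A(x_i) \log \frac{\mu_A(x_i)}{\mu_B(x_i)} \right] = D(A \| B)$$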

Bhandari and Pal (1993) have established some properties such as:

D (A||B) = I (A||B)+I (B||A)
  Where:

[Definition of I(A||B): equation image not available in the source]

D (A∪B||A∩B) = D (A||B)
D (A∪B||C)≤ D (A||C)+D (B||C)
D (A||B)≥ D (A∪B||A)
D (A||B) is maximum if B is the farthest non-fuzzy set of A

Havrda and Charvat (1967) have given the measure of directed divergence as:

$$D_{\alpha}(P \| Q) = \frac{1}{\alpha - 1} \left[ \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} - 1 \right], \qquad \alpha \neq 1, \ \alpha > 0$$
(8)

Corresponding to Eq. 8, the average code word length can be taken as:

[Equation image not available in the source]
(9)

Corresponding to Eq. 8, the fuzzy measure of directed divergence between two fuzzy sets A and B can be taken as:

[Equation image not available in the source]

And its corresponding fuzzy average code word length as:

[Equation image not available in the source]

Remark:

As α→1, Eq. 8 tends to Eq. 1 (a short verification is given after this remark)
As α→1 and qi = 1, Eq. 8 tends to Eq. 3
As α→1, Eq. 9 tends to the average codeword length given as:

[Equation image not available in the source]
(10)

As α→1 and qi = 1, Eq. 9 tends to the average codeword length corresponding to Shannon's entropy (ni being the length of the codeword assigned to the i-th symbol), given as:

$$L = \sum_{i=1}^{n} p_i n_i$$
(11)
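
The first statement of this remark can be verified for the form of Eq. 8 given above: with natural logarithms, L'Hôpital's rule gives:

$$\lim_{\alpha \to 1} \frac{\sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} - 1}{\alpha - 1} = \lim_{\alpha \to 1} \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \ln \frac{p_i}{q_i} = \sum_{i=1}^{n} p_i \ln \frac{p_i}{q_i}$$

which is the Kullback and Leibler (1951) divergence of Eq. 1.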

NOISELESS DIRECTED DIVERGENCE CODING THEOREMS

Theorem 1: For all uniquely decipherable codes:

$$D_{\alpha} \leq L_{\alpha}$$
(12)

Where:

[Equation image not available in the source]

Proof: By Hölder's inequality, we have:

[Equation image not available in the source]
(13)

Set:

[Equation image not available in the source]

and:

[Equation image not available in the source]

Thus Eq. 13 becomes:

[Equation image not available in the source]

Using Kraft’s inequality, we have:

[Equation image not available in the source]

or:

[Equation image not available in the source]

or:

[Equation image not available in the source]
(14)

Dividing both sides by t, we get:

[Equation image not available in the source]

Subtracting n from both sides, we have:

[Equation image not available in the source]
(15)

Taking α = t+1, t = α-1 and:

[Equation image not available in the source]

Equation 15 becomes:

[Equation image not available in the source]
(16)

Dividing both sides by α, we get:

[Equation image not available in the source]

that is, Dα ≤ Lα, which proves the theorem.
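
For the reader's convenience, we recall the standard statements of the two inequalities invoked in this proof. Kraft's inequality: every uniquely decipherable code with codeword lengths n1, n2, ..., nn over an alphabet of size D satisfies:

$$\sum_{i=1}^{n} D^{-n_i} \leq 1$$

Hölder's inequality, in the reverse form commonly used in coding theorems of this type (xi, yi > 0 and 1/p + 1/q = 1 with 0 < p < 1, so that q < 0), states:

$$\sum_{i=1}^{n} x_i y_i \geq \left( \sum_{i=1}^{n} x_i^{p} \right)^{1/p} \left( \sum_{i=1}^{n} y_i^{q} \right)^{1/q}$$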

Theorem 2: For all uniquely decipherable codes:

$$D_{\alpha, \beta} \leq L_{\alpha, \beta}$$
(17)

Where:

[Equation image not available in the source]
(18)

where either α ≥ 1, β ≤ 1 or α ≤ 1, β ≥ 1.

Proof: From Eq. 16, we have:

[Equation image not available in the source]
(19)

Multiplying both sides by (α-1), we get:

[Equation image not available in the source]
(20)

Changing α to β, Eq. 20 becomes:

[Equation image not available in the source]
(21)

Subtracting Eq. 21 from 20, we have:

[Equation image not available in the source]

Dividing both sides by (β-α), we have:

[Equation image not available in the source]

That is, Dα,β ≤ Lα,β, which proves the theorem.

Theorem 3: For all uniquely decipherable codes:

$$D'_{\alpha, \beta} \leq L'_{\alpha, \beta}$$

where:

[Equation image not available in the source]
(22)

and:

[Equation image not available in the source]
(23)

To prove this theorem, we first prove the following lemma:

Lemma 1: For all uniquely decipherable codes:

[Equation image not available in the source]

Proof of the lemma: From (3) we have:

[Equation image not available in the source]

Subtracting ‘n’ from both sides, we have:

[Equation image not available in the source]

Taking α = t+1, t = α-1 and:

[Equation image not available in the source]

we have:

[Equation image not available in the source]
(24)

This proves the lemma.

Proof of Theorem 3: Changing α to β in Eq. 24, we get:

[Equation image not available in the source]
(25)

Dividing Eq. 24 by Eq. 25, we have:

[Equation image not available in the source]

Dividing both sides by (β-α), we have:

[Equation image not available in the source]

that is, D′α,β ≤ L′α,β, which proves the theorem.

REFERENCES

1:  Bhandari, D. and N.R. Pal, 1993. Some new information measures for fuzzy sets. Inform. Sci., 67: 209-228.

2:  Jaynes, E.T., 1957. Information theory and statistical mechanics. Phys. Rev., 106: 620-630.

3:  Havrda, J. and F. Charvat, 1967. Quantification method of classification processes: Concept of structural α-entropy. Kybernetika, 3: 30-35.

4:  Kerridge, D.F., 1961. Inaccuracy and inference. J. R. Stat. Soc. Ser. B, 23: 184-194.

5:  Kullback, S. and R.A. Leibler, 1951. On information and sufficiency. Ann. Math. Statist., 22: 79-86.

6:  Shannon, C.E., 1948. A mathematical theory of communication. Bell Syst. Tech. J., 27: 379-423.
