In this study, we consider one-parametric generalizations of the measures H (P) and H (P; Q). For the measure H (P; Q) we give three different kinds of generalizations. These generalizations are the R-norm entropy and R-norm inaccuracies. The Shannon-Gibbs type inequality is generalized in different ways using Holder's inequality for the R-norm information measure and the three different kinds of inaccuracy.
We consider the set of positive real numbers R>0, R≠1. Boekee and Van der Lubbe (1980) studied the R-norm entropy of a distribution P, given by:
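For reference, the standard form of the Boekee and Van der Lubbe (1980) R-norm entropy (Eq. 1) is:

```latex
H_R(P) \;=\; \frac{R}{R-1}\left[\,1-\Bigl(\sum_{i=1}^{N} p_i^{R}\Bigr)^{1/R}\right],
\qquad R>0,\; R\neq 1 .
```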
Actually, the R-norm entropy (Eq. 1) is a real function from ΔN→R*, where N≥2. This measure differs from those of Shannon (1948), Renyi (1961), Havrda and Charvat (1967) and Daroczy (1970). Its most interesting property is that when R→1, the R-norm information measure (Eq. 1) approaches the Shannon (1948) entropy and, as R→∞, HR (P)→(1-max pi), i = 1, 2,..., N.
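These two limiting properties can be checked numerically. The sketch below (the function names are ours, not from the paper) computes the R-norm entropy and compares it with the Shannon entropy (in nats) near R = 1, and with 1-max pi for large R:

```python
import math

def r_norm_entropy(p, r):
    """R-norm entropy of Boekee and Van der Lubbe (1980), r > 0, r != 1."""
    norm = sum(pi ** r for pi in p) ** (1.0 / r)
    return r / (r - 1.0) * (1.0 - norm)

def shannon_entropy(p):
    """Shannon entropy in natural units (nats)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]

# As R -> 1, the R-norm entropy approaches the Shannon entropy.
print(abs(r_norm_entropy(p, 1.0001) - shannon_entropy(p)))   # close to 0

# As R -> infinity, it approaches 1 - max(p_i).
print(abs(r_norm_entropy(p, 200.0) - (1.0 - max(p))))        # close to 0
```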
Setting r = 1/R in Eq. 1, we get:
For P∈ΔN, the Shannon (1948) measure of information is defined as:
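In standard notation, Shannon's measure (Eq. 3) is:

```latex
H(P) \;=\; -\sum_{i=1}^{N} p_i \log p_i .
```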
The measure (Eq. 3) has been generalized by various authors and has found applications in disciplines such as economics, accounting, crime and physics.
For P, Q∈ΔN, Kerridge (1961) introduced a quantity known as inaccuracy, defined as:
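Kerridge's inaccuracy (Eq. 4), in its standard form, is:

```latex
H(P;Q) \;=\; -\sum_{i=1}^{N} p_i \log q_i .
```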
There is a well-known relation between H (P) and H (P; Q), which is given by:
Eq. 5 is known as the Shannon inequality and its importance in coding theory is well known.
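As a quick numerical illustration (our own sketch, not part of the original), the Shannon inequality H (P)≤H (P; Q), with equality iff P = Q, can be verified directly:

```python
import math

def entropy(p):
    """Shannon entropy H(P) in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def inaccuracy(p, q):
    """Kerridge inaccuracy H(P; Q) in nats."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print(entropy(p) <= inaccuracy(p, q))              # True  (Shannon inequality)
print(math.isclose(entropy(p), inaccuracy(p, p)))  # True  (equality when Q = P)
```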
Using the method of Nath and Mittal (1973), Van der Lubbe (1978) generalized Eq. 5 to the case of Renyi's entropy. On the other hand, Eq. 5 has also been generalized, using the method of Campbell (1965), for the case of entropy of type β. Using these generalizations, these authors proved coding theorems for the corresponding measures.
The mathematical theory of information is mainly concerned with measuring quantities related to the concept of information. Shannon's (1948) fundamental concept of entropy has been taken in different directions by various authors, such as Zheng et al. (2008), Haouas et al. (2008), Yan and Zheng (2009), Kumar and Choudhary (2011) and Wang (2011).
GENERALIZATION OF SHANNON INEQUALITY
R-Norm inaccuracies: The three different kinds of R-norm inaccuracy measures are defined as:
α = 1, 2 and 3, where:
Now we are interested in extending the result of Eq. 5 in the following fashion:
where, α = 1, 2 and 3.
provided the following conditions hold:
Equality in Eq. 7 holds if and only if P = Q, i.e., pi = qi, for α = 1 and 3.
where, PR is given as:
Since HR (P) ≠ αHR (P; Q), we will not interpret Eq. 6 as a measure of inaccuracy; rather, αHR (P; Q) is a generalization of the measure of inaccuracy defined in Eq. 4. Although αHR (P; Q) is not a measure of inaccuracy in the usual sense, its study is justified because it leads to meaningful new measures of length. In the following theorem, we determine a relation between Eq. 1 and Eq. 6 of the type of Eq. 5.
Since Eq. 6 is not a measure of inaccuracy in its usual sense, we call the generalized relation a pseudo-generalization of the Shannon inequality for the R-norm entropy.
Theorem 1: We have:
Proof: For different values of α = 1, 2 and 3, we shall prove Eq. 12.
Case I: For α = 1,

with equality iff P = Q, i.e., qi = pi:
Proof: We use Holder's inequality (Shisha, 1967):

for all xi≥0, yi≥0, i = 1, 2,..., N, where p<1 (p≠0) and p⁻¹+q⁻¹ = 1, with equality if and only if there exists a positive number c such that:
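In standard notation, this reversed form of Holder's inequality (for p<1, p≠0) reads:

```latex
\sum_{i=1}^{N} x_i y_i \;\ge\;
\Bigl(\sum_{i=1}^{N} x_i^{p}\Bigr)^{1/p}
\Bigl(\sum_{i=1}^{N} y_i^{q}\Bigr)^{1/q},
\qquad \frac{1}{p}+\frac{1}{q}=1,
```

with equality if and only if $x_i^{p} = c\, y_i^{q}$ for all $i$ and some $c>0$.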
Multiplying both sides of Eq. 17 by:
and raising both sides to the power 1/R, we get:
as R>1, gives Eq. 14.
For 0<R<1, Eq. 14 can be proved along similar lines.
Case II: For α = 2,
with equality iff Q = PR, i.e.:
Proof: In inequality Eq. 15 take:
with equality iff
∀i = 1,2,..., N.
Raising both sides of Eq. 21 to the power 1/R and simplifying for:
as R>1, gives Eq. 19.
as 0<R<1, gives Eq. 19, i.e., HR (P)≤2HR (P; Q), R>0 (≠1).
Case III: For α = 3,
with equality iff
Proof: In inequality Eq. 15 take
with equality iff pi = qi, ∀i = 1, 2,..., N
with equality iff pi = qi, ∀ i = 1, 2,..., N.
Raising both sides of Eq. 25 to the power 1/R and simplifying for:
as R>1, gives Eq. 23.
as 0<R<1, gives Eq. 23.
i.e., HR (P)≤3HR (P; Q), R>0 (≠1).
From Propositions 1, 2 and 3, we obtain the proof of Theorem 1.
If R→1, Eq. 7 becomes Eq. 5, i.e., H (P)≤H (P; Q), which is the Shannon inequality.
The measures HR (P) and αHR (P; Q) (α = 1, 2) are one-parametric generalizations of the Shannon entropy and of the Kerridge inaccuracy, respectively, both studied by Van der Lubbe (1981). The measures αHR (P; Q) (α = 1, 2 and 3) are three different one-parametric generalizations of the Kerridge (1961) inaccuracy. Propositions 1, 2 and 3 give inequalities among these measures, which we call generalized Shannon inequalities.
- Campbell, L.L., 1965. A coding theorem and Renyi's entropy. Inform. Control, 8: 423-429.
- Renyi, A., 1961. On measures of entropy and information. Proc. 4th Berkeley Symp. Math. Stat. Prob., 1: 547-561.
- Shannon, C.E., 1948. A mathematical theory of communication. Bell Syst. Tech. J., 27: 379-423.
- Arimoto, S., 1971. Information-theoretic considerations on estimation problems. Inform. Control, 19: 181-194.