We consider the following set of positive real numbers:

ℜ* = {R : R > 0, R ≠ 1}

Throughout, Δ_N = {P = (p_1, p_2, ..., p_N) : p_i ≥ 0, Σ_{i=1}^{N} p_i = 1}, N ≥ 2, denotes the set of all complete finite discrete probability distributions.
Boekee and Van der Lubbe (1980) studied the R-norm entropy of a distribution P ∈ Δ_N, given by:

H_R(P) = [R/(R-1)] [1 - (Σ_{i=1}^{N} p_i^R)^{1/R}],  R > 0, R ≠ 1    (1)
The R-norm entropy (Eq. 1) is a real function from Δ_N → ℜ*, where N ≥ 2. This measure is different from those of Shannon (1948), Renyi (1961), Havrda and Charvat (1967) and Daroczy (1970). Its most interesting property is that as R → 1, the R-norm information measure (Eq. 1) approaches the Shannon (1948) entropy, and as R → ∞, H_R(P) → 1 - max p_i, i = 1, 2, ..., N.
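These two limiting properties are easy to verify numerically. The sketch below (plain Python; the function names are ours, not from the paper) computes Eq. 1 directly and compares it with the Shannon entropy in nats for R near 1, and with 1 - max p_i for large R:

```python
import math

def r_norm_entropy(p, R):
    """R-norm entropy (Eq. 1): (R/(R-1)) * (1 - (sum p_i^R)^(1/R))."""
    return (R / (R - 1.0)) * (1.0 - sum(pi ** R for pi in p) ** (1.0 / R))

def shannon_entropy(p):
    """Shannon entropy in nats: -sum p_i * ln(p_i)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]

# As R -> 1, H_R(P) approaches the Shannon entropy.
print(r_norm_entropy(p, 1.0001))  # close to shannon_entropy(p)
print(shannon_entropy(p))

# As R -> infinity, H_R(P) -> 1 - max(p_i).
print(r_norm_entropy(p, 200.0))   # close to 1 - 0.5 = 0.5
```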
Setting r = 1/R in Eq. 1, we get:

H^r(P) = [1/(1-r)] [1 - (Σ_{i=1}^{N} p_i^{1/r})^r],  r > 0, r ≠ 1    (2)

which is a measure mentioned by Arimoto (1971) as an example of a generalized class of information measures. It may be noted that Eq. 2 also approaches Shannon's entropy as r → 1.
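The equivalence of the two forms under the substitution r = 1/R can be sanity-checked numerically. The form of Eq. 2 used below, (1/(1-r))[1 - (Σ p_i^{1/r})^r], is our reconstruction from the substitution, so the check is illustrative only:

```python
def r_norm_entropy(p, R):
    # R-norm entropy (Eq. 1): (R/(R-1)) * (1 - (sum p_i^R)^(1/R))
    return (R / (R - 1.0)) * (1.0 - sum(pi ** R for pi in p) ** (1.0 / R))

def arimoto_form(p, r):
    # Eq. 2 as reconstructed from the substitution r = 1/R in Eq. 1:
    # (1/(1-r)) * (1 - (sum p_i^(1/r))^r)
    return (1.0 / (1.0 - r)) * (1.0 - sum(pi ** (1.0 / r) for pi in p) ** r)

p = [0.4, 0.35, 0.25]
R = 2.5
# Both forms agree up to floating-point error.
assert abs(r_norm_entropy(p, R) - arimoto_form(p, 1.0 / R)) < 1e-12
```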
For P ∈ Δ_N, the Shannon (1948) measure of information is defined as:

H(P) = -Σ_{i=1}^{N} p_i log p_i    (3)
The measure in Eq. 3 has been generalized by various authors and has found applications in disciplines such as economics, accounting, crime and physics.
For P, Q ∈ Δ_N, Kerridge (1961) introduced a quantity known as inaccuracy, defined as:

H(P; Q) = -Σ_{i=1}^{N} p_i log q_i    (4)
There is a well-known relation between H(P) and H(P; Q), given by:

H(P) ≤ H(P; Q), with equality if and only if P = Q    (5)

Eq. 5 is known as the Shannon inequality, and its importance in coding theory is well known.
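The direction of Eq. 5 is easy to illustrate numerically (Python; logarithms in nats, which does not affect the inequality):

```python
import math

def shannon_entropy(p):
    # H(P) = -sum p_i * ln(p_i)  (Eq. 3)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kerridge_inaccuracy(p, q):
    # H(P; Q) = -sum p_i * ln(q_i)  (Eq. 4)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# Shannon inequality (Eq. 5): H(P) <= H(P; Q), equality iff P = Q.
assert shannon_entropy(p) <= kerridge_inaccuracy(p, q)
assert abs(shannon_entropy(p) - kerridge_inaccuracy(p, p)) < 1e-12
```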
In the literature of information theory, there are many approaches to extending Eq. 5 to other measures. Nath and Mittal (1973) extended the relation in Eq. 5 to the entropy of type β. Using the method of Nath and Mittal (1973), Van der Lubbe (1978) generalized Eq. 5 for Renyi's entropy. The method of Campbell (1965) has likewise been used to generalize Eq. 5 for the entropy of type β. Using these generalizations, coding theorems have been proved for these measures.
The mathematical theory of information is concerned with measuring quantities related to the concept of information. Shannon's (1948) fundamental concept of entropy has been developed in different directions by different authors, such as Zheng et al. (2008), Haouas et al. (2008), Yan and Zheng (2009), Kumar and Choudhary (2011) and Wang (2011).
The objective of this study is to generalize Eq. 5 for Eq. 1 and for three different kinds of R-norm inaccuracies, with the help of Shisha's (1967) Hölder inequality.
GENERALIZATION OF SHANNON INEQUALITY
R-Norm inaccuracies: The three different kinds of R-Norm inaccuracy measures are defined as:
α = 1, 2 and 3, where:
Now we are interested in extending the result of Eq. 5 in the following fashion:
where α = 1, 2 and 3, provided the following conditions hold:
Equality in Eq. 7 holds if and only if P = Q, i.e., p_i = q_i, for α = 1 and 3.
where P_R is given as:
Since H_R(P) ≠ αH_R(P; Q) in general, we will not interpret Eq. 6 as a measure of inaccuracy; αH_R(P; Q) is, however, a generalization of the measure defined in Eq. 1. Although αH_R(P; Q) is not a measure of inaccuracy in the usual sense, its study is justified because it leads to meaningful new measures of length. In the following theorem, we determine a relation between Eq. 1 and Eq. 6 of the type of Eq. 5.
Since Eq. 6 is not a measure of inaccuracy in its usual sense, we will call the generalized relation a pseudo-generalization of the Shannon inequality for the R-norm entropy.
Theorem 1: We have:
i.e., Eq. 7, under the conditions in Eq. 8, 9 and 10.
Proof: We shall prove Eq. 12 for each of the values α = 1, 2 and 3.
Case I: For α = 1,
with equality iff P = Q, i.e., q_i = p_i:
Proof: We use Shisha's (1967) Hölder inequality:

Σ_{i=1}^{N} x_i y_i ≥ (Σ_{i=1}^{N} x_i^p)^{1/p} (Σ_{i=1}^{N} y_i^q)^{1/q}    (15)

for all x_i ≥ 0, y_i ≥ 0, i = 1, 2, ..., N, when p < 1 (p ≠ 0) and p^{-1} + q^{-1} = 1, with equality if and only if there exists a positive number c such that x_i^p = c y_i^q for all i = 1, 2, ..., N.
Substituting in Eq. 15 and using Eq. 13, we get:
Multiplying both sides of Eq. 17 by:
and raising both sides to the power 1/R, we get:
as R>1, gives Eq. 14.
For 0<R<1, Eq. 14 can be proved along similar lines.
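The reversed Hölder inequality used in this proof can itself be checked numerically. For p < 1 (p ≠ 0) and p^{-1} + q^{-1} = 1 (so q < 0 when 0 < p < 1), the inequality runs Σ x_i y_i ≥ (Σ x_i^p)^{1/p} (Σ y_i^q)^{1/q}. The sketch below (helper names are ours) verifies this direction for a few exponents:

```python
def holder_lhs(x, y):
    # Left side: sum of products x_i * y_i.
    return sum(xi * yi for xi, yi in zip(x, y))

def holder_rhs(x, y, p):
    # Conjugate exponent q from p^-1 + q^-1 = 1; q < 0 when 0 < p < 1.
    q = p / (p - 1.0)
    return (sum(xi ** p for xi in x) ** (1.0 / p)
            * sum(yi ** q for yi in y) ** (1.0 / q))

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
# For p < 1 (p != 0) the inequality is reversed relative to p > 1.
for p in (0.5, 0.25, -1.0):
    assert holder_lhs(x, y) >= holder_rhs(x, y, p)
```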
Case II: For α = 2,
with equality iff Q = P_R, i.e.,:
Proof: In inequality Eq. 15 take:
From Eq. 18 and 20, we have:
with equality iff
∀i = 1,2,..., N.
Raising both sides of Eq. 21 to the power 1/R and simplifying for
as R>1, gives Eq. 19.
as 0<R<1, gives Eq. 19, i.e., H_R(P) ≤ 2H_R(P; Q), R > 0 (R ≠ 1).
Case III: For α = 3,
with equality iff
Proof: In inequality Eq. 15, take:
with equality iff p_i = q_i, ∀ i = 1, 2, ..., N
From Eq. 22 and 24, we have:
with equality iff p_i = q_i, ∀ i = 1, 2, ..., N.
Raising both sides of Eq. 25 to the power 1/R and simplifying for:
as R>1, gives Eq. 23.
as 0<R<1, gives Eq. 23.
i.e., H_R(P) ≤ 3H_R(P; Q), R > 0 (R ≠ 1)
From Propositions 1, 2 and 3, we get the proof of Theorem 1.
If R → 1, Eq. 7 becomes Eq. 5, i.e., H(P) ≤ H(P; Q), which is the Shannon inequality.
The measures H_R(P) and αH_R(P; Q) (α = 1, 2) are one-parametric generalizations of the Shannon entropy and of the Kerridge inaccuracy, respectively, both studied by Van der Lubbe (1981). The measures αH_R(P; Q) (α = 1, 2 and 3) are three different one-parametric generalizations of the Kerridge (1961) inaccuracy. Propositions 1, 2 and 3 give inequalities among these measures, which we call generalized Shannon inequalities.