**INTRODUCTION**

We consider the following set of complete probability distributions with positive components:

Δ_{N} = {P = (p_{1}, p_{2},..., p_{N}): p_{i}>0, Σ_{i=1}^{N} p_{i} = 1}, N≥2

Boekee and Van der Lubbe (1980) studied the R-norm entropy of a distribution P∈Δ_{N}, given by:

H_{R} (P) = (R/(R-1)) [1-(Σ_{i=1}^{N} p_{i}^{R})^{1/R}], R>0, R≠1 (1)

Actually, the R-norm entropy (Eq. 1) is a real function from Δ_{N}→R^{+}, where N≥2. This measure is different from the entropies of Shannon (1948), Renyi (1961), Havrda and Charvat (1967) and Daroczy (1970). Its most interesting property is that as R→1, the R-norm information measure (Eq. 1) approaches the Shannon (1948) entropy, and as R→∞, H_{R} (P)→(1-max p_{i}), i = 1, 2,..., N.

Setting r = 1/R in Eq. 1, we get:

H_{r} (P) = (1/(1-r)) [1-(Σ_{i=1}^{N} p_{i}^{1/r})^{r}], r>0, r≠1 (2)

which is a measure mentioned by Arimoto (1971) as an example of a generalized class of information measures. It may be noted that Eq. 2 also approaches Shannon's entropy as r→1.
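The substitution r = 1/R can be checked numerically; the following sketch (function names ours) evaluates both forms on the same distribution and gets identical values:

```python
import numpy as np

def r_norm_entropy(p, R):
    """Eq. 1: H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R))."""
    p = np.asarray(p, dtype=float)
    return R / (R - 1.0) * (1.0 - np.sum(p ** R) ** (1.0 / R))

def arimoto_measure(p, r):
    """Eq. 2: (1/(1-r)) * (1 - (sum p_i^(1/r))^r), obtained from Eq. 1 with r = 1/R."""
    p = np.asarray(p, dtype=float)
    return 1.0 / (1.0 - r) * (1.0 - np.sum(p ** (1.0 / r)) ** r)

P = [0.4, 0.35, 0.25]
R = 2.0
# Both expressions agree, since Eq. 2 is Eq. 1 rewritten with r = 1/R.
print(r_norm_entropy(P, R), arimoto_measure(P, 1.0 / R))
```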

For P∈Δ_{N}, the Shannon (1948) measure of information is defined as:

H (P) = -Σ_{i=1}^{N} p_{i} log p_{i} (3)

The measure in Eq. 3 has been generalized by various authors and has found applications in disciplines such as economics, accounting, crime and physics.

For P, Q∈Δ_{N}, Kerridge (1961) introduced a quantity known as inaccuracy, defined as:

H (P; Q) = -Σ_{i=1}^{N} p_{i} log q_{i} (4)

There is a well-known relation between H (P) and H (P; Q), given by:

H (P)≤H (P; Q) (5)

Eq. 5 is known as the Shannon inequality, and its importance in coding theory is well known.
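The inequality in Eq. 5 is easy to verify numerically; the following sketch (helper names ours) checks it over random distributions drawn from a Dirichlet distribution, including the equality case P = Q:

```python
import numpy as np

def shannon_entropy(p):
    """Eq. 3: H(P) = -sum p_i ln p_i."""
    return -np.sum(p * np.log(p))

def kerridge_inaccuracy(p, q):
    """Eq. 4: H(P; Q) = -sum p_i ln q_i (Kerridge, 1961)."""
    return -np.sum(p * np.log(q))

rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    # Shannon inequality, Eq. 5 (small tolerance for floating point).
    assert shannon_entropy(p) <= kerridge_inaccuracy(p, q) + 1e-12

# Equality holds iff P = Q.
p = np.array([0.2, 0.3, 0.5])
print(np.isclose(shannon_entropy(p), kerridge_inaccuracy(p, p)))
```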

In the literature of information theory, there are many approaches to extending Eq. 5 to other measures. Nath and Mittal (1973) extended the relation in Eq. 5 to the entropy of type β.

Using the method of Nath and Mittal (1973), Van der Lubbe (1978) generalized Eq. 5 for Renyi's entropy; the method of Campbell (1965) has likewise been used to generalize Eq. 5 for the entropy of type β. Using these generalizations, coding theorems have been proved for these measures.

The mathematical theory of information is mainly concerned with measuring quantities related to the concept of information. Shannon's (1948) fundamental concept of entropy has been developed in different directions by various authors, such as Zheng *et al*. (2008), Haouas *et al*. (2008), Yan and Zheng (2009), Kumar and Choudhary (2011) and Wang (2011).

The objective of this study is to generalize Eq. 5 for Eq. 1 and for three different kinds of R-norm inaccuracies, with the help of Hölder's inequality (Shisha, 1967).

**GENERALIZATION OF SHANNON INEQUALITY**

**R-Norm inaccuracies:** The three different kinds of R-Norm inaccuracy measures are defined as:

α = 1, 2 and 3, where:

Now we are interested in extending the result of Eq. 5 in the form:

where, α = 1, 2 and 3.

Provided the following conditions hold:

Equality in Eq. 7 holds if and only if P = Q, i.e., p_{i} = q_{i} for all i, for α = 1 and 3.

And:

where, P^{R} is given as:

Since H_{R} (P) ≠ ^{α}H_{R} (P; Q), we will not interpret Eq. 6 as a measure of inaccuracy; rather, ^{α}H_{R} (P; Q) is a generalization of the measure of inaccuracy defined in Eq. 4. Although ^{α}H_{R} (P; Q) is not a measure of inaccuracy in the usual sense, its study is justified because it leads to meaningful new measures of length. In the following theorem, we determine a relation between Eq. 1 and Eq. 6 of the type of Eq. 5.

Since Eq. 6 is not a measure of inaccuracy in its usual sense, we call the generalized relation a pseudo-generalization of the Shannon inequality for R-norm entropy.

**Theorem 1:** We have:

i.e., Eq. 7 holds under the conditions in Eq. 8, 9 and 10.

**Proof:** We shall prove Eq. 12 for each of the values α = 1, 2 and 3.

**Case I:** For α = 1,

then

with equality iff P = Q, i.e., q_{i} = p_{i}:

**Proof:** We use Hölder's inequality (Shisha, 1967):

Σ_{i=1}^{N} x_{i} y_{i} ≥ (Σ_{i=1}^{N} x_{i}^{p})^{1/p} (Σ_{i=1}^{N} y_{i}^{q})^{1/q} (15)

for all x_{i}≥0, y_{i}≥0, i = 1, 2,..., N, where p<1 (p≠0) and p^{-1} + q^{-1} = 1, with equality if and only if there exists a positive number c such that:

x_{i}^{p} = c y_{i}^{q}, ∀i = 1, 2,..., N
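As a sanity check, the following sketch (names ours) verifies this reversed form of Hölder's inequality for p = 1/2, whose conjugate exponent is q = -1, together with the equality condition:

```python
import numpy as np

def reverse_holder_sides(x, y, p):
    """For p < 1 (p != 0) and 1/p + 1/q = 1:
    sum x_i*y_i >= (sum x_i^p)^(1/p) * (sum y_i^q)^(1/q)."""
    q = p / (p - 1.0)  # conjugate exponent solving 1/p + 1/q = 1
    lhs = np.sum(x * y)
    rhs = np.sum(x ** p) ** (1.0 / p) * np.sum(y ** q) ** (1.0 / q)
    return lhs, rhs

rng = np.random.default_rng(1)
x = rng.random(5) + 0.1  # strictly positive entries
y = rng.random(5) + 0.1
lhs, rhs = reverse_holder_sides(x, y, 0.5)
assert lhs >= rhs  # the inequality reverses because p < 1

# Equality case: x_i^p = c * y_i^q; with p = 1/2, q = -1 this means y_i = x_i^(-1/2).
lhs_eq, rhs_eq = reverse_holder_sides(x, x ** -0.5, 0.5)
print(np.isclose(lhs_eq, rhs_eq))
```

Note the direction: for p > 1 Hölder's inequality bounds Σ x_{i} y_{i} from above, while for p < 1 the bound reverses, which is exactly the form needed in the proofs below.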
Setting:

in Eq. 15 and using Eq. 13, we get:

Multiplying both sides of Eq. 17 by:

and raising both sides to the power 1/R, we get:

Simplification for

as R>1, gives Eq. 14.

For 0<R<1, we can prove Eq. 14 along similar lines.

**Case II:** For α = 2,

then

with equality iff Q = P^{R}, i.e.:

**Proof:** In inequality Eq. 15 take:

we get:

From Eqs. 18 and 20, we have:

i.e.,

with equality iff

∀i = 1,2,..., N.

Raising both sides of Eq. 21 to the power 1/R and simplifying for

as R>1, gives Eq. 19.

For:

as 0<R<1, the same steps give Eq. 19, i.e., H_{R} (P)≤^{2}H_{R} (P; Q), R>0 (R≠1).

**Case III:** For α = 3,

then:

with equality iff

**Proof:** In inequality Eq. 15 take

we get:

with equality iff p_{i} = q_{i}, ∀i = 1, 2,..., N

From Eqs. 22 and 24, we have:

i.e.,

with equality iff p_{i} = q_{i}, ∀ i = 1, 2,..., N.

Raising both sides of Eq. 25 to the power 1/R and simplifying for:

as R>1, gives Eq. 23.

For:

as 0<R<1, the same steps give Eq. 23.

i.e., H_{R} (P)≤^{3}H_{R} (P; Q), R>0 (R≠1)

From Propositions 1, 2 and 3, we obtain the proof of Theorem 1.

**Remark:**

If R→1, Eq. 7 reduces to Eq. 5, i.e., H (P)≤H (P; Q), which is the Shannon inequality.

**CONCLUSION**

The measures H_{R} (P) and ^{α}H_{R} (P; Q) (α = 1, 2) are one-parameter generalizations of the Shannon entropy and of the Kerridge inaccuracy, respectively, both studied by Van der Lubbe (1981). The measures ^{α}H_{R} (P; Q) (α = 1, 2 and 3) are three different one-parameter generalizations of the Kerridge (1961) inaccuracy. Propositions 1, 2 and 3 give inequalities among these measures, which we call generalized Shannon inequalities.