Research Article
 

R-Norm Shannon-Gibbs Type Inequality



Satish Kumar and Arun Choudhary
 
ABSTRACT

In this study, we consider one-parametric generalizations of the measures H(P) and H(P; Q): the R-norm entropy and the R-norm inaccuracies. For the measure H(P; Q) we give three different kinds of generalizations. The Shannon-Gibbs type inequality is then generalized in three different ways, using Holder's inequality, for the R-norm information measure and the three different kinds of R-norm inaccuracy.


 
  How to cite this article:

Satish Kumar and Arun Choudhary, 2011. R-Norm Shannon-Gibbs Type Inequality. Journal of Applied Sciences, 11: 2866-2869.

DOI: 10.3923/jas.2011.2866.2869

URL: https://scialert.net/abstract/?doi=jas.2011.2866.2869
 
Received: April 30, 2011; Accepted: May 26, 2011; Published: July 02, 2011



INTRODUCTION

We consider the following set of positive real numbers, \mathbb{R}^{+} = \{R : R > 0, R \neq 1\}, and the set of all complete probability distributions,

\Delta_N = \left\{P = (p_1, p_2, \ldots, p_N) : p_i \geq 0, \ \sum_{i=1}^{N} p_i = 1\right\}, \quad N \geq 2

Boekee and Van der Lubbe (1980) studied the R-norm entropy of a distribution P, given by:

H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{N} p_i^{R}\right)^{1/R}\right], \quad R > 0 \ (R \neq 1)
(1)

Actually, the R-norm entropy (1) is a real function from Δ_N to ℝ⁺, where N ≥ 2. This measure is different from the entropies of Shannon (1948), Renyi (1961), Havrda and Charvat (1967) and Daroczy (1970). The most interesting property of this measure is that, as R → 1, the R-norm information measure (Eq. 1) approaches the Shannon (1948) entropy and, as R → ∞, H_R(P) → 1 − max p_i, i = 1, 2, ..., N.

Setting r = 1/R in Eq. 1, we get:

H^{r}(P) = \frac{1}{1-r}\left[1 - \left(\sum_{i=1}^{N} p_i^{1/r}\right)^{r}\right]
(2)

which is a measure mentioned by Arimoto (1971) as an example of a generalized class of information measures. It may be noted that Eq. 2 also approaches Shannon's entropy as r → 1.
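Both limiting properties, and the r = 1/R correspondence with Eq. 2, are easy to verify numerically. Below is a minimal sketch (Python with NumPy; the function names are ours, not from the paper), with the Shannon entropy taken in nats:

```python
import numpy as np

def r_norm_entropy(p, R):
    # R-norm entropy of Boekee and Van der Lubbe (1980), Eq. 1
    p = np.asarray(p, dtype=float)
    return (R / (R - 1.0)) * (1.0 - np.sum(p ** R) ** (1.0 / R))

def shannon_entropy(p):
    # Shannon entropy in nats
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

def arimoto_form(p, r):
    # Eq. 2, obtained from Eq. 1 by the substitution r = 1/R
    p = np.asarray(p, dtype=float)
    return (1.0 / (1.0 - r)) * (1.0 - np.sum(p ** (1.0 / r)) ** r)

P = [0.5, 0.3, 0.2]
print(r_norm_entropy(P, 1.001), shannon_entropy(P))    # R -> 1: tends to the Shannon entropy
print(r_norm_entropy(P, 400.0), 1.0 - max(P))          # R -> infinity: tends to 1 - max p_i
print(r_norm_entropy(P, 2.5), arimoto_form(P, 1/2.5))  # the two forms coincide for r = 1/R
```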

For P ∈ Δ_N, the Shannon (1948) measure of information is defined as:

H(P) = -\sum_{i=1}^{N} p_i \log p_i
(3)

The measure (Eq. 3) has been generalized by various authors and has found applications in various disciplines, such as economics, accounting, crime and physics.

For P, Q ∈ Δ_N, Kerridge (1961) introduced a quantity known as inaccuracy, defined as:

H(P; Q) = -\sum_{i=1}^{N} p_i \log q_i
(4)

There is a well-known relation between H(P) and H(P; Q), given by:

H(P) \leq H(P; Q)
(5)

Eq. 5 is known as the Shannon inequality, and its importance in coding theory is well known.
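As a quick numerical sanity check of Eq. 5 (a sketch, not part of the original argument; natural logarithms are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    p = rng.dirichlet(np.ones(5))
    q = rng.dirichlet(np.ones(5))
    # Shannon inequality (Eq. 5): H(P) <= H(P; Q), with equality only at P = Q
    assert -np.sum(p * np.log(p)) <= -np.sum(p * np.log(q)) + 1e-12
```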

In the literature of information theory, there are many approaches to extending Eq. 5 to other measures. Nath and Mittal (1973) extended relation (5) to the case of the entropy of type β.

Using the method of Nath and Mittal (1973), Van der Lubbe (1978) generalized Eq. 5 in the case of Renyi's entropy; on the other hand, the method of Campbell (1965) was used to generalize Eq. 5 in the case of the entropy of type β. Using these generalizations, coding theorems were proved by these authors for these measures.

The mathematical theory of information is usually interested in measuring quantities related to the concept of information. Shannon's (1948) fundamental concept of entropy has been used in different directions by different authors, such as Zheng et al. (2008), Haouas et al. (2008), Yan and Zheng (2009), Kumar and Choudhary (2011) and Wang (2011).

The objective of this study is to generalize Eq. 5 for the measure in Eq. 1 and for three different kinds of R-norm inaccuracies, with the help of Holder's inequality as given by Shisha (1967).

GENERALIZATION OF SHANNON INEQUALITY

R-Norm inaccuracies: The three different kinds of R-Norm inaccuracy measures are defined as:

{}^{\alpha}H_R(P; Q) = \frac{R}{R-1}\left[1 - g_{\alpha}(P; Q)\right], \quad R > 0 \ (R \neq 1)
(6)

α = 1, 2 and 3, where:

g_1(P; Q) = \left(\sum_{i=1}^{N} p_i q_i^{R-1}\right)^{1/R}, \quad g_2(P; Q) = \sum_{i=1}^{N} p_i q_i^{(R-1)/R}, \quad g_3(P; Q) = \left(\sum_{i=1}^{N} p_i q_i^{R-1}\right)\left(\sum_{i=1}^{N} q_i^{R}\right)^{(1-R)/R}

Now we are interested in extending the result of Eq. 5 in the following fashion:

H_R(P) \leq {}^{\alpha}H_R(P; Q)
(7)

where α = 1, 2 and 3, provided the following conditions hold (for α = 1, 2 and 3, respectively):

\sum_{i=1}^{N} q_i^{R} \leq \sum_{i=1}^{N} p_i q_i^{R-1}
(8)

\sum_{i=1}^{N} q_i \leq 1
(9)

q_i > 0, \quad i = 1, 2, \ldots, N
(10)

Equality in Eq. 7 holds if and only if P = Q, i.e., p_i = q_i for all i, for α = 1 and 3.

For α = 2, equality holds if and only if Q = P_R, where P_R is given as:

P_R = \left(\frac{p_1^{R}}{\sum_{i=1}^{N} p_i^{R}}, \frac{p_2^{R}}{\sum_{i=1}^{N} p_i^{R}}, \ldots, \frac{p_N^{R}}{\sum_{i=1}^{N} p_i^{R}}\right)
(11)

Since αH_R(P; P) ≠ H_R(P) in general, we will not interpret Eq. 6 as a measure of inaccuracy. But αH_R(P; Q) is a generalization of the measure of inaccuracy defined in Eq. 4. In spite of the fact that αH_R(P; Q) is not a measure of inaccuracy in its usual sense, its study is justified because it leads to meaningful new measures of length. In the following theorem, we determine a relation between Eq. 1 and Eq. 6 of the type of Eq. 5.

Since Eq. 6 is not a measure of inaccuracy in its usual sense, we call the generalized relation a pseudo-generalization of the Shannon inequality for the R-norm entropy.
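For the numerical spot checks accompanying the propositions below, the following sketch implements Eq. 1, Eq. 6 and Eq. 11 (Python with NumPy; the helper names h_r, h_r_inaccuracy and escort are ours, not from the paper):

```python
import numpy as np

def h_r(p, R):
    # R-norm entropy H_R(P), Eq. 1
    p = np.asarray(p, dtype=float)
    return (R / (R - 1.0)) * (1.0 - np.sum(p ** R) ** (1.0 / R))

def h_r_inaccuracy(p, q, R, alpha):
    # The three R-norm inaccuracies of Eq. 6
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if alpha == 1:
        g = np.sum(p * q ** (R - 1.0)) ** (1.0 / R)
    elif alpha == 2:
        g = np.sum(p * q ** ((R - 1.0) / R))
    else:  # alpha == 3
        g = np.sum(p * q ** (R - 1.0)) * np.sum(q ** R) ** ((1.0 - R) / R)
    return (R / (R - 1.0)) * (1.0 - g)

def escort(p, R):
    # The distribution P_R of Eq. 11
    p = np.asarray(p, dtype=float)
    return p ** R / np.sum(p ** R)
```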

Theorem 1: We have:

H_R(P) \leq {}^{\alpha}H_R(P; Q), \quad \alpha = 1, 2 \text{ and } 3
(12)

i.e., Eq. 7 holds under the conditions Eq. 8, 9 and 10.

Proof: For different values of α = 1, 2 and 3, we shall prove Eq. 12.

Case I: For α = 1,

Proposition 1: If:

\sum_{i=1}^{N} q_i^{R} \leq \sum_{i=1}^{N} p_i q_i^{R-1}
(13)

then

H_R(P) \leq {}^{1}H_R(P; Q)
(14)

with equality iff P = Q, i.e., q_i = p_i for all i.

Proof: We use Holder's inequality as given by Shisha (1967):

\sum_{i=1}^{N} x_i y_i \geq \left(\sum_{i=1}^{N} x_i^{p}\right)^{1/p} \left(\sum_{i=1}^{N} y_i^{q}\right)^{1/q}
(15)

for all x_i ≥ 0, y_i ≥ 0, i = 1, 2, ..., N, where p < 1 (p ≠ 0) and p^{-1} + q^{-1} = 1, with equality if and only if there exists a positive number c such that:

x_i^{p} = c \, y_i^{q}, \quad i = 1, 2, \ldots, N
(16)

Setting:

x_i = p_i^{R} q_i^{R(R-1)}, \quad y_i = q_i^{R(1-R)}, \quad p = \frac{1}{R}, \quad q = \frac{1}{1-R}

in Eq. 15 and using Eq. 13 (for R > 1 we have 1 - R < 0, so Eq. 13 gives \left(\sum_{i=1}^{N} q_i^{R}\right)^{1-R} \geq \left(\sum_{i=1}^{N} p_i q_i^{R-1}\right)^{1-R}), we get:

\sum_{i=1}^{N} p_i^{R} \geq \sum_{i=1}^{N} p_i q_i^{R-1}
(17)

Raising both sides of Eq. 17 to the power 1/R, we get:

\left(\sum_{i=1}^{N} p_i^{R}\right)^{1/R} \geq \left(\sum_{i=1}^{N} p_i q_i^{R-1}\right)^{1/R}

Simplification with the factor

\frac{R}{R-1}

which is positive as R > 1, gives Eq. 14.

For 0 < R < 1, Eq. 14 can be proved along similar lines.
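Proposition 1 admits a simple empirical spot check: sample pairs (P, Q), keep those satisfying condition (13) and verify inequality (14). A minimal sketch, reusing the helpers defined after Eq. 11:

```python
import numpy as np
# h_r and h_r_inaccuracy are the helpers sketched after Eq. 11

rng = np.random.default_rng(1)
for R in (0.5, 2.5):
    for _ in range(5000):
        p = rng.dirichlet(np.ones(4))
        q = rng.dirichlet(np.ones(4))
        if np.sum(q ** R) <= np.sum(p * q ** (R - 1.0)):           # condition (13)
            assert h_r(p, R) <= h_r_inaccuracy(p, q, R, 1) + 1e-8  # inequality (14)
```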

Case II: For α = 2,

Proposition 2: If:

\sum_{i=1}^{N} q_i \leq 1
(18)

then

H_R(P) \leq {}^{2}H_R(P; Q)
(19)

with equality iff Q = P_R, i.e.:

q_i = \frac{p_i^{R}}{\sum_{j=1}^{N} p_j^{R}}, \quad i = 1, 2, \ldots, N

Proof: In inequality Eq. 15 take:

x_i = p_i^{R} q_i^{R-1}, \quad y_i = q_i^{1-R}, \quad p = \frac{1}{R}, \quad q = \frac{1}{1-R}

we get:

\sum_{i=1}^{N} p_i^{R} \geq \left(\sum_{i=1}^{N} p_i q_i^{(R-1)/R}\right)^{R} \left(\sum_{i=1}^{N} q_i\right)^{1-R}
(20)

From Eq. 18 and 20 (for R > 1 we have 1 - R < 0, so Eq. 18 gives \left(\sum_{i=1}^{N} q_i\right)^{1-R} \geq 1), we have:

\sum_{i=1}^{N} p_i^{R} \geq \left(\sum_{i=1}^{N} p_i q_i^{(R-1)/R}\right)^{R} \left(\sum_{i=1}^{N} q_i\right)^{1-R} \geq \left(\sum_{i=1}^{N} p_i q_i^{(R-1)/R}\right)^{R}

i.e.,

\sum_{i=1}^{N} p_i^{R} \geq \left(\sum_{i=1}^{N} p_i q_i^{(R-1)/R}\right)^{R}
(21)

with equality iff

q_i = \frac{p_i^{R}}{\sum_{j=1}^{N} p_j^{R}}

∀ i = 1, 2, ..., N.

Raising both sides of Eq. 21 to the power 1/R and simplifying with the factor R/(R-1), which is positive as R > 1, gives Eq. 19.

For 0 < R < 1, a similar choice of x_i and y_i in Eq. 15 gives Eq. 19, i.e., H_R(P) ≤ 2H_R(P; Q), R > 0 (≠ 1).
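The equality case of Proposition 2 is easy to confirm numerically: for Q equal to the distribution P_R of Eq. 11, the inaccuracy 2H_R(P; Q) collapses to H_R(P). A sketch, again reusing the helpers defined after Eq. 11:

```python
import numpy as np
# h_r, h_r_inaccuracy and escort are the helpers sketched after Eq. 11

rng = np.random.default_rng(2)
R = 3.0
p = rng.dirichlet(np.ones(4))
q = rng.dirichlet(np.ones(4))
assert h_r(p, R) <= h_r_inaccuracy(p, q, R, 2) + 1e-12               # inequality (19)
assert np.isclose(h_r(p, R), h_r_inaccuracy(p, escort(p, R), R, 2))  # equality at Q = P_R
```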

Case III: For α = 3,

Proposition 3: If:

q_i > 0, \quad i = 1, 2, \ldots, N
(22)

then:

H_R(P) \leq {}^{3}H_R(P; Q)
(23)

with equality iff

p_i = q_i, \quad i = 1, 2, \ldots, N

Proof: In inequality Eq. 15 take:

x_i = p_i^{R} q_i^{R(R-1)}, \quad y_i = q_i^{R(1-R)}, \quad p = \frac{1}{R}, \quad q = \frac{1}{1-R}

we get:

\sum_{i=1}^{N} p_i^{R} \geq \left(\sum_{i=1}^{N} p_i q_i^{R-1}\right)^{R} \left(\sum_{i=1}^{N} q_i^{R}\right)^{1-R}
(24)

with equality iff pi = qi, ∀i = 1, 2,..., N

From Eq. 22 and 24 (Eq. 22 ensures that all the sums involved are positive), we have:

\sum_{i=1}^{N} p_i^{R} \geq \left(\sum_{i=1}^{N} p_i q_i^{R-1}\right)^{R} \left(\sum_{i=1}^{N} q_i^{R}\right)^{1-R}

i.e.,

\sum_{i=1}^{N} p_i^{R} \geq \left[\left(\sum_{i=1}^{N} p_i q_i^{R-1}\right) \left(\sum_{i=1}^{N} q_i^{R}\right)^{(1-R)/R}\right]^{R}
(25)

with equality iff pi = qi, ∀ i = 1, 2,..., N.

Raising both sides of Eq. 25 to the power 1/R and simplifying with the factor R/(R-1), which is positive as R > 1, gives Eq. 23.

For 0 < R < 1, a similar choice of x_i and y_i in Eq. 15 gives Eq. 23, i.e., H_R(P) ≤ 3H_R(P; Q), R > 0 (≠ 1).
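Proposition 3 can be spot-checked in the same way; here equality occurs at Q = P rather than at the escort distribution P_R. A sketch, reusing the same helpers:

```python
import numpy as np
# h_r and h_r_inaccuracy are the helpers sketched after Eq. 11

rng = np.random.default_rng(3)
for R in (0.5, 2.5):
    for _ in range(5000):
        p = rng.dirichlet(np.ones(4))
        q = rng.dirichlet(np.ones(4))
        assert h_r(p, R) <= h_r_inaccuracy(p, q, R, 3) + 1e-8  # inequality (23)
    assert np.isclose(h_r(p, R), h_r_inaccuracy(p, p, R, 3))   # equality at P = Q
```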

From Propositions 1, 2 and 3, we get the proof of Theorem 1.

Remark: If R → 1, Eq. 7 becomes Eq. 5, i.e., H(P) ≤ H(P; Q), which is the Shannon inequality.
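Numerically, the remark corresponds to both sides of Eq. 7 converging to the two sides of Eq. 5 as R → 1; for example, for α = 1 (a sketch, reusing the same helpers):

```python
import numpy as np
# h_r and h_r_inaccuracy are the helpers sketched after Eq. 11

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
R = 1.0 + 1e-4
print(h_r(p, R), -np.sum(p * np.log(p)))                   # both close to H(P)
print(h_r_inaccuracy(p, q, R, 1), -np.sum(p * np.log(q)))  # both close to H(P; Q)
```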

CONCLUSION

The measures H_R(P) and αH_R(P; Q) (α = 1, 2) are one-parametric generalizations of the Shannon entropy and of the Kerridge inaccuracy, respectively, both studied by Van der Lubbe (1981). The measures αH_R(P; Q) (α = 1, 2 and 3) are three different one-parametric generalizations of the Kerridge (1961) inaccuracy. Propositions 1, 2 and 3 give inequalities among these measures, which we call generalized Shannon inequalities.

REFERENCES
1:  Boekee, D.E. and J.C.A. van der Lubbe, 1980. The R-norm information measure. Inform. Control, 45: 136-155.

2:  Campbell, L.L., 1965. A coding theorem and Renyi's entropy. Inform. Control, 8: 423-429.

3:  Daroczy, Z., 1970. Generalized information functions. Inform. Control, 16: 36-51.

4:  Haouas, A., B. Djebbar and R. Mekki, 2008. A topological representation of information: A heuristic study. J. Applied Sci., 8: 3743-3747.

5:  Havrda, J. and F. Charvat, 1967. Quantification method of classification processes: Concept of structural α-entropy. Kybernetika, 3: 30-35.

6:  Kerridge, D.F., 1961. Inaccuracy and inference. J. R. Stat. Soc. Ser. B, 23: 184-194.

7:  Kumar, S. and A. Choudhary, 2011. A coding theorem for the information measure of order α and of type β. Asian J. Math. Stat., 4: 81-89.

8:  Nath, P. and D.P. Mittal, 1973. A generalization of Shannon's inequality and its application in coding theory. Inform. Control, 23: 439-445.

9:  Renyi, A., 1961. On measures of entropy and information. Proc. 4th Berkeley Symp. Math. Stat. Prob., 1: 547-561.

10:  Shannon, C.E., 1948. A mathematical theory of communication. Bell Syst. Tech. J., 27: 379-423.

11:  Shisha, O., 1967. Inequalities. Academic Press, New York.

12:  Van der Lubbe, J.C.A., 1978. On certain coding theorems for the information of order α and of type β. Transactions of the 8th Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, December 1978, Prague, pp: 253-266.

13:  Van der Lubbe, J.C.A., 1981. A generalized probabilistic theory of the measurement of certainty and information. Ph.D. Thesis, Delft University of Technology, Netherlands.

14:  Wang, Y., 2011. Generalized information theory: A review and outlook. Inform. Technol. J., 10: 461-469.

15:  Yan, R. and Q. Zheng, 2009. Using Renyi cross entropy to analyze traffic matrix and detect DDoS attacks. Inform. Technol. J., 8: 1180-1188.

16:  Zheng, Y., Z. Qin, L. Shao and X. Hou, 2008. A novel objective image quality metric for image fusion based on Renyi entropy. Inform. Technol. J., 7: 930-935.

17:  Arimoto, S., 1971. Information-theoretical considerations on estimation problems. Inform. Control, 19: 181-194.
