**INTRODUCTION**

Throughout the paper **N** denotes the set of natural numbers and for
N ∈ **N** we set:

Δ_{N} = {P = (p_{1},...,p_{N}) : p_{i} ≥ 0 for i = 1,...,N and p_{1}+...+p_{N} = 1}

When there is no risk of misunderstanding we write P ∈ Δ_{N}
instead of (p_{1},...,p_{N}) ∈ Δ_{N}.

For N ∈ **N**, the well-known Shannon entropy is defined by:

H (P) = -∑_{i=1}^{N} p_{i} log (p_{i})

where the convention 0 log (0) = 0 is adopted (Shannon,
1948).
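As a quick numerical illustration of this definition and the 0 log (0) = 0 convention, a minimal sketch (the helper name is ours, not from the text):

```python
import math

def shannon_entropy(p, base=2):
    """Shannon entropy H(P) = -sum p_i log(p_i); terms with p_i = 0
    are skipped, which implements the convention 0 log(0) = 0."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# Uniform distribution on 4 symbols: H = log_2(4) = 2
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ≈ 2.0
# A zero-probability symbol contributes nothing
print(shannon_entropy([0.5, 0.5, 0.0]))  # ≈ 1.0
```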

Throughout this study, ∑ will stand for ∑_{i=1}^{N}
unless otherwise stated, and logarithms are taken to the base D (D > 1).

Let a finite set of N input symbols:

X = {x_{1}, x_{2},...,x_{N}}

be encoded using an alphabet of D symbols. It has been shown by Feinstein
(1956) that there is a uniquely decipherable code with lengths n_{1},
n_{2},...,n_{N} if and only if the Kraft inequality holds, that
is:

∑ D^{-n_{i}} ≤ 1          (2)

where D is the size of the code alphabet.

Furthermore, if:

L = ∑ p_{i} n_{i}          (3)

is the average codeword length, then for a code satisfying Eq.
2, the inequality:

L ≥ H (P)

is also fulfilled, and equality holds if and only if:

n_{i} = -log_{D} (p_{i}), i = 1, 2,...,N

and, by suitably encoding long sequences of source symbols into words, the average length
per symbol can be made arbitrarily close to H (P) (Feinstein, 1956).
This is Shannon's noiseless coding theorem.
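The Kraft inequality and the bound L ≥ H (P) can be checked numerically; the sketch below (function names are ours) uses a binary prefix code whose lengths satisfy n_{i} = -log_{2} p_{i}, so the bound holds with equality:

```python
import math

def kraft_sum(lengths, D=2):
    """Left-hand side of the Kraft inequality: sum over i of D^(-n_i)."""
    return sum(D ** -n for n in lengths)

def average_length(p, lengths):
    """Average codeword length L = sum p_i n_i."""
    return sum(pi * ni for pi, ni in zip(p, lengths))

def shannon_entropy(p, base=2):
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# Binary prefix code 0, 10, 110, 111 for P = (1/2, 1/4, 1/8, 1/8)
p = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]

print(kraft_sum(lengths))       # 1.0: Kraft inequality holds (with equality)
print(average_length(p, lengths), shannon_entropy(p))  # both ≈ 1.75, so L = H(P)
```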

By considering Renyi's entropy (Renyi, 1961), a coding
theorem analogous to the above noiseless coding theorem was established
by Campbell (1965), who obtained bounds for his mean codeword length
in terms of the Renyi entropy of order α:

H_{α} (P) = (1/(1-α)) log_{D} (∑ p_{i}^{α}), α > 0, α ≠ 1

Kieffer (1979) defined a class of rules and showed that H_{α}
(P) is the best decision rule for deciding which of two sources can be coded
with smaller expected cost for sequences of length n as n → ∞, where the
cost of encoding a sequence is assumed to be a function of its length only. Further,
Jelinek (1980) showed that coding with respect
to Campbell's (1965) mean length is useful in minimizing
the problem of buffer overflow, which occurs when source symbols are produced
at a fixed rate and the codewords are stored temporarily in a finite buffer.
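Campbell's theorem bounds his exponentiated mean length L(t) = (1/t) log_{D} ∑ p_{i} D^{t n_{i}} below by H_{α} (P) with α = 1/(1+t). The following sketch checks this inequality numerically (function names and the sample code are illustrative assumptions, not taken from the source):

```python
import math

def renyi_entropy(p, alpha, D=2):
    """Renyi entropy of order alpha: (1/(1-alpha)) log_D(sum p_i^alpha)."""
    return math.log(sum(pi ** alpha for pi in p), D) / (1 - alpha)

def campbell_length(p, lengths, t, D=2):
    """Campbell's exponentiated mean codeword length of order t:
    (1/t) log_D( sum p_i D^(t n_i) )."""
    return math.log(sum(pi * D ** (t * ni) for pi, ni in zip(p, lengths)), D) / t

p = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]   # a binary prefix code satisfying the Kraft inequality
t = 1.0
alpha = 1 / (1 + t)

# Campbell's bound: L(t) >= H_alpha(P) with alpha = 1/(1+t)
print(campbell_length(p, lengths, t))  # ≈ 2.0
print(renyi_entropy(p, alpha))         # ≈ 1.87
```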

Hooda and Bhaker (1997) considered the following generalization
of Campbell's mean length:

and proved:

under the condition:

where,
is the generalized entropy of order α = 1/(1+t) and type β studied by Aczel
and Daroczy (1963) and Kapur (1967). It may be seen
that the mean codeword length of Eq. 3 has thus been generalized parametrically
and its bounds studied in terms of generalized measures of entropy.
Here, we give another generalization of Eq. 3 and study its
bounds in terms of the generalized entropy of order α for the incomplete power
probability distribution p^{β}.

Generalized coding theorems considering different information measures under
the condition of unique decipherability were investigated by Khan
*et al*. (2005), Parkash and Sharma (2004), Singh
*et al*. (2003), Hooda and Bhaker (1997) and Pereira
and Dial (1984).

The mathematical theory of information is usually interested in measuring quantities
related to the concept of information. Shannon's (1948)
fundamental concept of entropy has been used in different directions by
different authors, such as Stepak (2005), Al-Daoud
(2007), Haouas *et al*. (2008), Rupsys
and Petrauskas (2009), Al-Nasser *et al*. (2010)
and Kumar and Choudhary (2011).

The objective of this paper is to study a coding theorem by considering a new
information measure depending on two parameters. The motivation of this study,
among others, is that this quantity generalizes some information measures already
existing in the literature, such as the entropies discussed by Arndt (2001) and
the Renyi (1961) entropy, which is used in physics.

**CODING THEOREM**

**Definition:** Let N ∈ **N** be arbitrarily fixed and let α, β > 0,
α ≠ 1 be given real numbers. Then the information measure
: Δ_{N} → R is defined by:

It was also studied by Roy (1976).

**REMARKS**

• When β = 1, the information measure reduces to the Renyi (1961) entropy

• When β = 1 and α → 1, the information measure reduces to the Shannon (1948) entropy

• When α → 1, the information measure is the entropy of the β-power distribution, which was considered e.g. in Mitter and Mathur (1972)
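The displayed formula for the two-parameter measure is not reproduced in this copy of the text. The sketch below assumes the form (1/(1-α)) log_{D} (∑ p_{i}^{αβ} / ∑ p_{i}^{β}) — an assumption on our part, chosen because it is consistent with the first two remarks above — and checks those reductions numerically:

```python
import math

def generalized_entropy(p, alpha, beta, D=2):
    """ASSUMED form of the two-parameter measure (alpha != 1):
    (1/(1-alpha)) * log_D( sum p_i^(alpha*beta) / sum p_i^beta )."""
    num = sum(pi ** (alpha * beta) for pi in p)
    den = sum(pi ** beta for pi in p)
    return math.log(num / den, D) / (1 - alpha)

def renyi_entropy(p, alpha, D=2):
    return math.log(sum(pi ** alpha for pi in p), D) / (1 - alpha)

def shannon_entropy(p, D=2):
    return -sum(pi * math.log(pi, D) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]

# beta = 1: the measure coincides with the Renyi entropy of order alpha
print(abs(generalized_entropy(p, 0.5, 1.0) - renyi_entropy(p, 0.5)) < 1e-9)  # True

# beta = 1, alpha -> 1: the measure approaches the Shannon entropy
print(abs(generalized_entropy(p, 1 + 1e-8, 1.0) - shannon_entropy(p)) < 1e-4)  # True
```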

**Definition:** Let N ∈ **N** and let α, β > 0, α ≠
1 be arbitrarily fixed. Then the mean length
corresponding to the generalized information measure
is given by the formula:

where (p_{1},..., p_{N}) ∈ Δ_{N} and D, n_{1}, n_{2},...,n_{N}
are positive integers such that:

We have used the condition Eq. 8 to find the bounds.
It may be seen that for β = 1 the inequality Eq. 8 reduces
to Eq. 2.

We establish a result that, in a sense, provides a characterization of
under the condition of Eq. 8.

**Measure of average codeword length of order α for incomplete power
probability distribution p^{β}**

**Definition:** Let us define the average code length:

where f is a non-constant, positive valued, continuous function defined on
(0, 1] and φ is a strictly monotonically increasing and continuous real valued
function defined on [1, ∞) such that φ^{-1} exists. From
Eq. 9, it is clear that we are defining the average code length
as a most general quasilinear mean value rather than an ordinary average.

The basic requirement which any measure of average codeword length should
satisfy is that it should be translative for all positive integers m ∈ N^{+},
where N^{+} denotes the set of positive integers. In other words, if
the length of each codeword is increased by a positive integer m ∈ N^{+},
by attaching to its right a sequence of length m constructed from letters
of the code alphabet A, then the average code length must increase by m. Thus,
we get the functional equation:

Equation 10 is known as the translativity equation (Aczel,
1974); the following theorem can be proved easily.

**Theorem 1:** The only quasilinear measures Z (P, ,
f, φ) of average code length which are translative ∀ m ∈ N^{+}
are:

where φ_{1} (x) = bx + a, b > 0 and φ_{α}
(x) = bD^{(α-1)x} + a, D > 1, b > 0.
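Since the explicit formulas of Eq. 11 and 12 are not reproduced above, the following sketch only illustrates the translativity property Eq. 10 for the exponential generator φ_{α} named in the theorem, with weights standing in for the normalized f (p_{i}); all names and sample values are ours:

```python
import math

def quasilinear_mean(weights, lengths, alpha, D=2):
    """Quasilinear mean with generator phi(x) = D^((alpha-1)x):
    phi^{-1}( sum w_i phi(n_i) ), where the weights sum to 1."""
    s = sum(w * D ** ((alpha - 1) * n) for w, n in zip(weights, lengths))
    return math.log(s, D) / (alpha - 1)

w = [0.5, 0.25, 0.125, 0.125]   # normalized weights f(p_i)/sum f(p_j)
n = [1, 2, 3, 3]
m = 4                            # append m code letters to every codeword

z = quasilinear_mean(w, n, alpha=0.5)
z_shifted = quasilinear_mean(w, [ni + m for ni in n], alpha=0.5)
print(abs(z_shifted - (z + m)) < 1e-9)  # True: the mean is translative
```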

Now, our object is to connect Eq. 12 to the entropy of order
α for the incomplete power distribution P^{β}. We write Eq.
12 as:

We now restrict ourselves to α > 0 (α ≠ 1) and choose f (p_{i}) in such
a way that the last term on the R.H.S. of Eq. 13 becomes
Eq. 6:

Equation 14 gives rise to the functional equation:

The only non-constant continuous solutions of Eq. 15 (refer
also to Aczel, 1966) are of the form:

Hence:

where:

**REMARKS**

• When β = 1, Eq. 18 reduces to the average codeword length studied by Nath (1975)

• When β = 1 and α → 1, Eq. 18 reduces to the average codeword length studied by Shannon (1948)

• When α → 1 and β ≠ 1, Eq. 18 reduces to:

In the following theorem, we give a relation between
and (p_{1},...,
p_{N}).

**Theorem 2:** Let there be a discrete memoryless source emitting messages
x_{1}, x_{2},...,x_{N} with probabilities P = (p_{1},
p_{2},...,p_{N}), ∑ p_{i} = 1. If the messages
are encoded in a uniquely decipherable way and n_{i} denotes the length
of the codeword assigned to the message x_{i}, then:

holds, with equality iff the n_{i} are of the form:
Moreover:

**Proof:** By Hölder's inequality (Shisha, 1967),
we have:

∑ x_{i} y_{i} ≥ (∑ x_{i}^{p})^{1/p} (∑ y_{i}^{q})^{1/q}          (22)

where p^{-1}+q^{-1} = 1; p (≠ 0) < 1, q < 0, or q
(≠ 0) < 1, p < 0; x_{i}, y_{i} > 0 for each i.
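As a quick numerical sanity check of this reverse form of Hölder's inequality, the sketch below uses p = 1/2 with its conjugate q = -1 (the sample values are arbitrary):

```python
# Reverse Holder inequality: for p < 1 (p != 0) and conjugate q with
# 1/p + 1/q = 1 (so q < 0), and positive x_i, y_i:
#   sum x_i y_i >= (sum x_i^p)^(1/p) * (sum y_i^q)^(1/q)
p_exp = 0.5
q_exp = 1 / (1 - 1 / p_exp)   # conjugate exponent: q = -1 here

x = [0.3, 1.2, 2.5]
y = [0.8, 0.4, 1.6]

lhs = sum(xi * yi for xi, yi in zip(x, y))
rhs = (sum(xi ** p_exp for xi in x)) ** (1 / p_exp) * \
      (sum(yi ** q_exp for yi in y)) ** (1 / q_exp)
print(lhs >= rhs)  # True
```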

Making the substitutions:

in Eq. 22, we get:

Using the condition Eq. 8, we get:

which implies:

We obtain the result Eq. 19 after simplification, since 0 <
α < 1, i.e.:

Similarly, we can prove Eq. 19 for α > 1.

It can be proved that there is equality in Eq. 19 if and
only if:

or:

We choose the codeword lengths n_{i} as integers satisfying:

From the left inequality of Eq. 25, we have:

taking the sum over i, we get the generalized inequality Eq. 8,
so there exists a generalized personal code with lengths n_{i}.

Let 0 < α < 1; then Eq. 25 can be written as:

Multiplying Eq. 27 by
throughout, summing over i and simplifying with the exponent 1/(α-1), as 0 < α < 1,
we obtain the result Eq. 21.

Similarly, we can prove Eq. 21 for α > 1.