INTRODUCTION
A quantum computer is any computational device that makes direct use of distinctively quantum mechanical phenomena. In a quantum computer, data is carried by qubits (quantum bits). There has been much recent interest in the subject of quantum information processing. Quantum information is a natural generalization of classical information. It is based on quantum mechanics, a well-tested scientific theory. Although quantum computation and communication are still in their infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Research in both theoretical and practical areas continues at a frantic pace and many national government and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes (Panthong et al., 2005). D-Wave Systems demonstrated on February 13 and 15, 2007 what is claimed to be the world's first commercial quantum computer, using 16 qubits (www.dwavesys.com).
Quantum information differs from classical information in several respects: states can be teleported, states cannot generally be read or duplicated without disturbance (the no-cloning theorem), one state can exist in a superposition of all possible states at once and quantum physics predicts statistical correlations for measurements on two-particle entangled systems. The ability to manipulate quantum information enables us to perform tasks that would be unachievable in a classical context, such as unconditionally secure transmission of information, quantum authentication, quantum digital signatures and solving hard problems in polynomial time (Al-Daoud, 2007).
It may be very advantageous to decrease, where possible by compression methods, the number of qubits used for quantum communication and storage. This study introduces a new compression method that does not use the statistical distribution of the given sequence of qubits, in analogy to the adaptive Huffman code.
CLASSICAL LOSSLESS COMPRESSION
Information theory is generally considered to have been founded in 1948 by Claude Shannon in his seminal work, A Mathematical Theory of Communication, in which he established the two core results of classical information theory. The two central problems he solved were: how much a message can be compressed and the rate at which information can be communicated reliably over a noisy channel. Both problems concern redundancy. Shannon introduced a new entropy definition in the theory of information, of the form (Shannon, 1948):

H(X) = -Σ_{i=1}^{n} p_i log₂ p_i

where:

p_i = the probability of the i-th symbol of the source X
Applications of fundamental topics of information theory include lossless data compression (e.g., ZIP files), lossy data compression (e.g., MP3s, MPEG and JPEG) and channel coding (e.g., for DSL lines).
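As a small illustration (a Python sketch of our own, not part of the original presentation), the entropy gives the average number of bits per symbol that any lossless code needs for a given distribution:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum(p_i * log2(p_i)): average bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A biased 4-symbol source needs fewer bits per symbol than log2(4) = 2:
print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75
```

So any lossless code for this source must use at least 1.75 bits per symbol on average.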
The above entropy definition can be used to determine the theoretical lossless compression lower bound, or the compression rate. The compression rate is the ratio between the length of an uncompressed string and the length of the compressed (binary) string. There are two types of universal data compression in the classical domain: the first type is two-pass compression algorithms such as Run-Length Encoding, Huffman coding and Arithmetic coding; the second type is one-pass compression algorithms such as Adaptive Huffman coding, LZ77 and LZ78. Ziv and Lempel present a simple linear-time lossless compression algorithm whose asymptotic compression rate approaches the source's entropy; that is, it allows a string of length n to be losslessly compressed to a bit string of length asymptotically approaching H(p)·n for large n. In the first pass, they use a parsing scheme to encode the source string into unique prefixes.
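The parsing into unique prefixes can be sketched classically. The following LZ78-style parser (an illustrative sketch; the function name and the (index, symbol) output format are our choices) splits a string into phrases, each a previously seen phrase extended by one new symbol:

```python
def lz78_parse(s):
    """Parse s into (phrase_index, new_symbol) pairs, LZ78 style."""
    dictionary = {"": 0}      # phrase -> index; index 0 is the empty phrase
    phrases = []
    current = ""
    for ch in s:
        if current + ch in dictionary:
            current += ch     # keep extending a known phrase
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:               # flush a trailing known phrase
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

print(lz78_parse("aababc"))  # [(0, 'a'), (1, 'b'), (2, 'c')]
```

Each emitted pair references an earlier phrase, which is what lets the decoder rebuild the dictionary on the fly.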
QUANTUM LOSSLESS COMPRESSION
Quantum lossless compression is one of the important directions of quantum information processing, and it starts from the thermodynamic entropy. Gibbs defined the thermodynamic entropy S, after earlier work by Boltzmann, as follows:

S = -k_B Σ_i p_i ln p_i

Assume that the underlying ensemble is Σ = {P, X}, where X is the set of all symbols X = {|Ψ_1⟩, |Ψ_2⟩,…,|Ψ_n⟩} and P is the set of corresponding probabilities. The Gibbs entropy then translates over almost unchanged into the world of quantum physics to give the von Neumann entropy formula (Jozsa et al., 1998):

S(ρ) = -Tr(ρ log₂ ρ)

where:

ρ = the density matrix of the quantum mechanical system, defined as follows:

ρ = Σ_i p_i |Ψ_i⟩⟨Ψ_i|
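Numerically, S(ρ) is obtained from the eigenvalues of ρ. The following NumPy sketch (function and variable names are ours) computes it for an equal mixture of the non-orthogonal states |0⟩ and |+⟩, whose entropy is strictly below 1 bit:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed via the eigenvalues of rho."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]            # 0 * log 0 = 0 by convention
    return float(-np.sum(eigs * np.log2(eigs)))

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)
print(von_neumann_entropy(rho))  # about 0.601 bits, less than 1 bit
```

Because the two states overlap, the ensemble carries less than one bit of quantum information per state, which is exactly what makes compression below one qubit per state possible.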
Benjamin Schumacher is a US theoretical physicist, working mostly in the field of quantum information theory. He discovered a way of interpreting quantum states as information. He came up with a way of compressing the information in a state and storing the information in a smaller number of states. This is now known as Schumacher compression. This was the quantum analog of Shannon's noiseless coding theorem and it helped to start the field known as quantum information theory.
Schumacher's quantum lossless compression algorithm can be described as follows:

• Select a typical sub-message |Ψ_typ⟩ and ignore the rest of the message, where the typical sub-messages lie in the subspace spanned by {|Ψ_1⟩, |Ψ_2⟩,…,|Ψ_v⟩} and v = 2^{nS}.

• Apply a unitary change of basis U that takes |Ψ_typ⟩ to a state of the form:

U|Ψ_typ⟩ = |Ψ_comp⟩|00…0⟩

• Send |Ψ_comp⟩

Schumacher decompression can be done by appending the zeros to |Ψ_comp⟩ and applying U^{-1}. Moreover, Schumacher proves that the number of compressed qubits equals m(S+δ), where m is the length of the typical sub-message (Schumacher, 1995).
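As a small numeric illustration of the compressed size m(S+δ) (the helper below is hypothetical, not from the source):

```python
import math

def schumacher_qubits(m, S, delta=0.01):
    """Qubits for a block of m source states with von Neumann entropy S."""
    return math.ceil(m * (S + delta))

# With S = 0.6 bits per state, a block of 1000 states fits in ~610 qubits
print(schumacher_qubits(1000, 0.6))  # 610
```

The slack δ controls how much of the atypical subspace is sacrificed; as the block length m grows, δ can be taken smaller.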
Braunstein and others introduce a quantum analog of Huffman coding by using divide and conquer. Firstly, they divide the messages into pairs and apply a merging procedure to each pair. The merging effectively reduces the total number of messages from 2^r to 2^{r-1}. This process can be repeated; therefore, after r applications of the merging procedure, we obtain a single tape containing all the messages (in addition to the various-length tapes containing the length information). Let us introduce a message tape; for simplicity, we simply denote |0…0 h_i⟩ by |h_i⟩, etc.
In general, at the end the encoder obtains a single merged message tape. The encoder truncates the message tape: he keeps the first N(L+δ) qubits of the message tape (Braunstein et al., 2002).
Bostroem and Felbinger (2002) develop a general framework for variable-length quantum messages in close analogy to the classical case. They show that the lossless compression of an ensemble of messages is bounded from below by its von Neumann entropy and that it is possible to reduce the number of qubits passing through a quantum channel even below the von Neumann entropy by adding a classical side channel. They give an explicit communication protocol that realizes lossless and instantaneous quantum data compression.
Bennett gave a constructive method for performing Schumacher compression. He observed that Schumacher compression can be done by a unitary mapping to a basis in which the density matrix ρ is diagonal, followed by a certain combinatorial permutation. We can perform this permutation by ordering the basis states first by the number of ones in the binary expansion of the bits (from smallest to largest) and then refining this order by a lexical sort of the binary expansion of the bits (Reif and Chakraborty, 2007).
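Bennett's ordering is easy to make concrete (a short sketch, names ours): sort the n-qubit basis states by their number of ones, breaking ties lexicographically:

```python
def bennett_order(n):
    """Order n-qubit basis states by popcount, then lexicographically."""
    states = [format(i, "0{}b".format(n)) for i in range(2 ** n)]
    return sorted(states, key=lambda s: (s.count("1"), s))

print(bennett_order(3))
# ['000', '001', '010', '100', '011', '101', '110', '111']
```

Typical states (those with few ones, for a source biased toward |0⟩) come first in this order, which is what concentrates them at the start of the compressed register.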
THE PROPOSED ALGORITHM
Assume that Alice would like to send a stream of compressed symbols (characters) to Bob through a quantum channel. The symbols before compression can be written as M = |x_1⟩|x_2⟩…|x_m⟩ where |x_i⟩ ∈ X, ∀ i = 1, 2,..., m and m is the message length. Alice does not know what the coming symbol is, thus she does not know the probability distribution of the message. The following steps can be used to compress a stream of data [some notations and steps are borrowed from (Bostroem and Felbinger, 2002)]:
1. Alice assumes that all the symbols (characters) have identical probability, i.e., Alice has the ensemble Σ = {P, X}, where X is the set of all symbols X = {|Ψ_1⟩, |Ψ_2⟩,…,|Ψ_n⟩}, P is the set of corresponding probabilities, which initially equals {1/n, 1/n,…,1/n} and n is the number of symbols. Moreover, each symbol can be represented by r qubits.
2. j = 1, Counter_i = 0, ∀ i = 1, 2,..., n.
3. Alice prepares the subset L = {|Ψ_1⟩, |Ψ_2⟩,…,|Ψ_d⟩} ⊆ X of linearly independent vectors, where these vectors are selected and ordered by the highest probability.
4. Alice finds the orthonormal vectors B = {|w_1⟩, |w_2⟩,…,|w_d⟩} by performing a Gram-Schmidt orthonormalization on the list L; thus the set B can be defined as follows:

|w_i⟩ = (|Ψ_i⟩ − Σ_{k=1}^{i−1} ⟨w_k|Ψ_i⟩ |w_k⟩) / ‖ |Ψ_i⟩ − Σ_{k=1}^{i−1} ⟨w_k|Ψ_i⟩ |w_k⟩ ‖
5. Alice calculates the unitary matrix as follows:

C = Σ_{i=1}^{d} |Z(i−1)⟩⟨w_i|

where the state |Z(i−1)⟩ must be represented with a neutral prefix, which means that the number of qubits required to represent the state |Z(i−1)⟩ is equal to the number of qubits required to represent the longest symbol (character). Zeros are added to the left; for example, if the longest symbol needs 5 qubits then |Z(3)⟩ = |00011⟩.
6. Alice picks up the next symbol and encodes it as follows: |c(y)⟩ = C|y⟩, where y is the picked symbol.
7. Alice calculates the base length L of |c(y)⟩, where L is the longest component of the state |c(y)⟩ as defined by (Bostroem and Felbinger, 2002).
8. Alice truncates the message to L qubits by removing the r−L leading qubits.
9. Alice sends the truncated message through a quantum channel and sends its base length L through a classical channel (adaptive Huffman coding can be used to encode the base lengths).
10. j = j+1; if j > m then stop.
11. Counter_t = Counter_t + 1, where t indicates the t-th symbol in the set X such that |Ψ_t⟩ = |y⟩.
12. If Counter_t = 1 then go to step 6.
13. If Counter_t > 1 then update the set of corresponding probabilities as follows:
14. Go to step 3.
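The classical linear algebra behind steps 3-5 can be sketched with NumPy. This is only an illustrative simulation under our own assumptions; in particular we take the encoder to act as C = Σ_i |i−1⟩⟨w_i|, so the i-th orthonormal vector is sent to the computational basis state |i−1⟩ and frequent symbols acquire leading zero qubits:

```python
import numpy as np

def gram_schmidt(vectors):
    """Step 4: orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=complex)
        for b in basis:
            w = w - np.vdot(b, w) * b
        basis.append(w / np.linalg.norm(w))
    return basis

def build_encoder(vectors, dim):
    """Step 5 (assumed form): C maps |w_i> to the basis state |i-1>,
    so it acts as an isometry on the span of the |w_i>."""
    C = np.zeros((dim, dim), dtype=complex)
    for i, w in enumerate(gram_schmidt(vectors)):
        C[i, :] = w.conj()
    return C

# Hypothetical 2-qubit symbol states (r = 2, dim = 4), ordered by probability
psi_a = np.array([1.0, 0.0, 0.0, 0.0])
psi_b = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
C = build_encoder([psi_a, psi_b], 4)
# Encoding concentrates amplitude on |00> and |01>: the leading qubit is |0>,
# so the message can be truncated to base length L = 1 qubit.
print(np.abs(C @ psi_b).round(4))
```

On a real quantum register C must of course be completed to a full unitary; the sketch only tracks the amplitudes that matter for the truncation step.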
Bob receives the stream of base lengths from the classical channel and the stream of compressed symbols from the quantum channel and decompresses the message by using the following steps:
1-5. Bob performs steps 1-5 as Alice does.
6. Bob decompresses the base length L of the next symbol.
7. Bob adds r−L zeros to the right of the received state from the quantum channel; call it |γ⟩.
8. Bob decodes (decompresses) the state |γ⟩ as follows: |y⟩ = C†|γ⟩.
9. j = j+1; if j > m then stop.
10. Counter_t = Counter_t + 1, where t indicates the t-th symbol in the set X such that |Ψ_t⟩ = |y⟩.
11. If Counter_t = 1 then go to step 6.
12. If Counter_t > 1 then update the set of corresponding probabilities as in step 13 of the compression algorithm.
13. Go to step 3.
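Steps 7-8 of the decoder can be sketched the same way (an illustrative NumPy simulation under our assumptions: truncation to base length L keeps the first 2^L amplitudes, and decoding applies the adjoint of the encoder):

```python
import numpy as np

def truncate(state, L):
    """Sender side: keep only the first 2**L amplitudes (L qubits)."""
    return state[: 2 ** L]

def pad_and_decode(received, C, r):
    """Steps 7-8: pad with zeros back up to r qubits, apply C dagger."""
    gamma = np.zeros(2 ** r, dtype=complex)
    gamma[: len(received)] = received
    return C.conj().T @ gamma

# Hypothetical example: identity encoder, encoded state (|00> + |01>)/sqrt(2)
C = np.eye(4, dtype=complex)
encoded = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
sent = truncate(encoded, L=1)              # only one qubit travels
recovered = pad_and_decode(sent, C, r=2)   # recovers (|00> + |01>)/sqrt(2)
print(np.allclose(recovered, encoded))     # True
```

The roundtrip is exact whenever the dropped leading qubits really carried zero amplitude, which is what the base length L guarantees.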
We can reduce the complexity of the above algorithm if we consider that the corresponding probabilities become closer to the actual distribution after a few iterations. Thus we can skip the recalculation of the unitary matrix and jump directly to step 6, or postpone the calculation until a critical change occurs in the set of corresponding probabilities.
THE SIMULATION OF THE PROPOSED ALGORITHM
Let us first discuss an explicit example to demonstrate the steps of the suggested algorithm. Assume that Alice's source message set is:
whose elements are in mixed states and are given by:
Now consider that Alice's message is M = |eebbd⟩ = |e⟩|e⟩|b⟩|b⟩|d⟩ and she would like to send it through a quantum channel. Thus Alice's linearly independent vectors are:
and the orthonormal vectors B = {|w_1⟩, |w_2⟩, |w_3⟩, |w_4⟩} where:
Hence the first unitary matrix is:
Now Alice can compress the first character (which consists of two qubits) as follows:
Alice truncates the state to the base length L = 1 qubit and obtains 0.8281|0⟩ + 0.2845|1⟩. She sends the remaining qubit through the quantum channel. Bob can decompress the received data by calculating C_1, adding a qubit in the state |0⟩ to the right of the received qubit and applying:
To compress the second character, Alice updates the set of corresponding probabilities and finds the new linearly independent vectors, where these vectors are selected and ordered by the highest probability as follows:
Alice calculates C_{2} and compresses the second character as follows:
Alice truncates the state to the base length L = 0 qubits. In this case there are no qubits left at all, so she sends nothing through the quantum channel and sends '0' through the classical channel (note that classical bits are cheaper than qubits, since n qubits contain information equivalent to 2^n bits (Lee et al., 2002)). Bob receives the classical information 0. In this case he has to prepare two qubits in the state |00⟩ and apply the decoder C_2^†. Although Alice dropped the coefficient 0.8756, Bob can find the correct character by comparing and scaling the obtained vector. Alice can compress the rest of her message by applying the same process.
Table 1 shows the results, where random data and real application data are used. The proposed algorithm is coded in Matlab^{TM} 7.0 and run on a PC with a Pentium 4 microprocessor, 2.6 GHz and 256 MB RAM.
Table 1: The numerical simulation of the new algorithm

The compression ratio is calculated by dividing the compressed size by the uncompressed size and multiplying by 100, so lower is better.
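The ratio reported in Table 1 corresponds to the following one-line helper (names ours):

```python
def compression_ratio(compressed_size, uncompressed_size):
    """Compressed size as a percentage of the original; lower is better."""
    return 100.0 * compressed_size / uncompressed_size

print(compression_ratio(6, 10))  # 60.0
```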
The suggested algorithm is the first quantum lossless compression algorithm that works without a priori estimation of probabilities; all the previous algorithms require statistical knowledge and hence require collecting the whole data before the compression process begins. In real applications the suggested algorithm works better because some symbols tend to be repeated and the whole data are often not available.
CONCLUSION
The previous quantum lossless compression algorithms require statistical knowledge, which is often not available, for example for live audio and video. Even when the data are available, some quantum applications do not allow performing the measurement more than once. This study introduces the first adaptive quantum compression algorithm without loss of information, where the source message is not known to the sender. The suggested protocol can be used both for online quantum communication and for the storage of quantum data.