Research Article

Adaptive Quantum Lossless Compression

Essam Al-Daoud

A new adaptive quantum lossless compression algorithm is introduced. The suggested algorithm does not need a priori estimation of probabilities, which makes it more useful in real applications. The main idea of the proposed algorithm is that the corresponding probabilities of the symbols are initially assumed to be identical and, after each iteration, all the corresponding probabilities are updated. These probabilities become closer to the actual distribution after a few iterations. Moreover, the complexity of the proposed algorithm can be reduced by taking the general properties of the given data into account.


  How to cite this article:

Essam Al-Daoud , 2007. Adaptive Quantum Lossless Compression. Journal of Applied Sciences, 7: 3567-3571.

DOI: 10.3923/jas.2007.3567.3571



A quantum computer is any device for computation that makes direct use of distinctively quantum mechanical phenomena. In a quantum computer, data are measured in qubits (quantum bits). There has been much recent interest in the subject of quantum information processing. Quantum information is a natural generalization of classical information. It is based on quantum mechanics, a scientific theory well tested in real experiments. Although quantum computation and communication are still in their infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Research in both theoretical and practical areas continues at a frantic pace, and many national government and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes (Panthong et al., 2005). On February 13 and 15, 2007, D-Wave Systems demonstrated what is claimed to be the world's first commercial quantum computer, using 16 qubits.

Quantum information differs from classical information in several respects: states can be teleported; states cannot generally be read or duplicated without disturbance (the no-cloning theorem); one state can exist in a superposition of all possible states at once; and there are statistical correlations, predicted by quantum physics, for measurements on two entangled particle systems. The ability to manipulate quantum information enables us to perform tasks that would be unachievable in a classical context, such as unconditionally secure transmission of information, quantum authentication, quantum digital signatures and solving hard problems in polynomial time (Al-Daoud, 2007).

It may be very advantageous to decrease, where possible by compression methods, the number of qubits used for quantum communication and storage. This study introduces a new compression method that does not use the statistical distribution of the given sequence of qubits and is analogous to adaptive Huffman coding.


Information theory is generally considered to have been founded in 1948 by Claude Shannon in his seminal work, A Mathematical Theory of Communication, in which he established the two core results of classical information theory. The two central problems that he solved were: how much a message can be compressed and at what rate information can be communicated reliably over a noisy channel. Both problems concern redundancy. Shannon introduced a new entropy definition in the theory of information; it is of the form (Shannon, 1948):

H(p) = -Σi pi log2(pi)

where pi is the probability of the ith symbol.

Applications of fundamental topics of information theory include lossless data compression (e.g., ZIP files), lossy data compression (e.g., MP3s, MPEG and JPEG) and channel coding (e.g., for DSL lines).

The above entropy definition can be used to determine the theoretical lossless compression lower bound, or the compression rate. The compression rate is the ratio between the length of an uncompressed string and the length of the compressed (binary) string. There are two types of universal data compression in the classical domain: the first type is two-pass compression algorithms such as Run-length Encoding, Huffman coding and Arithmetic coding. The second type is one-pass compression algorithms such as Adaptive Huffman coding, LZ77 and LZ78. Ziv and Lempel presented a simple linear-time lossless compression algorithm with an asymptotic compression rate approaching the source's entropy; that is, it allows a string of length n to be losslessly compressed to a bit string of length asymptotically approaching nH(p) for large n. In the first pass, they use a parsing scheme to encode the source string into unique prefixes.
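As a concrete illustration of this lower bound, the following short sketch (in Python rather than the Matlab used later in the paper) computes H(p) for two sample distributions; nH(p) is then the asymptotic length of the losslessly compressed bit string for a source string of length n:

```python
import math

def shannon_entropy(probs):
    """H(p) = -sum_i p_i * log2(p_i): the lossless compression
    lower bound, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform source over 4 symbols: cannot compress below 2 bits/symbol.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0
# A skewed source can be compressed further.
print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75
```

A uniform distribution maximizes the entropy, which is why a source with repeated symbols (as in the real applications discussed later) compresses well.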


Quantum lossless compression is one of the important directions of quantum information processing; it starts from the thermodynamic entropy. Gibbs, building on earlier work by Boltzmann, defined the thermodynamic entropy S as follows:

S = -kB Σi pi ln(pi)

Assume that the underlying ensemble is Σ = {P, X}, where X is the set of all symbols X = {|Ψ1>, |Ψ2>,…,|Ψn>} and P is the set of corresponding probabilities. The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy formula (Jozsa et al., 1998):

S(ρ) = -Tr(ρ log2 ρ)

where ρ is the density matrix of the quantum mechanical system, defined as follows:

ρ = Σi pi |Ψi><Ψi|

Benjamin Schumacher is a US theoretical physicist, working mostly in the field of quantum information theory. He discovered a way of interpreting quantum states as information. He came up with a way of compressing the information in a state and storing the information in a smaller number of states. This is now known as Schumacher compression. This was the quantum analog of Shannon's noiseless coding theorem and it helped to start the field known as quantum information theory.

Schumacher's quantum lossless compression algorithm can be described as follows:

1. Select a typical sub-message |Ψtyp> and ignore the rest of the message, where the typical sub-messages lie in the subspace spanned by {|Ψ1>, |Ψ2>,…,|Ψv>} and v = 2^(nS).
2. Apply a unitary change of basis U that takes |Ψtyp> to a state of the form |Ψcomp>|0…0>.
3. Send |Ψcomp>.

Schumacher decompression can be done by appending the zeros to |Ψcomp> and applying U^(-1). Moreover, Schumacher proves that the number of compressed qubits equals m(S+δ), where m is the length of the typical sub-message (Schumacher, 1995).
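The typical subspace in step 1 has dimension v = 2^(nS), which for S < 1 is exponentially smaller than the full 2^n-dimensional space. A small Python sketch (an illustration, not part of the original protocol) for a qubit source whose density matrix has eigenvalues p and 1-p:

```python
import math

def binary_entropy(p):
    """Von Neumann entropy S of a qubit density matrix with eigenvalues p, 1-p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

n = 100                     # block length in qubits
S = binary_entropy(0.9)     # S is roughly 0.469 bits for p = 0.9
typical_fraction = 2 ** (n * S) / 2 ** n  # v / 2^n
print(round(S, 3))
print(typical_fraction < 1e-15)  # the typical subspace is exponentially smaller
```

Compression works by keeping only the amplitudes inside this small subspace, at the cost of a vanishing error for long blocks.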

Braunstein and others introduced a quantum analog of Huffman coding by using divide and conquer. First, they divide the messages into pairs and apply a merging procedure to each pair. The merging effectively reduces the total number of messages from 2^r to 2^(r-1). This process can be repeated; therefore, after r applications of the merging procedure, a single tape containing all the messages is obtained (in addition to the various length tapes containing the length information). Let us introduce a message tape; for simplicity, the state |0···0hi> is denoted by |hi>, etc.

In general, at the end the encoder obtains:

The encoder truncates the message tape: he keeps the first N(L+δ) qubits in the message tape (Braunstein et al., 2000).

Bostroem and Felbinger (2002) develop a general framework for variable-length quantum messages in close analogy to the classical case. They show that the lossless compression of an ensemble of messages is bounded from below by its von Neumann entropy, and that it is possible to reduce the number of qubits passing through a quantum channel even below the von Neumann entropy by adding a classical side channel. They give an explicit communication protocol that realizes lossless and instantaneous quantum data compression.

Bennett gave a constructive method for doing Schumacher compression. He observed that Schumacher compression can be done by a unitary mapping to a basis in which the density matrix ρ is diagonal, followed by a certain combinatorial rearrangement. We can perform the rearrangement by ordering the basis states first by the number of ones (from smallest to largest) in the binary expansion of the bits and then refining this order by a lexical sort of the binary expansion (Reif and Chakraborty, 2007).
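The ordering rule Bennett uses can be sketched in a few lines of Python (this illustrates only the ordering, not the full unitary):

```python
def bennett_order(num_qubits):
    """Order the computational basis states first by the number of ones in
    their binary expansion (smallest to largest), then refine the order by
    a lexical sort of the binary strings."""
    states = [format(i, f"0{num_qubits}b") for i in range(2 ** num_qubits)]
    return sorted(states, key=lambda s: (s.count("1"), s))

print(bennett_order(3))
# ['000', '001', '010', '100', '011', '101', '110', '111']
```

Because typical states concentrate on a narrow band of Hamming weights, this ordering pushes the likely basis states to the front, where they receive small indices.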


Assume that Alice wants to send a stream of compressed symbols (characters) to Bob through a quantum channel. The symbols before compression can be written as M = |x1>|x2>…|xm>, where |xi> ∈ X, ∀ i = 1, 2,…, m and m is the message length. Alice does not know what the coming symbol is, thus she does not know the probability distribution of the message. The following steps can be used to compress a stream of data [some notations and steps are borrowed from Bostroem and Felbinger (2002)]:

1. Alice assumes that all the symbols (characters) have identical probability, i.e., Alice has the ensemble Σ = {P, X}, where X is the set of all symbols X = {|Ψ1>, |Ψ2>,…,|Ψn>}, P is the set of corresponding probabilities, initially equal to {1/n, 1/n,…,1/n} and n is the number of symbols. Moreover, each symbol can be represented by r qubits.

2. j = 1, Counteri = 0, ∀ i = 1, 2,…, n.

3. Alice prepares the subset L = {|Ψ1>, |Ψ2>,…,|Ψd>} ⊆ X of linearly independent vectors, where these vectors are selected and ordered by the highest probability.

4. Alice finds the orthonormal vectors B = {|w1>, |w2>,…,|wd>} by performing a Gram-Schmidt orthonormalization on the list L. Thus the set B can be defined as follows:

5. Alice calculates the unitary matrix as follows:

C = Σi=1..d |Z(i-1)><wi|

where the state |Z(i-1)> must be represented with a neutral prefix, which means that the number of qubits required to represent the state |Z(i-1)> is equal to the number of qubits required to represent the longest symbol (character); zeros are added to the left. For example, if the longest symbol needs 5 qubits then |Z(3)> = |00011>.

6. Alice picks up the next symbol and encodes it as follows:

|c(y)> = C|y>, where |y> is the picked symbol.

7. Alice calculates the base length L of |c(y)>, where L is the length of the longest component of the state |c(y)>, as defined by Bostroem and Felbinger (2002).

8. Alice truncates the message to L qubits by removing r-L leading qubits.

9. Alice sends the truncated message through a quantum channel and sends its base length L through a classical channel (adaptive Huffman coding can be used to encode the base lengths).

10. j = j+1, if j>m then stop.

11. Countert = Countert + 1, where t indicates the tth symbol in the set X such that |Ψt> = |y>.

12. If Countert =1 then go to step 6

13. If Countert>1 then update the set of corresponding probabilities as follows:

14. Go to step 3.
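A classical simulation of the bookkeeping in steps 3, 4 and 11-13 can be sketched as follows. The Gram-Schmidt routine implements step 4 directly; the count-based probability update is an assumption of this sketch (the paper's exact update formula is not reproduced here), chosen so that the estimates move from the uniform prior toward the observed symbol frequencies:

```python
def gram_schmidt(vectors):
    """Step 4: orthonormalize a list of linearly independent complex vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            # Subtract the projection of w onto each earlier basis vector.
            coeff = sum(bc.conjugate() * wc for bc, wc in zip(b, w))
            w = [wc - coeff * bc for wc, bc in zip(w, b)]
        norm = sum(abs(c) ** 2 for c in w) ** 0.5
        basis.append([c / norm for c in w])
    return basis

def reorder_by_probability(symbols, counters):
    """Step 3: order symbols by current estimated probability, highest first.
    The smoothed frequency estimate below is an assumed update rule."""
    total = sum(counters.values())
    n = len(symbols)
    prob = {s: (1 + counters.get(s, 0)) / (n + total) for s in symbols}
    return sorted(symbols, key=lambda s: -prob[s])
```

For example, after a symbol has been seen twice while the others have not, `reorder_by_probability` moves it to the front of the list, so the next unitary maps it to the shortest code state.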

Bob receives the stream of base lengths from the classical channel and the stream of compressed symbols from the quantum channel, and decompresses the message by using the following steps:

1-5. Bob performs steps 1-5 as Alice does.
6. Bob decodes the base length L of the next symbol.
7. Bob adds r-L zeros to the right of the received state from the quantum channel; call the result |γ>.

8. Bob decodes (decompresses) the state |γ> as follows:

9. j = j+1, if j>m then stop.
10. Countert = Countert + 1, where t indicates the tth symbol in the set X such that |Ψt> = |y>.
11. If Countert =1 then go to step 6
12. If Countert >1 then update the set of corresponding probabilities as follows:

13. Go to step 3.

We can reduce the complexity of the above algorithm by noting that the corresponding probabilities become closer to the actual distribution after a few iterations. Thus we can skip the recalculation of the unitary matrix and jump directly to step 6, or the recalculation can be postponed until a critical change occurs in the set of corresponding probabilities.


Let us first discuss an explicit example to demonstrate the steps of the suggested algorithm. Assume that Alice's source message set is:

whose elements are in the mixed states and given by:

Now consider that Alice's message is M = |eebbd> = |e>|e>|b>|b>|d> and she wants to send it through a quantum channel. Thus Alice's linearly independent vectors are:

and the orthonormal vectors are B = {|w1>, |w2>, |w3>, |w4>}, where:

Hence the first unitary matrix is:

Now Alice can compress the first character (which consists of two qubits) as follows:

Alice truncates the state to the base length L = 1 qubit and obtains 0.8281|0> + 0.2845|1>. She sends the remaining qubit through the quantum channel. Bob can decompress the received data by calculating C1, adding a qubit in the state |0> to the right of the received qubit and applying:

To compress the second character, Alice updates the set of corresponding probabilities and finds the new linearly independent vectors, where these vectors are selected and ordered by the highest probability as follows:

Alice calculates C2 and compresses the second character as follows:

Alice truncates the state to the base length L = 0 qubits. In this case there are no qubits left at all, so she sends nothing through the quantum channel and sends '0' through the classical channel (note that classical bits are cheaper than qubits, while n qubits can carry information equivalent to 2n classical bits (Lee et al., 2002)). Bob receives the classical information 0. In this case he has to prepare two qubits in the state |00> and apply the decoder C2^(-1). Although Alice dropped the coefficient 0.8756, Bob can find the correct character by comparing and scaling the obtained vector. Alice can compress the rest of her message by applying the same process.
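The worked example hinges on how the encoder is built from the orthonormal basis. Under the assumption (consistent with steps 5-8 above) that C maps each basis vector |wi> to the computational basis state |Z(i-1)>, a minimal Python sketch for a 2-qubit (4-dimensional) alphabet looks as follows; the basis vectors here are illustrative stand-ins, not the amplitudes from the example:

```python
def build_encoder(basis):
    """Assumed form of step 5: C = sum_i |Z(i-1)><w_i|, so row i-1 of the
    matrix is the conjugate of |w_i>. With a complete orthonormal basis
    this matrix is unitary."""
    return [[c.conjugate() for c in w] for w in basis]

def encode(C, y):
    """Step 6: |c(y)> = C|y> (plain matrix-vector product)."""
    return [sum(C[r][c] * y[c] for c in range(len(y))) for r in range(len(C))]

# Illustrative orthonormal basis of a 4-dimensional space (2-qubit Hadamard basis).
basis = [[0.5, 0.5, 0.5, 0.5],
         [0.5, -0.5, 0.5, -0.5],
         [0.5, 0.5, -0.5, -0.5],
         [0.5, -0.5, -0.5, 0.5]]

# The most probable symbol (|w1>) is mapped to |00>, so its base length is 0,
# exactly as for the second character in the example above.
print(encode(build_encoder(basis), basis[0]))
```

Because the leading qubits of |00> are zeros, they can be truncated and restored by Bob for free, which is what makes the zero-qubit transmission in the example possible.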

Table 1 shows the numerical results where random data and real application data are used. The proposed algorithm is coded in Matlab™ 7.0 and run on a PC with a Pentium 4 microprocessor, 2.6 GHz and 256 MB RAM.

Table 1: The numerical simulation of the new algorithm

The compression ratio is calculated by dividing the compressed size by the uncompressed size and multiplying by 100, so lower is better.

The suggested algorithm is the first quantum lossless compression algorithm that works without a priori estimation of probabilities; all the previous algorithms require statistical knowledge. Hence the previous algorithms require collecting the whole data before the compression process begins. In real applications the suggested algorithm works better because some symbols tend to be repeated and the whole data are often not available.


The previous quantum lossless compression algorithms require statistical knowledge, which is often not available, for example for live audio and video. Even when the data are available, some quantum applications do not allow the measurement to be performed more than once. The first adaptive quantum compression algorithm without loss of information is introduced, where the source message is not known to the sender. The suggested protocol can be used for both online quantum communication and storage of quantum data.

1:  Al-Daoud, E., 2007. Unconditionally secure quantum payment system. Int. J. Applied Math. Comput. Sci., 4: 566-569.

2:  Bostroem, K. and T. Felbinger, 2002. Lossless quantum data compression and variable-length coding. Phys. Rev. A, 65: 032313.

3:  Braunstein, S.L., C.A. Fuchs, D. Gottesman and H.K. Lo, 2000. A quantum analog of Huffman coding. IEEE Trans. Inform. Theor., 46: 1644-1649.

4:  Jozsa, R., M. Horodecki, P. Horodecki and R. Horodecki, 1998. Universal quantum information compression. Phys. Rev. Lett., 81: 1714-1717.

5:  Lee, H., D. Ahn and S.W. Hwang, 2002. Dense coding in entangled states. Phys. Rev. A., 66: 024304.1-024304.2.

6:  Panthong, P., 2005. Experimental free space quantum key distribution. Proceedings of the 4th International Conference on Optical Communication and Networks, December 14-16, 2005, Bangkok, Thailand, pp: 159-161.

7:  Reif, J.H. and S. Chakraborty, 2007. Efficient and exact quantum compression. J. Inform. Comput., 205: 967-981.

8:  Schumacher, B., 1995. Quantum coding. Phys. Rev. A, 51: 2738-2747.

9:  Shannon, C.E., 1948. A mathematical theory of communication. Bell Syst. Tech. J., 27: 379-423.
