
Information Technology Journal

Year: 2008 | Volume: 7 | Issue: 8 | Page No.: 1085-1095
DOI: 10.3923/itj.2008.1085.1095
A Survey on Fractal Image Compression Key Issues
N.A. Koli and M.S. Ali

Abstract: This study surveys existing fractal coding methods and approaches, highlighting solutions to the key and open issues in fractal coding. In addition, possible approaches from the encoding time optimization point of view are discussed. The study is intended to be useful to beginners in fractal image compression for gray and color images. Neural network based approaches are also summarized, and a fragment of a neural network architecture is given for developing a neural network based approach to fractal gray and color image compression. This study may help in the implementation of straight fractal coding and of the neural network based approach.


How to cite this article
N.A. Koli and M.S. Ali, 2008. A Survey on Fractal Image Compression Key Issues. Information Technology Journal, 7: 1085-1095.

Keywords: neural network, fractal coding, fractal decoding, domain pool optimization and image compression

INTRODUCTION

Image compression plays a key role in the development of various multimedia computer services and telecommunication applications such as teleconferencing, digital broadcast codecs and video technology. Since an image needs a huge amount of data to store it, there is a pressing need to limit image data volume for fast transport along communication links. The goal of image compression techniques is to remove the redundancy present in the data in a way that enables acceptable image reconstruction (Curtis and Martin, 2005). Statistical properties of the image are used in designing an appropriate compression technique. The strong correlation between image data items enables reduction in the data content without significant quality degradation. There are numerous lossy and lossless image compression techniques. For X-ray images or Electrocardiogram (ECG) data, where each bit of information is essential, a lossless compression must be used. On the other hand, for still digital images or video, which have already been lossily digitized, a lossy compression is preferred. There are numerous methods for compressing images and each has advantages and disadvantages. One of the lossy compression techniques that has attracted researchers very much is fractal coding (Hashemian and Marivada, 2003).

Fractal coding has very good performance and achieves high compression ratios (32 to 1 is not unusual), but its use can be limited by the intensive computation required. The basic idea can be described as a self vector quantization, where an image block is encoded by applying a simple transformation to one of the blocks previously encoded. The transformations frequently used are combinations of scaling, reflections and rotations of another block.

Unlike vector quantization, fractal compressors do not maintain an explicit dictionary and the search can be long and computationally intensive. Conceptually, each encoded block must be transformed in all the possible ways and compared with the current block to determine the best match. Because the blocks are allowed to have different sizes, there is very little blocking noise and the perceived quality of the compressed image is usually very good.

The core of image compression technology consists of three important processing stages: pixel transforms, quantization and entropy coding. In addition to these key techniques, new ideas are constantly appearing across different disciplines and new research fronts (Zhang and Zhao, 2006). As models that simulate the learning function of human brains, neural networks have enjoyed widespread applications in telecommunication and computer science. Recent publications show a substantial increase in the use of neural networks for image compression and coding. Together with popular multimedia applications and related products, what role are the well-developed neural networks going to play in this era, where information processing and communication are in great demand? Although there is no sign of significant work on neural networks that can take over the existing technology, research on neural networks for image compression is still making steady advances. This could have a tremendous impact on the development of new technologies and algorithms in this subject area (Wang and Kao, 2004).

This study presents a survey of fractal image compression. Although different surveys (Saupe and Hamzaoui, 1994; Wohlberg and de Jager, 1999) are available, a survey covering the basic fractal coding fundamentals together with a neural network implementation, which would help beginners, is not. In this study, apart from the survey, we discuss possible approaches using neural networks and local search. This study may be useful to researchers working in fractal gray level and fractal color image compression.

IMAGE COMPRESSION TECHNIQUES

Generally the image compression techniques are divided into two categories: lossy image compression and lossless image compression.

Lossy image compression: Lossy is a term applied to data compression techniques in which some amount of the original data is lost during the compression process. Lossy image compression applications attempt to eliminate redundant or unnecessary information in terms of what the human eye can perceive. As the amount of data is reduced in the compressed image, the file size is smaller than the original. Lossy image data compression is useful for application to World Wide Web images for quicker transmission across the Internet.

Transform coding: Transform coding algorithms are based on a different principle: they use a transformation that exploits peculiar characteristics of the signal, increasing the performance of an entropy coding scheme (Huffman or arithmetic coding for example).
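As a concrete illustration, the following minimal Python/NumPy sketch computes the 2-D DCT of a square block, the transform most commonly paired with quantization and entropy coding; the 8x8 block size is only the conventional choice and is not mandated by the text above.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] /= np.sqrt(2)
    return basis * np.sqrt(2.0 / n)

def block_dct(block):
    """2-D DCT of a square block: packs most of the signal energy into a few
    low-frequency coefficients, which quantization and entropy coding exploit."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T
```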

Wavelet compression: Another compression scheme that adopts transform coding is based on wavelet functions. The basic idea of this coding scheme is to process data at different scales or resolutions.
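A minimal sketch of one level of the 2-D Haar transform, the simplest wavelet, illustrates the multi-resolution idea; it assumes the image has even dimensions.

```python
import numpy as np

def haar_level(image):
    """One level of the 2-D Haar wavelet transform.

    Returns the coarse approximation (LL) and the horizontal, vertical and
    diagonal detail sub-bands; repeating the step on LL gives coarser scales.
    """
    img = image.astype(np.float64)
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row-wise averages
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row-wise differences
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh
```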

Vector quantization: A dictionary (or codebook) is a collection of small, statistically relevant patterns (codewords). Every image is encoded by dividing it into blocks and assigning to each block the index of the closest codeword in the dictionary. Matching can be exact or approximate, achieving respectively lossless or lossy compression. The codebook is sometimes allowed to change and adapt to the input's peculiarities. The most famous dictionary method is vector quantization; in its simplest form, it is a lossy compression scheme that uses a static dictionary.
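A minimal sketch of the simplest form described above, lossy VQ with a static codebook; the block size and the codebook itself are assumed to be given.

```python
import numpy as np

def vq_encode(image, codebook, block=4):
    """Encode an image as indices into a static codebook (simplest lossy VQ).

    image    : 2-D array whose sides are multiples of `block`
    codebook : array of shape (n_codewords, block*block)
    Returns a 2-D array of codeword indices, one per block.
    """
    h, w = image.shape
    indices = np.empty((h // block, w // block), dtype=np.int32)
    for bi, y in enumerate(range(0, h, block)):
        for bj, x in enumerate(range(0, w, block)):
            vec = image[y:y + block, x:x + block].astype(np.float64).ravel()
            # nearest codeword in Euclidean distance
            dists = np.sum((codebook - vec) ** 2, axis=1)
            indices[bi, bj] = int(np.argmin(dists))
    return indices
```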

Lossless image compression: Lossless is a term applied to image data compression techniques where none of the original data is lost. It is typically used by the photographic and print media, where high resolution imagery is required and larger file sizes are not a problem. Many image and video compression methods rely upon lossless coding as the final step in the compression process. Here we briefly review standard lossless compression methods that are applied to a one-dimensional stream of data. Standards that specifically address lossless compression of images will be discussed later.

Run-length coding: A simple and common lossless compression technique is Run Length Encoding (or RLE). It is mainly used with documents and faxes, where long sequences of pixels with the same color are present. A series (or run) of pixels having the same value is coded by the pair (R, L), where R is the recurring pixel value and L is the length of the run.
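A minimal sketch of the (R, L) pair coding just described, applied to a flat list of pixel values:

```python
def rle_encode(pixels):
    """Run-length encode a sequence of pixel values as (value, run_length) pairs."""
    runs = []
    if not pixels:
        return runs
    current, length = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            length += 1
        else:
            runs.append((current, length))
            current, length = p, 1
    runs.append((current, length))
    return runs

def rle_decode(runs):
    """Invert rle_encode."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out
```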

Huffman coding: Usually an alphabet is encoded into binary by associating with each character a string of a fixed number of bits. For example, the standard ASCII code associates with each of the 128 characters a string of 7 bits. However, any method for encoding characters as variable length bit strings may be acceptable in practice so long as strings are always uniquely decodable; that is, it is always possible to determine where the code for one character ends and the code for the next one begins. Huffman codes are an optimal way of assigning variable length codes to an alphabet with a given probability distribution for each character.
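A minimal sketch that builds such a variable length prefix code from symbol frequencies; for example, huffman_codes("abracadabra") gives the most frequent character 'a' a code no longer than any other.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build an optimal prefix code (bit strings) for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol alphabet
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tie_breaker, {symbol: partial_code})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # prepend a bit to every code in each merged subtree
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```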

Arithmetic coding: Arithmetic coding is a simple compression method in itself, but also useful for encoding probabilities provided by an adaptive model in predictive methods. The idea is to encode data as a single long binary number that specifies a point on the real line between 0 and 1.
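A conceptual sketch of this interval-narrowing idea using floating point; a production arithmetic coder would use integer arithmetic with renormalization to avoid precision loss and would emit bits incrementally rather than a single fraction.

```python
def arithmetic_encode(message, probabilities):
    """Narrow the interval [0, 1) once per symbol; any number in the final
    interval identifies the message (given its length and the model)."""
    # cumulative distribution for each symbol
    cum, c = {}, 0.0
    for sym, p in probabilities.items():
        cum[sym] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for sym in message:
        lo_f, hi_f = cum[sym]
        span = high - low
        low, high = low + span * lo_f, low + span * hi_f
    return (low + high) / 2      # a point inside the final interval
```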

Textual substitution/LZ methods: With textual substitution methods, the encoder and decoder cooperate to maintain identical copies of a dictionary D that is constantly changing. The encoder reads characters from the input stream that form an entry of D, transmits the index of this entry and updates D with some method that depends only on the current contents of D and the current match. Textual substitution methods are often referred to as LZ-type methods due to the work of Lempel and Ziv. LZ1-type or LZ'77-type methods are those based on matching against the preceding n characters, and LZ2-type or LZ'78-type methods are those based on a dynamic dictionary, usually represented in practice by a tree data structure.
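A minimal sketch of the LZ'78-style dictionary update, where each output pair is (index of the longest previously seen phrase, next character):

```python
def lz78_encode(text):
    """LZ'78-style textual substitution: emit (dictionary_index, next_character) pairs."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    output, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch          # keep extending the current match
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing match with no following character
        output.append((dictionary[phrase], ""))
    return output
```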

Prediction and modeling: Prediction allows a compact representation of data by encoding the difference between the data itself and the information extrapolated or predicted by a model. If the predictor works well, the predicted data is very close to the input, so the error is small or negligible and easier to encode.
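A minimal sketch of the simplest predictor, each pixel predicted by its left neighbour; the small residuals are what the entropy coder then compresses.

```python
import numpy as np

def predict_previous_pixel(row):
    """Residuals of the simplest predictor: each pixel is predicted by its left neighbour."""
    row = np.asarray(row, dtype=np.int32)
    residuals = np.empty_like(row)
    residuals[0] = row[0]                 # first pixel has no predictor
    residuals[1:] = row[1:] - row[:-1]    # small, peaked residuals are cheap to entropy-code
    return residuals

def reconstruct(residuals):
    """Invert the predictor: a cumulative sum restores the original row exactly."""
    return np.cumsum(residuals)
```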

FRACTAL IMAGE COMPRESSION

The basic idea can be described as a self-vector quantization, where an image block is encoded by applying a simple transformation to one of the blocks previously encoded. The transformations frequently used are combinations of scaling, reflections and rotations of another block.

Barnsley (1988) was the first to suggest that storing images as a collection of transformations could lead to image compression. For an exactly self-similar image, the degree of compression obtained is very high and reconstruction is possible without loss of any information. The applicability of this principle was extended to compress any image (which can be considered statistically self-similar) by Jacquin (1992). Due to this development, any input image can be compressed. In this approach, a region/portion of the image is matched against the rest of the image in an effort to find a suitable portion which is similar in a statistical sense.

This technique is basically a search process and consists of the following steps: partitioning the image into sub-images or blocks, searching for parts of the image or blocks which are self-similar in a statistical sense and, once a match is located, computing the transformations (Jacquin, 1992).

Basically, a fractal code consists of three ingredients: a partitioning of the image into regions Rk called ranges, an equal number of other image regions Dk, called domains and, for each domain-range pair, two transformations: a geometric one, Uk: Dk → Rk, which maps the domain to the range and an affine transformation, Vk, that adjusts the intensity values in the domain to those in the range. The collection of transformations may act on an arbitrary image g, producing an output image Tg, which is like a collection of modified copies of the domains of the image g. The iteration of the image operator T is the decoding step, i.e., it yields a sequence of images which converges to an approximation of the encoded image (Fisher, 1992).

Fractal image coding is based on the theory of Iterated Function Systems (IFS) (Fisher, 1992). Fractal image compression uses a special type of IFS called a Partitioned Iterated Function System (PIFS) (Jacquin, 1992). A PIFS consists of a complete metric space X, a collection of sub-domains $D_i \subset X$, $i = 1, \ldots, n$ and a collection of contractive mappings $w_i : D_i \to X$. We consider grayscale images as real-valued functions f(x, y) defined on the unit square $I^2 = I \times I$. Let $v_i$ be an affine transformation on $I^2 \to I^2$, that is:

$$v_i \begin{pmatrix} x \\ y \end{pmatrix} = A_i \begin{pmatrix} x \\ y \end{pmatrix} + b_i \qquad (1)$$

for some 2x2 matrix $A_i$ and 2x1 vector $b_i$. Let $D_i \subset I^2$ be some sub-domain of the unit square $I^2$ and let $R_i$ be the range of $v_i$ operating on $D_i$, that is, $R_i = v_i(D_i)$. We can now define $w_i$ operating on images f(x, y) by:

$$(w_i f)(x, y) = s_i\, f\big(v_i^{-1}(x, y)\big) + o_i, \qquad (x, y) \in R_i \qquad (2)$$

provided $v_i$ is invertible.

The constant $s_i$ expands or contracts the range of values of f and so, in the context of grayscale images, it controls contrast. Similarly, the constant $o_i$ raises or lowers the grayscale values and so controls brightness. The transformation $v_i$ is called the spatial part of $w_i$. Fractal coding is basically a search process and consists of the following steps: partitioning the image into sub-images or blocks, searching for parts of the image or blocks which are self-similar in a statistical sense and, once a match is located, computing the transformations. The basic compression procedure (Thakur and Kakde, 2006) is summarized as follows:

Partition the original image to get the non-overlapping range blocks Ri as shown in Fig. 1
Partition the original image to get the overlapping domain blocks Di as shown in Fig. 2
The domain blocks are filtered and sub-sampled so that they shrink to match the size of the range blocks, giving the codebook blocks DDi as shown in Fig. 3
For each codebook block DDi, find the values of s and o by referring to Ri and DDi and using the least squares regression method, Eq. 3 and 4. The coefficients s and o are called scaling and offset. Given a pair of blocks Ri and DDi, each of n pixels, with intensities r1,...,rn and d1,...,dn, we have to minimize the quantity:

$$R = \sum_{i=1}^{n} \left( s\, d_i + o - r_i \right)^2$$

The minimum of R occurs when the partial derivatives with respect to s and o are zero, which occurs when:

$$s = \frac{n \sum_{i=1}^{n} d_i r_i - \sum_{i=1}^{n} d_i \sum_{i=1}^{n} r_i}{n \sum_{i=1}^{n} d_i^2 - \left( \sum_{i=1}^{n} d_i \right)^2} \qquad (3)$$

and

$$o = \frac{1}{n} \left( \sum_{i=1}^{n} r_i - s \sum_{i=1}^{n} d_i \right) \qquad (4)$$

Fig. 1: Range blocks

Fig. 2: Domain blocks

Fig. 3: Shrinking of domain block and matching with range block

Quantize the coefficients s and o using a uniform quantizer
Compute the error E(Ri, DDi) using Eq. 5 and the quantized coefficients s and o. With s and o given, the square error is:

$$E(R_i, DD_i) = \sum_{i=1}^{n} \left( s\, d_i + o - r_i \right)^2 \qquad (5)$$

Table 1: Eight transformations used for mapping of domain blocks to range blocks

No.  Transformation
1    Identity
2    Rotation through 90°
3    Rotation through 180°
4    Rotation through 270°
5    Reflection about the mid-vertical axis
6    Reflection about the mid-horizontal axis
7    Reflection about the first (main) diagonal
8    Reflection about the second diagonal

Among all blocks DDi, find the block DDl with minimal error, E(Ri, DDl) = min over i of E(Ri, DDi)
Output the code for the current range block, consisting of the indices of the quantized coefficients s and o and the index l identifying the optimal block DDl

The main problem of fractal compression is the computational complexity of the algorithm. To construct a contractive mapping, one has to transform and then compare a large number of parts or blocks of an image. For every block from one set, called the set of range blocks, one looks for a block from another set, called the set of domain blocks, such that the distance between the two blocks is minimum among all domain blocks from the corresponding set. To work this out, the domain blocks have to be shrunk to match the size of the range block and then the matching of the shrunk block with the concerned range block is performed on the basis of a distance calculation. In this distance check, the domain blocks have to be tried under the eight transformations shown in Table 1.
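The following Python/NumPy sketch puts the whole baseline procedure together: domain blocks are shrunk by 2x2 averaging, tried under the eight isometries of Table 1 and matched to each range block with the least squares coefficients of Eq. 3 and 4 and the error of Eq. 5. The 8x8 range size, 16x16 domain size and domain step are illustrative choices, and the image sides are assumed to be multiples of these sizes.

```python
import numpy as np

def shrink(block):
    """Sub-sample a 2Nx2N domain block to NxN by averaging 2x2 neighbourhoods."""
    return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

def isometries(block):
    """The eight transformations of Table 1: four rotations and their mirror images."""
    for k in range(4):
        rot = np.rot90(block, k)
        yield rot
        yield np.fliplr(rot)

def match(range_blk, domain_blk):
    """Least squares scaling s and offset o (Eq. 3 and 4) and the error of Eq. 5."""
    r = range_blk.ravel().astype(np.float64)
    d = domain_blk.ravel().astype(np.float64)
    n = r.size
    denom = n * np.dot(d, d) - d.sum() ** 2
    s = 0.0 if denom == 0 else (n * np.dot(d, r) - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    err = float(np.sum((s * d + o - r) ** 2))
    return s, o, err

def encode(image, range_size=8, domain_step=8):
    """Baseline exhaustive-search fractal encoder; returns one code per range block."""
    h, w = image.shape
    dsize = 2 * range_size
    # codebook of shrunk domain blocks under all eight isometries
    codebook = []
    for y in range(0, h - dsize + 1, domain_step):
        for x in range(0, w - dsize + 1, domain_step):
            small = shrink(image[y:y + dsize, x:x + dsize])
            for t, cand in enumerate(isometries(small)):
                codebook.append(((y, x, t), cand))
    codes = []
    for ry in range(0, h, range_size):
        for rx in range(0, w, range_size):
            rng = image[ry:ry + range_size, rx:rx + range_size]
            best = None
            for key, cand in codebook:
                s, o, err = match(rng, cand)
                if best is None or err < best[0]:
                    best = (err, key, s, o)
            codes.append(((ry, rx), best[1], best[2], best[3]))
    return codes
```

Decoding then iterates the stored transformations from an arbitrary starting image until the sequence converges to an approximation of the encoded image, as described above.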

OPEN AND KEY ISSUES IN FRACTAL CODING

Following are the open issues in the fractal coding:

How to form the range blocks and the domain blocks? i.e., Partitioning
How to search the perfect domain block match for the range block? i.e., Speed up
How to minimize the encoding time? i.e., Speed up
How to optimize the domain pool? i.e., Speed up
How to classify the domain pool? i.e., Speed up
How to use neural network for fractal image compression? i.e., Speed up
Where to use the fractal coding? i.e., Applications

To answer all these questions, the following work is reported in the literature; still, these issues remain open for research.

Partitioning in fractal coding: Fractal coding is the most focused area for researchers working in the image compression domain. Fractal coding is generally applied to gray level images. Encoding time is much higher in comparison with decoding. Fractal coding is based on the IFS (Iterated Function System). Presently, the literature reports the use of the PIFS (Partitioned Iterated Function System) for fractal coding. Traditional fractal image coding is essentially block-based (Wohlberg and de Jager, 1999); the image domain is decomposed into square range and domain blocks and a contraction mapping is found that best maps domain blocks into range blocks (Jacquin, 1992). This contraction mapping defines a fractal code of the image. A similar principle can be applied in the spectral domain, for example by means of the Discrete Cosine Transform (DCT) applied to each range and domain block prior to finding the contraction mapping (Zhao and Yuan, 1996; Barthel et al., 1994).

The basic key issues in fractal coding are: partitioning schemes, optimization of search procedures, parallel implementation of fractal coding, hybrid methods with fractal coding, encoding time, domain pool classification and application of fractal coding to color images. In the literature, the reported work is mostly related to gray level images; comparatively little work has been done on color images. Most of the reported work is on encoding time. In this section, the reported work is summarized with respect to these key issues.

Generally, fractal coding is implemented with square blocks. Attempts have also been made to extend fractal coding beyond uniform square blocks in order to adapt the coding to local image characteristics and consequently increase coding performance. This has led to square or rectangular non-uniform partition schemes such as quadtree (Fisher, 1995; Saupe and Jacobs, 1997) and horizontal-vertical partitioning (Fisher and Menlove, 1995); the usual coarse-to-fine approach starts from a maximum-size range block and performs recursive partitioning of blocks according to a quality metric. In fine-to-coarse quadtree partitioning (Lu, 1997), small blocks are merged into larger blocks within a quadtree. Alternatively, neighboring range blocks can be merged without the quadtree restriction; results for right-angled irregularly-shaped range blocks are given by Thomas and Deravi (1995), Saupe and Ruhl (1996), Ruhl et al. (1997), Breazu and Toderean (1998) and Hartenstein et al. (2000). Koli and Ali (2006) analyzed the effect of variable sizes of range and domain blocks in fractal coding. The quadtree partition is not the optimal way of compressing images (Fisher, 1994). Other methods like the triangular partition (Saupe and Jacobs, 1997; Davoine et al., 1996) and the irregular partition (Fisher, 1994; Lin and Venetsanopoulos, 1998) yield better image quality.
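A minimal sketch of a coarse-to-fine quadtree split driven by a block-variance test, in the spirit of the variance-based quadtrees cited above; the image is assumed square with a power-of-two side, and the minimum block size and variance threshold are illustrative.

```python
import numpy as np

def quadtree_partition(image, y=0, x=0, size=None, min_size=4, var_threshold=50.0):
    """Recursively split a square block into four quadrants while its variance is high.

    Returns a list of (y, x, size) leaf blocks; the thresholds here are illustrative only.
    """
    if size is None:
        size = image.shape[0]
    block = image[y:y + size, x:x + size]
    if size <= min_size or np.var(block) <= var_threshold:
        return [(y, x, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_partition(image, y + dy, x + dx, half,
                                         min_size, var_threshold)
    return leaves
```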

Saupe et al. (1998) discussed the construction of rate-distortion optimal partitions. The partition is hierarchical and corresponds to a tree, pruned using a strategy based on the generalized BFOS algorithm. Hartenstein and Saupe (1998) discussed, for partitioning, merging criteria that depend on the variance or collage error and on the Euclidean length of the partition boundaries. Saupe and Ruhl (1996) presented a fractal coder that derives highly image-adaptive partitions and corresponding fractal codes using a region-merging approach. In Belloulata and Konrad (2002), rectangular range and domain blocks are used but boundary blocks are divided into segments belonging to different regions. Ochotta and Saupe (2004) proposed a fractal image compression method based on split/merge image partitions; in the splitting phase an image-adaptive quadtree partition is generated and neighboring blocks are then iteratively merged using an appropriate error criterion.

Speed up of fractal coding: Encoding time improvement related research is reported by Kominek (1995), Rejeb and Anheier (1997), Saupe (1996), Sankaranarayanan (1998), Wu (2000), Hartenstein and Saupe (2000), Tong and Wong (2002), Hashemian and Marivada (2003), Wang and Kao (2004), Hassaballah et al. (2005) and Zhang and Zhao (2006).

A speed-up scheme for fractal image coding based on classification methods is introduced by Rejeb and Anheier (1997), where the domain pool is reduced, which minimizes the number of operations for the similarity search. A nearest neighbor search approach is proposed by Saupe (1996). The computational complexity of the encoding step is minimized using a partitioning method by Sankaranarayanan (1998). A method of regional search for fractal image compression and decompression is proposed by Wu (2000); in this method, the search for fractal codes is carried out in a region of the image instead of over the whole image. The method proposed by Hartenstein and Saupe (2000) is particularly well suited for use with highly irregular image partitions, for which most traditional (lossy) acceleration schemes lose a large part of their efficiency; here, FFT-based cross correlation is used to reduce the time complexity of fractal image encoding.

An improved formulation of approximate nearest neighbor search based on orthogonal projection and pre-quantization of the fractal transform parameters is given by Tong and Wong (2002). The approach of Hashemian and Marivada (2003) reduces the memory requirement and speeds up the reconstruction. The encoding algorithm of Wang and Kao (2004) is based on the law of cosines: the number of domain blocks searched to find the best match for each range block is reduced by eliminating ineligible domain blocks using the law of cosines. The algorithm uses knowledge about distances and angles between range and contracted domain vectors (blocks) and a reference constant vector to reduce the domain block searching complexity. In Hassaballah et al. (2005), encoding complexity is reduced by minimizing the size of the domain pool based on the entropy value of each domain block. In Zhang and Zhao (2006), a recursive scheme is proposed in which the coding results are fed back to update the domain pools during the coding process in order to improve the decoded image quality.
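A sketch of entropy-based domain pool reduction in the spirit of the idea above; the acceptance rule and threshold here are placeholders to be tuned, not the exact criterion of the cited paper.

```python
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of a block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def prune_domain_pool(domains, keep=lambda e: e < 4.0):
    """Shrink the domain pool with an entropy test before the similarity search.

    `keep` is a placeholder acceptance rule on a block's entropy; the actual
    criterion and threshold must be tuned and are not taken from the cited work.
    Fewer surviving domains means fewer range-domain comparisons at encoding time.
    """
    return [d for d in domains if keep(block_entropy(d))]
```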

Parallel implementation of Fractal coding is discussed by Uhl and Hammerle (1997), Hufnagl and Uhl (2000), Nagano et al. (1999) and Lee et al. (2000). In Uhl and Hammerle (1997), parallel algorithms for fractal image compression on MIMD architectures are introduced, classified and discussed. Hufnagl and Uhl (2000) introduced and analyzed algorithms for fractal image compression on massively parallel SIMD arrays. Nagano et al. (1999) proposed a method for implementing fractal image compression on dynamically reconfigurable architecture. A parallel architecture for quadtree based fractal image coding is proposed by Lee et al. (2000).

Some hybrid fractal encoding schemes are discussed by Barthel et al. (1994), Hamzaoui et al. (1996) and Melnikov and Katsaggelos (2002). Barthel et al. (1994) proposed an image coding scheme based on a unification of fractal and transform coding. A hybrid scheme combining fractal image compression with mean-removed shape-gain vector quantization is presented by Hamzaoui et al. (1996); this algorithm, based on a distance classification fractal coder with fixed cluster centers, decides whether to encode a range block by a cluster center or by a domain block.

A hybrid fractal and Discrete Cosine Transform (DCT) coder developed using quadtree decomposition is presented by Melnikov and Katsaggelos (2002). Other hybrid schemes using wavelets have been proposed as well (Davis, 1998; Krupnik et al., 1995; Belloulata et al., 1998).

Evolutionary and genetic approaches to fractal compression are discussed by Saupe and Ruhl (1996) and Vences and Rudomin (1997). A functional programming technique in the Haskell language is used to model fractal image compression by Curtis and Martin (2005). A progressive fractal image coder in the spatial domain is proposed by Kopilovic et al. (2001).

Fractal image compression based on the theory of Iterated Function System (IFS) with probabilities is explained by Mitra et al. (2001). A tree topology based approach is presented by Sun and Zhao (2003) for parallel implementation of Fractal coding.

Image compression with neural networks: Artificial Neural Networks (ANN) have been used for solving many problems, especially in cases where results are very difficult to achieve by traditional analytical methods. A number of studies have already been published applying ANN to image compression (Namphol et al., 1996; Russo and Real, 1992; Jiang, 1999; Costa and Fiori, 2001; Dony and Haykin, 1995). It is important to emphasize that, although there is no sign that neural networks can take over the existing techniques, research on neural networks for image compression is still making advances. Possibly, in the future, this could have a great impact on the development of new technologies and algorithms in this area.

Stark first proposed applying neural networks to IFS (Crilly et al., 1991; Stark, 1991a, b). His method used a Hopfield neural network to solve the underlying linear programming problem and compute the Hutchinson metric quickly (Tank and Hopfield, 1986; Stark, 1991b). However, this neural network approach cannot obtain the fractal code automatically.

A few methods for optimizing the exhaustive search (Jacquin, 1992; Ramamurthi and Gersho, 1986; Frigaard et al., 1994; Welstead, 1999; Hamzaoui, 1997; Ponomarenko et al., 2000), based on clustering of the set of domain blocks, have been suggested. However, the majority of these methods either decrease the computational complexity only slightly or result in high losses of image quality. The method of clustering by means of the Kohonen self-organizing neural network is least afflicted with these disadvantages (Kohonen, 2001). The Kohonen network was first used for optimization of fractal compression by Bogdan and Meadows (1992).

Fractally configured neural networks (Stark, 1991a; Merrill and Port, 1991; Weinberger et al., 1996) based on IFS (Iterated Function System (Barnsley and Hurd, 1992)) codes represent another example along the direction of developing existing image compression technology into neural networks.

Fig. 4: IFS neural network

Its conventional counterpart involves representing images by fractals, where each fractal is represented by a so-called IFS consisting of a group of affine transformations. To generate images from an IFS, the random iteration algorithm is the most typical technique associated with fractal based image decompression (Barnsley and Hurd, 1992). Hence, fractal based image compression features higher speed in decompression and lower speed in compression.
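A minimal sketch of the random iteration (chaos game) algorithm for rendering an IFS attractor; the maps in the usage example are the classic Sierpinski triangle contractions, used purely as an illustration.

```python
import random

def random_iteration(maps, probabilities, n_points=100_000):
    """Render an IFS attractor by the random iteration (chaos game) algorithm.

    maps          : list of affine maps, each (a, b, c, d, e, f) meaning
                    (x, y) -> (a*x + b*y + e, c*x + d*y + f)
    probabilities : selection probability of each map
    Returns the visited points (after a short burn-in they lie on the attractor).
    """
    x, y = 0.0, 0.0
    points = []
    for i in range(n_points):
        a, b, c, d, e, f = random.choices(maps, weights=probabilities)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:            # discard the first few points (burn-in)
            points.append((x, y))
    return points

# Example: the three contractions of the Sierpinski triangle
sierpinski = [(0.5, 0, 0, 0.5, 0.0, 0.0),
              (0.5, 0, 0, 0.5, 0.5, 0.0),
              (0.5, 0, 0, 0.5, 0.25, 0.5)]
points = random_iteration(sierpinski, [1 / 3, 1 / 3, 1 / 3])
```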

By establishing one neuron per pixel, two traditional algorithms for generating images using IFSs are formulated as neural networks in which all the neurons are organized in a two-dimensional topology (Stark, 1991a). The network structure is shown in Fig. 4, in which w_ij,i'j' is the coupling weight between the (i, j)th neuron and the (i', j')th one and s_ij is the state output of the neuron at position (i, j).

The training algorithm is directly obtained from the random iteration algorithm, in which the coupling weights are used to represent the self-similarity between pixels (Stark, 1991a). Since not all the neurons in the network are fully connected, the topology can theoretically be described by a sparse matrix. In common with most neural networks, the majority of the work performed in the neural network is to compute and optimize the coupling weights. Once these have been calculated, the required image can typically be generated in a small number of iterations.

Hence, the neural network implementation of the IFS based image coding system could lead to massively parallel implementation on dedicated hardware for generating IFS fractals. Although the essential algorithm stays the same as its conventional counterpart, neural networks could provide solutions for the computationally intensive problems which are currently under intensive investigation in conventional fractal based image compression research.

The neural network proposed in the original reference (Stark, 1991a) can only perform image generation from IFS codes.

Fig. 5: Basic fragment of neural network architecture

Research in fractal image compression, however, focuses on developing fast encoding algorithms so that real-time applications become feasible. This involves extensive search for the affine transformations that produce fractals for a given image and the generation of output entropy codes. The study described by Stark (1991a) actually covered the decompression side of the technology. In addition, the neural network requires a large number of neurons arranged in two dimensions, equal to the number of pixels in the image. Yet a large number of the neurons are not active according to the system design.

In the literature, not much work is reported on fractal coding with neural networks. The basic neural networks mentioned in the literature are: the self-organizing Kohonen map, the error back-propagation feed-forward neural network and the Hopfield neural network. The basic fragment of a neural network architecture is shown in Fig. 5. The basic idea is that, for a range and domain block of any size, ultimately four pixels are mapped onto one pixel.
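A minimal sketch of how the Fig. 5 fragment can be read: each group of four domain pixels feeds one output pixel through a single weighted unit. The equal averaging weights below are only an illustrative initialization; in a trained network they would be learned.

```python
import numpy as np

def fragment_forward(domain_pixels, weights=None):
    """One unit of the Fig. 5 fragment: four input pixels map onto one output pixel.

    domain_pixels : array of shape (4,), a 2x2 neighbourhood flattened
    weights       : array of shape (4,); defaults to plain averaging (illustrative only)
    """
    if weights is None:
        weights = np.full(4, 0.25)        # equal weights = 2x2 averaging
    return float(np.dot(weights, domain_pixels))

def shrink_domain_with_fragments(domain_block, weights=None):
    """Apply the fragment over a whole domain block, halving each dimension."""
    h, w = domain_block.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            out[i // 2, j // 2] = fragment_forward(
                domain_block[i:i + 2, j:j + 2].ravel(), weights)
    return out
```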

Fractal coding to color images: Fractal coding is generally applied to gray level images. To apply fractal coding to color images, the image is represented in gray levels and then fractal coding is applied. For instance, in the case of RGB color space images, the three color planes are separated out as R, G and B planes and individually these planes are gray level planes. Fractal coding is applied on each of these planes independently. This is the straight fractal coding. The partitioning schemes explained in the above section can be used in color image compression. To speed up the encoding process, the above mentioned approaches or methodologies can be used.
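A minimal sketch of the straight per-plane approach just described; fractal_encode_gray stands for any gray level fractal encoder (for example, the baseline encoder sketched earlier).

```python
import numpy as np

def fractal_encode_rgb(image_rgb, fractal_encode_gray):
    """Straight fractal coding of an RGB image: code each color plane independently.

    image_rgb           : array of shape (H, W, 3)
    fractal_encode_gray : any grayscale fractal encoder taking a 2-D plane
    Returns a dict of per-plane fractal codes.
    """
    planes = {"R": image_rgb[:, :, 0],
              "G": image_rgb[:, :, 1],
              "B": image_rgb[:, :, 2]}
    # each plane is an ordinary gray level image, so the gray level coder applies unchanged
    return {name: fractal_encode_gray(plane) for name, plane in planes.items()}
```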

Applying a neural network to fractal image compression reduces the encoding time: time is saved through the minimization of the domain pool using local search, and no transformations are required in the neural network implementation since the weights are updated through the training of the neural network. A given color image is first converted to gray level images by separating the three color planes, on which the neural network can be applied separately. In comparison with straight fractal coding, the encoding time of the neural network implementation will be less, with a small trade-off in image quality and compression ratio.

Applications of fractal coding: Fractal coding can be applied to gray level images and color images. Fractal coding is suitable for image domains where some loss of reconstruction quality is acceptable. In addition, fractal coding can be used in image retrieval. The only problem of fractal coding is the encoding time, which is huge in comparison with other methods. This area has still not been explored in depth, so it remains an open domain for researchers.

DISCUSSION

In this study, the basics of fractal coding are discussed. In addition, the open issues in fractal coding are summarized through a discussion of the existing work. For the execution of fractal coding, the given image is first divided into non-overlapping and overlapping blocks; this is nothing but the partitioning of the image. Numerous techniques for this are found in the literature, e.g., horizontal-vertical partitioning, triangular partitioning, quadtree partitioning, etc. But there is still scope for research in this area of fractal coding.

In particular, the encoding complexity of fractal coding is the key issue. From this point of view, numerous works have been carried out and are discussed in this study. For the optimization of encoding time, approaches related to domain block clustering, domain classification, domain block minimization, etc. are found in the literature. Encoding time optimization is the key problem or limitation of fractal coding; a number of approaches are reported in the literature and are briefly discussed in this study. Apart from this, the application of fractal coding to color images is also discussed. Different color spaces are available to represent the image.

Another approach to encoding time optimization is the neural network based approach. A neural network based approach for fractal image compression can be worked out; basically, self-organizing maps are used for this implementation. In this study, the basic fragment of a neural network for a possible implementation of fractal coding with a neural network is discussed. For color images, the same neural architecture can be used three times in the case of an RGB color space image. This study can be useful for a neural network based fractal color image compression approach; for this, the basic neural architecture in Fig. 5 can be used.

For the optimization of encoding time, other hybrid approaches are also possible, e.g., a genetic algorithm can be used for domain pool optimization, which can indirectly help in saving encoding time. Simulated annealing, ant colony optimization and rough sets can also be used for optimization of the domain pool.

Finally, this study can be useful for beginners in fractal image compression for gray and color images. This research may help with straight fractal coding and the neural network based approach. A great deal of research is still going on in optimization, i.e., minimization of encoding time, because it is the constraint that restricts the use of fractal coding, and there is a large scope for new researchers working in this domain to contribute in this area.

REFERENCES

  • Barnsley, M.F. and L.P. Hurd, 1992. Fractal Image Compression. 1st Edn., AK Peters Ltd., Boston, pp: 15-35


  • Barnsley, M., 1988. Fractals Everywhere. 1st Edn., Academic Press, Boston, pp: 11-24,


  • Barthel, K.U., J. Schuttemeyer, T. Voye and P. Noll, 1994. A new image coding technique unifying fractal and transform coding. Proceedings of the International Conference on Image Processing, November 13-16, 1994, Austin Texas, pp: 112-116.


  • Belloulata, K. and J. Konrad, 2002. Fractal image compression with region-based functionality. IEEE Transact. Image Process., 11: 351-362.


  • Belloulata, K., A. Baskurt, H. Benoit-Cattin, and R. Prost, 1998. Fractal coding of subbands with an oriented partition. Signal Process. Image Commun., 12: 243-252.


  • Bogdan, A. and H.E. Meadows, 1992. Kohonen neural network for image coding based on iteration transformation theory. Proc. SPIE Neural Stochastic Methods Image Signal Process., 1766: 425-436.


  • Breazu, M. and G. Toderean, 1998. Region-based fractal image compression using deterministic search. Proceedings of IEEE International Conference on Image Processing ICIP’98, October 04-07, Chicago, USA., pp: 742-746.


  • Costa, S. and S. Fiori, 2001. Image compression using principal component neural networks. Image Vision Comput., 19: 649-668.


  • Crilly, A.J., R.A. Earnshaw and H. Jones, 1991. Fractals and Chaos. 1st Edn., Springer-Verlag, London, pp: 145-164


  • Curtis, S.E. and C.E. Martin, 2005. Functional fractal image compression. Proceedings of the 6th Symposium on Trends in Functional Programming, September 23-24, 2005, Tallinn, Estonia, pp: 383-398.


  • Davis, G.M., 1998. A wavelet-based analysis of fractal image compression. IEEE Trans. Image Process., 7: 141-154.


  • Davoine, F., M. Antonini, J.M. Chassey and M. Barlaud, 1996. Fractal image compression based on delaunay triangulation and vector quantization. IEEE Trans. Image Process., 5: 338-346.


  • Fisher, Y. and S. Menlove, 1995. Fractal Encoding with hv Partitions. In: Fractal Image Compression: Theory and Applications to Digital Images, Fisher, Y. (Ed.). Springer-Verlag, London, UK, pp: 119-136


  • Fisher, Y., 1992. Fractal image compression. SIGGRAPH '92 Course Notes, The San Diego Supercomputer Center, University of California, San Diego.


  • Fisher, Y., 1994. Fractal Image Compression-Theory and Application. 1st Edn., Springer-Verlag, New York


  • Fisher, Y., 1995. Fractal Encoding with Quadtrees. In: Fractal Image Compression: Theory and Applications to Digital Images, Fisher, Y. (Ed.). Springer-Verlag, New York, pp: 55-77


  • Frigaard, C., J. Gade, T. Hemmingsen and T. Sand, 1994. Image compression based on a fractal theory. Internal Report S701, Institute for Electronics System, Aalborg University, Denmark.


  • Hamzaoui, R., 1995. Codebook clustering by self-organizing map for fractal image compression. Fractals, 5: 27-38.


  • Hamzaoui, R., M. Muller and D. Saupe, 1996. Enhancing fractal image compression with vector quantization. Proceedings of the Digital Signal Processing Workshop, September 1-4, 1996, Loen, Norway, pp: 231-234.


  • Hartenstein, H. and D. Saupe, 2000. Lossless Acceleration of Fractal Image Encoding Via the Fast Fourier Transform. 1st Edn., Elsevier, Germany, pp: 383-394


  • Hartenstein, H. and D. Saupe, 1998. Cost-based region growing for fractal image compression. Proceedings of the EUSIPCO, September 8-11, 1998, Rhodes, Greece, pp: 2313-2316.


  • Hartenstein, H., M. Ruhl and D. Saupe, 2000. Region-based fractal image compression. IEEE Trans. Image Process., 9: 1171-1184.


  • Hashemian, R. and S. Marivada, 2003. Improved image compression using fractal block coding. Proceedings of the 46th International Midwest Symposium on Circuits and Systems, December 27-30, 2003, Cairo, Egypt, pp: 544-547.


  • Hassaballah, M., M.M. Makky and Y.B. Mahdy, 2005. A fast fractal image compression method based entropy. Elect. Lett. Comput. Vision Image Anal., 5: 30-40.


  • Hufnagl, C. and A. Uhl, 2000. Algorithms for fractal image compression on massively parallel simd arrays. Real-Time Imag. J., 6: 267-281.


  • Jacquin, A.E., 1992. Image coding based on a fractal theory of iterated contractive image transformations. IEEE Trans. Image Process., 1: 18-30.


  • Jiang, J., 1999. Image compression with neural networks-a survey. Signal Process. Image Commun., 14: 737-760.


  • Kohonen, T., 2001. Self-Organizing Maps. Springer Series in Information Sciences, Springer-Verlag, New York


  • Koli, N.A. and M.S. Ali, 2006. Lossy color image compression technique using fractal coding with different size of range and domain blocks. Proceedings of the International Conference on Advanced Computing and Communications, December 20-23, 2006, Surathkal, India, pp: 236-239.


  • Kominek, J., 1995. Algorithms for fractal image compression. Proceedings of the IS and T/SPIE Symposium on Electronic Imaging: Science and Technology, 2419, Digital Video Compression: Algorithms and Technologies, February 1995, San Jose, CA., pp: 1-10.


  • Kopilovic, I., D. Saupe and R. Hamzaoui, 2001. Progressive fractal coding. Proceedings of the ICIP-IEEE International Conference on Image Processing, October 7-10, 2001, Thessaloniki, Greece, pp: 86-89.


  • Krupnik, H., D. Malah and E. Karnin, 1995. Fractal representation of images via the discrete wavelet transform. Proceedings of the 18th Convention of Electrical and Electronics Engineers in Israel, March 7-8, 1995, IEEE Xplore, London, pp: 2.2.2/1-2.2.2/5.


  • Lee, S., S. Omachi and A. Hirotomo, 2000. A parallel architecture for quadtree-based fractal image coding. Proceedings of the International Conference on Parallel Processing, August 21-24, 2000, IEEE Computer Society Washington, DC, USA., pp: 15-22.


  • Lin, H. and A.N. Venetsanopoulos, 1998. Fast fractal image compression using pyramids. Optical Eng. J., 36: 1720-1731.


  • Lu, N., 1997. Fractal Imaging. 1st Edn., Academic Press, New York, USA


  • Melnikov, G. and A.K. Katsaggelos, 2002. A jointly optimal fractal/DCT compression scheme. IEEE Trans. Multimedia, 4: 413-422.


  • Merrill, J.W.L. and R.F. Port, 1991. Fractally configured neural networks. Neural Networks, 4: 53-60.


  • Mitra, S.K., C.A. Murthy, M.K. Kundu, B.B. Bhattacharya and T. Acharya, 2001. Fractal image compression using iterated function system with probabilities. Proceedings of the International Conference on Information Technology: Coding and Computing, April 2-4, 2001, Las Vegas, NV, USA., pp: 191-195.


  • Nagano, H., A. Matsuura and A. Nagoya, 1999. An efficient implementation method of fractal image compression on dynamically reconfigurable architecture. Proceedings of the 6th Reconfigurable Architecture Workshop (RAW '99) associated with 13th International Parallel Processing Symposium and 10th Symposium on Parallel and Distributed Processing, April 12-16, 1999, Springer LNCS 1586, San Juan (Puerto Rico),USA., pp: 670-678.


  • Namphol, A., S. Chin and M. Arozullah, 1996. Image compression with a hierarchical neural network. IEEE Trans. Aerospace Elect. Syst., 32: 326-338.


  • Ochotta, T. and D. Saupe, 2004. Edge-based partition coding for fractal image compression. The Arabian J. Sci. Eng., 29: 63-63.


  • Ponomarenko, N., K. Egiazarian, V. Lukin and J. Astola, 2000. Speeding-up the fractal compression with clustering. Proceedings of the 5th All-Ukrainian International Conference on Signal/Image Processing and Pattern Recognition, November 27-December 1, 2000, Kiev, Ukraine, pp: 55-58.


  • Ramamurthi, B. and A. Gersho, 1986. Classified vector quantization of images. IEEE Trans. Commun., 34: 1105-1115.


  • Rejeb, B. and W. Anheier, 1997. A new approach for the speed up of fractal image coding. Proceedings of the 13th International Conference on Digital Signal Processing, July 2-4, 1997, Santorini, Greece, pp: 853-856.


  • Ruhl, M., H. Hartenstein and D. Saupe, 1997. Adaptive partitioning for fractal image compression. Proceedings of the International Conference on Image Processing, October 26-29, 1997, Santa Barbara, USA., pp: 310-313.


  • Russo, L.E. and C.E. Real, 1992. Image compression using outer product neural network. Proceedings of the International Conference on Acoustic, Speech and Signal Processing ICASSP-92, March 23-26, 1992, San Francisco, CA, USA., pp: 377-380.


  • Saupe, D. and M. Ruhl, 1996. Evolutionary fractal image compression. Proceedings of the International Conference on Image Processing, September 16-19, 1996, Lausanne, Switzerland, pp: 129-132.


  • Saupe, D. and R. Hamzaoui, 1994. A review of the fractal image compression literature. ACM SIGGRAPH Comput. Graph., 28: 268-276.


  • Saupe, D. and S. Jacobs, 1997. Variance-based quadtrees in fractal image compression. Elect. Lett., 33: 46-48.


  • Saupe, D., 1996. Fractal image compression via nearest neighbor search. Proceedings of the NATO ASI Conference on Fractal Image Encoding and Analysis, July 8-17, 1996, Trondheim, Norway, pp: 95-108.


  • Saupe, D., M. Ruhl, R. Hamzaoui, L. Grandi and D. Marini, 1998. Optimal hierarchical partitions for fractal image compression. Proceedings of the International Conference on Image Processing (ICIP'98), October 4-7, 1998, Chicago, USA., pp: 737-741.


  • Stark, J., 1991. A neural network to compute the Hutchinson metric in fractal image processing. IEEE Trans. Neural Network, 2: 156-158.


  • Stark, J., 1991. Iterated function systems as neural network. Neural Network, 4: 679-690.


  • Sun, Y.D. and Y. Zhao, 2003. A parallel implementation of improved fractal image coding based on tree topology. Chinese J. Elec., 12: 152-156.


  • Tank, D. and J. Hopfield, 1986. Simple neural optimization networks: An A/D converter, Signal Decision Circuit and a linear programming circuit. IEEE Trans. Circuits Syst., 33: 533-541.


  • Thakur, N.V. and O.G. Kakde, 2006. Fractal color image compression on a pseudo spiral architecture. Proceedings of the International Conference on Cybernetics and Intelligent Systems, June 7-9, 2006, Bangkok, Thailand, pp: 1-6.


  • Thomas, L. and F. Deravi, 1995. Region-based fractal image compression using Heuristic search. IEEE Trans. Image Process., 4: 823-838.


  • Tong, C.S. and M. Wong, 2002. Adaptive approximate nearest neighbor search for fractal image compression. IEEE Trans. Image Process., 11: 605-615.


  • Uhl, A. and J. Hammerle, 1997. Fractal image compression on mimd architectures I: Basic algorithms. Parallel Algorithms Appl., 11: 187-204.


  • Vences, L. and I. Rudomin, 1997. Genetic algorithms for fractal image and image sequence compression. Proceedings of Computacion Visual, pp: 35-44.


  • Wang, C.C. and J.Y. Kao, 2004. A fast encoding algorithm for fractal image compression. IEICE Elect. Exp., 1: 352-357.


  • Weinberger, M.J., J.J. Rissanen and R.B. Arps, 1996. Applications of Universal context modelling to Lossless compression of gray-scale images. IEEE Trans. Image Process., 5: 575-586.


  • Welstead, S., 1999. Fractal and Wavelet Image Compression Techniques. 1st Edn. SPIE Optical Engineering Press, Washington


  • Wohlberg, B. and G. de Jager, 1999. A review of the fractal image coding literature. IEEE Trans. Image Process., 8: 1716-1729.


  • Wu, P.Y., 2000. Fast fractal image compression. Proceedings of the International Conference on Information Technology: Coding and Computing, March 27-29, 2000, Las Vegas, NV, USA., pp: 54-59.


  • Zhang, Z. and Y. Zhao, 2006. Improving the performance of fractal image coding. Int. J. Innovative Comput. Inform. Control, 2: 387-398.


  • Zhao, Y. and B. Yuan, 1996. A hybrid image compression scheme combining block-based fractal coding and dct. Signal Process., Image Commun., 8: 73-78.


  • Sankaranarayanan, V., 1998. Fractal image compression. EE381K-14 Multidimensional Digital Signal Processing Projects, guided by Dr. Brian L. Evans, University of Texas, Austin.


  • Dony, R.D. and S. Haykin, 1995. Neural network approaches to image compression. IEEE Proc., 83: 288-303.
