The rapid growth of electronic imaging in recent years has led to large-scale digital media archives, which are becoming increasingly popular as more digital media content is created and deployed online every day. A critical issue in designing such archives is effective storage of the data: uncompressed data requires more storage and large bandwidth for transmission. Though the cost of storage is dropping rapidly, compression remains a challenging issue because of the growing number of multimedia-based online applications. This necessitates the design of highly efficient image compression systems that promise good image quality and compression ratios at low computational complexity. This book is an outcome of research into vector quantization based methods for compressing still images. It discusses novel image compression methods, with performance analysis using standard compression metrics, and their vital role in real-time applications. The proposed methods can be used in applications such as medical image processing, mobile applications, biometrics, remote sensing and other online web applications.
Applications that need to store large databases and/or transmit digital images requiring high bit rates over channels with limited bandwidth have demanded improved image compression techniques. Encoding an image into fewer bits is useful for reducing the storage requirements of image archival systems and for decreasing the bandwidth needed for image transmission. In standard image compression methods (e.g. JPEG), as the bits per pixel decrease, the picture quality deteriorates because of the larger quantization step size. In this work, image compression techniques are designed with the human visual system in mind. A practical and effective image compression system based on neuro-wavelet models has been proposed, combining the advantages of neural networks and the wavelet transform with vector quantization. Fuzzy c-means and fuzzy vector quantization algorithms have also been used to exploit uncertainty for the benefit of the clustering process. We have compared the performance of different clustering algorithms applied to the proposed encoder. Experimental results on real images of varying complexity have established the robustness and effectiveness of the method.
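As a rough illustration of the clustering step, the sketch below implements plain fuzzy c-means over flattened image blocks to produce a VQ codebook with soft memberships. The block size, codebook size and function names are illustrative assumptions, not the encoder described in the book.

```python
import numpy as np

def fuzzy_c_means(vectors, n_clusters=16, m=2.0, n_iter=50, seed=0):
    """Cluster training vectors (e.g. 4x4 image blocks flattened to length-16
    vectors) into a VQ codebook with fuzzy memberships."""
    rng = np.random.default_rng(seed)
    n = len(vectors)
    # random initial membership matrix; each row sums to 1
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # centroid update: membership-weighted mean of the training vectors
        centers = (um.T @ vectors) / um.sum(axis=0)[:, None]
        # distances from every vector to every centroid
        d = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # membership update
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# toy usage: codebook for random 4x4 blocks; the argmax gives the crisp VQ index
blocks = np.random.rand(1000, 16)
codebook, memberships = fuzzy_c_means(blocks)
hard_index = memberships.argmax(axis=1)
```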
Most image acquisition and editing tools use the JPEG standard for image compression. Quantization table estimation is essential for establishing the compression history of a bitmap, which is particularly useful in applications such as image authentication, JPEG artifact removal, and JPEG re-compression with less distortion. The histogram of Discrete Cosine Transform (DCT) coefficients contains information on the compression parameters for single JPEG-compressed images and previously compressed bitmaps. One method proposed here inspects the peaks of the DCT coefficient histogram to estimate the quantization steps. Another, based on streamed DCT coefficients, reconstructs the dequantized DCT coefficients, which are then used together with their corresponding compressed values to estimate the quantization steps. Extending the two methods to bitmaps proves very helpful in identifying previous compression and, if any, the quantization tables. The estimated table is used with two distortion measures, blocking artifact and average distortion, to inspect for possible local forgeries. The methods score poorly or fail under heavy or double compression.
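A minimal sketch of the histogram-peak idea follows: collect one DCT frequency from every 8x8 block of a decompressed bitmap and take the dominant spacing between histogram peaks as the quantization step for that frequency. Function names and thresholds are assumptions for illustration, not the exact estimators proposed in the book.

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(img, block=8):
    """2-D DCT of each non-overlapping 8x8 block of a grayscale image."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    out = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = img[y:y+block, x:x+block].astype(float) - 128.0
            out[y:y+block, x:x+block] = dct(dct(b.T, norm='ortho').T, norm='ortho')
    return out

def collect_coeff(dct_img, u, v, block=8):
    """Gather the (u, v) coefficient from every block."""
    return dct_img[u::block, v::block].ravel()

def estimate_q_step(coeffs, min_count=20):
    """Estimate the quantization step for one DCT frequency as the most common
    spacing between peaks of the rounded-coefficient histogram."""
    vals = np.round(coeffs).astype(int)
    hist = np.bincount(vals - vals.min())
    peaks = np.where((hist[1:-1] > hist[:-2]) & (hist[1:-1] >= hist[2:]) &
                     (hist[1:-1] > min_count))[0] + 1
    if len(peaks) < 2:
        return 1  # too few peaks: assume no (or unit) quantization
    gaps = np.diff(peaks)
    return int(np.bincount(gaps).argmax())

# usage on a grayscale array `img`: step for the (0, 1) frequency
# q01 = estimate_q_step(collect_coeff(blockwise_dct(img), 0, 1))
```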
This book ranges from image compression methods to advanced video coding algorithms and thus provides an exclusive, self-contained reference for practitioners and a basis for future study, research & development. It introduces image compression methods for binary, grayscale, colour-palette, true-colour and video images. It works up from basic principles to advanced video compression systems, including MPEG 1, 2, 4 & 7, JPEG, H.261, H.263 & H.264/MPEG-4 AVC. Topics include fractal-based compression, statistical modelling, context modelling, Huffman coding, arithmetic coding, Golomb coding, transform-based modelling, run-length modelling, predictive modelling, progressive image decompression and vector quantization. Existing & forthcoming standards such as JBIG1, JBIG2, JPEG, JPEG-LS & JPEG-2000 are covered. For each algorithm, issues such as quality versus bit rate (MSE and bits per pixel), susceptibility to channel errors & implementation complexity are considered. The book contains a wealth of reconstructed & error images illustrating the outcome of each compression technique on a consistent image set, thus allowing a direct assessment of bit rates & reconstructed image quality.
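As a taste of the entropy coders listed above, here is a small sketch of Golomb coding for non-negative integers; the parameter choice and helper names are illustrative, and the book's treatment is more general.

```python
import math

def golomb_encode(n, m):
    """Golomb code of non-negative integer n with parameter m (m >= 1):
    unary-coded quotient, then the remainder in truncated binary."""
    q, r = divmod(n, m)
    bits = '1' * q + '0'                      # unary part: q ones, one zero
    if m == 1:
        return bits                           # m == 1 degenerates to pure unary
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                     # remainders below cutoff use b-1 bits
    if r < cutoff:                            # only possible when m is not a power of two
        return bits + format(r, f'0{b-1}b')
    return bits + format(r + cutoff, f'0{b}b')

# e.g. Golomb-Rice with m = 4 (a power of two), as used in JPEG-LS-style coders
print([golomb_encode(n, 4) for n in range(6)])
# ['000', '001', '010', '011', '1000', '1001']
```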
This book presents a novel approach to face recognition using vector quantization. Face recognition is one of the most popular biometric techniques in use today. A face recognition system is a computer application that automatically identifies or verifies a person from a digital image or a video frame from a video source. Vector quantization (VQ) is a simple image compression technique that is efficient for image coding because it reduces computational complexity. VQ compression is highly asymmetric in processing time: designing an optimal codebook requires a large amount of computation, whereas decompression is very fast, needing only one table lookup per vector. This makes VQ an attractive choice for face recognition. In this book four different VQ algorithms, namely LBG, KPE, KMCG and KFCG, are used to study the efficiency of the face recognition system. Efficiency is measured in terms of recognition rate and computational complexity. It has been observed that KPE, KMCG and KFCG outperform LBG, which is regarded as the benchmark in vector quantization. The proposed techniques are also compared with the traditional DCT and Walsh transforms, and prove better than these transform-based techniques.
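Since LBG serves as the benchmark here, a compact sketch of its splitting-plus-Lloyd training loop is given below; the block size, codebook size and iteration count are simplified assumptions rather than the exact settings used in the book.

```python
import numpy as np

def lbg_codebook(vectors, size=16, eps=0.01, n_iter=20):
    """Train a VQ codebook with the LBG splitting procedure: start from the
    global mean, double the codebook by perturbation, then refine with Lloyd
    (nearest-neighbour assignment / centroid update) iterations.
    `size` is assumed to be a power of two."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # split every codevector into a +/- perturbed pair
        codebook = np.concatenate([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            # nearest-neighbour assignment
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            idx = d.argmin(axis=1)
            # centroid update (keep the old codevector if a cell is empty)
            for k in range(len(codebook)):
                members = vectors[idx == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# toy usage on random 4x4 image blocks
blocks = np.random.rand(500, 16)
cb = lbg_codebook(blocks, size=8)
```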
Different methods of compression have been suggested and investigated; these methods exploit spatial segmentation, frequency transforms and quantization. Various transform schemes were tested in different combinations. Image coding algorithms that couple the frequency domain with the spatial domain have attracted wide attention because their good performance appears to confirm the promised efficiency of hierarchical representations. Spatial segmentation is an important step in analysing the region of interest; it acts as a feature classification that determines how each segmented region is processed. Here a region is a variable-size block, which overcomes the disadvantages of fixed-size blocks through quadtree partitioning. The wavelet transform provides a framework used across a variety of research areas. In this book a lifting-scheme architecture was implemented using the 9/7-tap and 5/3-tap wavelet filters. Vector quantization is a well-known technique among block-based compression techniques. Several modifications have been proposed, implemented and tested, and the results indicate that the performance of the suggested system is quite acceptable.
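A minimal one-dimensional sketch of the lifting scheme for the integer 5/3 (LeGall) filter is shown below; a full 2-D implementation would apply it to the rows and columns at each decomposition level, and the border handling here is a simplifying assumption.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the integer 5/3 (LeGall) wavelet via lifting on a 1-D
    signal of even length: the predict step gives the detail d, the update
    step gives the approximation s (simple symmetric extension at the borders)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # predict: detail = odd sample minus the average of its even neighbours
    right = np.concatenate([even[1:], even[-1:]])      # extend at the right border
    d = odd - ((even + right) >> 1)
    # update: approximation = even sample plus a quarter of neighbouring details
    left = np.concatenate([d[:1], d[:-1]])             # extend at the left border
    s = even + ((left + d + 2) >> 2)
    return s, d

s, d = lift_53_forward([10, 12, 15, 14, 13, 11, 9, 8])
```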
Conventional information retrieval is based solely on text, and approaches to textual information retrieval have been transplanted into image retrieval in a variety of ways, including the representation of an image as a vector of feature values of different modalities. It has been widely recognized that image retrieval techniques should integrate different modalities, such as colour, texture and associated text keywords. Taking the cue from text-based retrieval techniques, we construct “visual keywords” using vector quantization of small image tiles. Visual and text keywords are combined to represent an image as a single multimodal vector. We demonstrate the power of these multimodal image keywords for clustering and retrieval of relevant images from a large collection.
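A rough sketch of how such a multimodal vector could be assembled is given below: each image tile is vector-quantized against a tile codebook, the index counts form the visual-keyword histogram, and a bag of text keywords is appended. The tile size, codebook and vocabulary handling are illustrative assumptions, not the exact construction used in the work.

```python
import numpy as np

def visual_keyword_histogram(image, codebook, tile=8):
    """Represent a grayscale image as a histogram over 'visual keywords':
    each non-overlapping tile is vector-quantized against a tile codebook
    and the index counts form the visual part of the multimodal vector."""
    h, w = image.shape
    hist = np.zeros(len(codebook))
    for y in range(0, h - h % tile, tile):
        for x in range(0, w - w % tile, tile):
            v = image[y:y+tile, x:x+tile].astype(float).ravel()
            idx = np.linalg.norm(codebook - v, axis=1).argmin()
            hist[idx] += 1
    return hist / max(hist.sum(), 1)

def multimodal_vector(image, codebook, text_keywords, vocabulary):
    """Concatenate the visual-keyword histogram with a binary bag of text
    keywords into one retrieval vector."""
    text_part = np.array([1.0 if w in text_keywords else 0.0 for w in vocabulary])
    return np.concatenate([visual_keyword_histogram(image, codebook), text_part])
```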
This book presents a new medical image compression technique based on improved statistical modelling of subband discrete wavelet transform coefficients. The performance of the coder is further improved by using region-of-interest coding. The concepts of image compression techniques are clearly explained. Statistical modelling is explained and its behaviour analysed on different medical images. A new quantization scheme, region-of-interest space-frequency quantization (ROI-SFQ), is explained and its behaviour analysed on different types of medical images; the effect of the choice of wavelet filters on the ROI-SFQ coder is also analysed. References to some of the best traditional (and non-traditional) texts and papers are given for further application-specific study.
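The ROI-SFQ scheme itself is beyond a short snippet, but the toy fragment below illustrates the basic idea behind ROI-prioritized quantization: coefficients inside the region of interest are quantized with a finer step than the background. The step sizes and the plain uniform quantizer are assumptions for illustration only, not the book's SFQ algorithm.

```python
import numpy as np

def roi_quantize(coeffs, roi_mask, step_roi=2.0, step_bg=16.0):
    """Toy ROI-prioritized quantization: wavelet (or any transform) coefficients
    inside the region of interest get a fine quantization step, the background
    a coarse one, so the diagnostically relevant region keeps more detail."""
    step = np.where(roi_mask, step_roi, step_bg)
    return np.round(coeffs / step) * step
```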
Various compression methods have been proposed to achieve high compression ratios and high image quality in low computation time. One of these methods is fractal image compression. The basic idea of fractal image compression is the partitioning of the input image into non-overlapping range blocks. For every range block a similar but larger domain block is found. The set of coefficients mapping each domain block to its range block, using an affine transform, is recorded as the compressed data. The compressed image data set is called the Iterated Function System (IFS) mapping set. The decoding process applies the determined IFS transformations to any initial image, and the process is repeated many times until the attractor is reached.
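A toy version of the encoding search is sketched below: every range block is matched against contracted domain blocks with a least-squares fit of contrast and brightness. Real encoders also search the eight isometries of each domain block and classify blocks to prune the search; those steps are omitted here, and the block sizes are illustrative.

```python
import numpy as np

def encode_fractal(img, range_size=4):
    """Toy fractal encoder: match every non-overlapping range block to the
    best contracted domain block under r ~ s * d + o, recording the domain
    position together with the contrast s and brightness o."""
    img = img.astype(float)
    h, w = img.shape                       # assumed multiples of 2 * range_size
    dsize = 2 * range_size
    domains = []
    for y in range(0, h - dsize + 1, dsize):
        for x in range(0, w - dsize + 1, dsize):
            d = img[y:y+dsize, x:x+dsize]
            # contract the domain block by 2x2 averaging down to range-block size
            d = d.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
            domains.append(((y, x), d.ravel()))
    mappings = []
    for y in range(0, h, range_size):
        for x in range(0, w, range_size):
            r = img[y:y+range_size, x:x+range_size].ravel()
            best = None
            for pos, d in domains:
                if d.std() < 1e-9:         # flat domain: brightness-only match
                    s, o = 0.0, r.mean()
                else:
                    s, o = np.polyfit(d, r, 1)   # least-squares contrast/brightness
                err = np.sum((s * d + o - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            mappings.append(((y, x),) + best[1:])
    return mappings
```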
This book attempts to develop low-power VLSI architectures for neural-network-based image compression. Power is a very important criterion, since a neural network is a massively parallel structure and hence consumes considerable power. The book therefore explains how to develop, design and implement dedicated low-power VLSI architectures for image compression based on neural networks, optimized for speed, area and power. It also presents a new architecture for neural-network-based image compression targeted at ASIC implementation. The results for the different architectures are discussed together with the ASIC implementation results obtained for complexity, power, area and speed.
Image compression involves reducing the size of image data files while retaining the necessary information. Compression is a necessary and essential method for creating image files of manageable and transmittable size. Many types of compression algorithms have been developed. In this work, different image compression methods were used to compress MRI (magnetic resonance) images and transmit them over an RS-232 serial port. Compression of medical images is very important for telemedicine: reducing the size of the images also reduces the transmission time, which is especially critical in emergency cases, and lowers the cost of communication (image transmission).
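A minimal sketch of the transmission side is shown below, using Python's zlib as a stand-in for the compression step (the work's own methods are not reproduced here) and the pyserial package for the RS-232 link; the port name, baud rate and framing are illustrative assumptions.

```python
import zlib
import serial  # pyserial

def send_image_over_serial(raw_pixels: bytes, port="/dev/ttyUSB0", baud=115200):
    """Compress a raw image buffer and push it through an RS-232 link, prefixing
    the payload with its 4-byte length so the receiver knows how much to read."""
    payload = zlib.compress(raw_pixels, level=9)
    with serial.Serial(port, baud, timeout=5) as link:
        link.write(len(payload).to_bytes(4, "big"))
        link.write(payload)
    return len(payload)

def receive_image_over_serial(port="/dev/ttyUSB0", baud=115200):
    """Counterpart receiver: read the length header, then the compressed
    payload, and decompress it back to the raw pixel buffer."""
    with serial.Serial(port, baud, timeout=5) as link:
        size = int.from_bytes(link.read(4), "big")
        payload = link.read(size)
    return zlib.decompress(payload)
```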
Content-based image retrieval (CBIR) means that the search makes use of the contents of the images themselves, rather than relying on human-supplied metadata such as captions or keywords. With content-based techniques, a user can specify the contents of interest in a query; the contents may be colours, textures, shapes, or the spatial layout of the target images. In this book we propose a CBIR system implemented with a combination of features. Block Truncation Coding (BTC) is mainly used for image compression; the proposed method adapts the original Block Truncation Coding, as Modified BTC (MBTC), for content-based image retrieval. Texture features are obtained by calculating the standard deviation of the Gabor-filtered image. The Gabor-filter and Modified Block Truncation Coding based feature vector is extracted and then compared with the corresponding feature vectors of the images stored in the database, and images are retrieved based on feature similarity. The proposed method is tested thoroughly, and precision and recall are used as statistical comparison parameters to assess the retrieval effectiveness of the MBTC and Gabor-filter-based feature vectors.
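For reference, a compact sketch of the original BTC block coder is given below (the book's MBTC modification and the Gabor texture features are not shown); the two reconstruction levels preserve each block's mean and standard deviation.

```python
import numpy as np

def btc_encode_block(block):
    """Classic Block Truncation Coding of one image block: keep a 1-bit map
    (pixel >= mean) plus two reconstruction levels that preserve the block
    mean and standard deviation."""
    block = block.astype(float)
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, p = bitmap.sum(), block.size
    if q == 0 or q == p:            # flat block: a single level suffices
        return bitmap, m, m
    low = m - s * np.sqrt(q / (p - q))
    high = m + s * np.sqrt((p - q) / q)
    return bitmap, low, high

def btc_decode_block(bitmap, low, high):
    """Rebuild the block from the bitmap and the two levels."""
    return np.where(bitmap, high, low)
```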
In this book, the author proposes new offline handwritten signature identification and verification based on contourlet coefficients as the feature extractor and a Support Vector Machine (SVM) as the classifier. In the proposed method, the signature image is first size-normalized. After preprocessing, contourlet coefficients are computed at a particular scale and direction using the contourlet transform in the feature extraction stage. The extracted coefficients are then fed as a feature vector to a layer of SVM classifiers, with one classifier per class. Each SVM classifier determines whether or not the input image belongs to its class. A key feature of the proposed method is its independence from the nationality of the signers. The methodology is implemented in MATLAB R2009a with the Image Processing Toolbox. The experiments are carried out on an English signature database, on which a 94% identification rate is achieved.
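A minimal sketch of the one-SVM-per-class layer is given below, using scikit-learn in place of the MATLAB implementation; the contourlet feature extraction is assumed to have been done already, and the kernel settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_one_vs_rest_svms(features, labels):
    """One binary SVM per signer class, as in a one-vs-rest layer: each
    classifier decides whether a feature vector belongs to its class."""
    classifiers = {}
    for cls in np.unique(labels):
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(features, (labels == cls).astype(int))
        classifiers[cls] = clf
    return classifiers

def identify(classifiers, feature_vector):
    """Pick the class whose SVM gives the largest decision value."""
    scores = {cls: clf.decision_function([feature_vector])[0]
              for cls, clf in classifiers.items()}
    return max(scores, key=scores.get)
```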
This thesis addresses the figure-ground segmentation problem in the context of complex systems for automatic object recognition. First, the problem of image segmentation in general terms is introduced, followed by a discussion of its importance for online and interactive acquisition of visual representations. Second, a machine learning approach using artificial neural networks is presented. This approach, based on Generalized Learning Vector Quantization, is investigated in challenging scenarios such as the real-time figure-ground segmentation of complex-shaped objects under continuously changing environmental conditions. The ability to fulfill these requirements characterizes the novelty of the approach compared to state-of-the-art methods. Finally, the proposed technique is extended in several aspects, which yields a framework for object segmentation that can improve current systems for visual object learning and recognition.
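A simplified sketch of a single Generalized LVQ update is shown below: the nearest prototype of the correct class is attracted to the sample and the nearest wrong-class prototype is repelled, weighted by the relative-distance cost. The learning rate, sigmoid cost and absence of metric adaptation are simplifying assumptions relative to the thesis.

```python
import numpy as np

def glvq_update(prototypes, proto_labels, x, y, lr=0.05):
    """One simplified Generalized LVQ step. `prototypes` is a float array of
    shape (n_prototypes, dim); the nearest correct prototype is pulled towards
    sample x, the nearest wrong-class prototype pushed away, both scaled by the
    derivative of the relative-distance cost mu = (d+ - d-) / (d+ + d-)."""
    d = np.linalg.norm(prototypes - x, axis=1) ** 2
    correct = proto_labels == y
    jp = np.where(correct)[0][d[correct].argmin()]      # nearest correct prototype
    jm = np.where(~correct)[0][d[~correct].argmin()]    # nearest incorrect prototype
    dp, dm = d[jp], d[jm]
    mu = (dp - dm) / (dp + dm)
    g = 1.0 / (1.0 + np.exp(-mu))                       # sigmoid cost
    scale = lr * g * (1.0 - g) / (dp + dm) ** 2
    prototypes[jp] += scale * dm * (x - prototypes[jp])
    prototypes[jm] -= scale * dp * (x - prototypes[jm])
    return prototypes
```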
In this book, image compression using the fast DCT and the fast DWT with different wavelet families, for various image frame sizes and varying decomposition levels, is studied. The experiments revealed that the image signals can be compressed without significant degradation of visual quality, since they contain a high degree of redundant information. The redundancy removal is achieved via the two most common techniques, the DCT and the DWT, which are tested on various test images. The noticeable blocking artifacts inherent in the DCT-based reconstructed images vanish in the DWT-based reconstructions without sacrificing visual quality; in particular, images reconstructed from deeper wavelet decomposition levels have higher PSNRs. A comparative study of different wavelet families against the fast cosine transform (2-D DCT) on our own test image set has been carried out using MSE, PSNR and compression ratio (CR).
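For completeness, the comparison metrics mentioned above can be computed as in the short sketch below (8-bit images and a byte-count-based compression ratio are assumptions for illustration):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of identical size."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit images."""
    e = mse(original, reconstructed)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(original_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes
```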