Adaptive Compression of Images Using High-order Entropy Coding

Author: Steve Shu Yu
Publisher:
ISBN:
Category:
Languages: en
Pages: 278

Book Description

Context Quantization for Adaptive Entropy Coding in Image Compression

Author: Tong Jin
Publisher:
ISBN:
Category: Data compression (Computer science)
Languages: en
Pages: 0

Book Description
Context-based adaptive entropy coders are used in newer compression standards to achieve rates that are asymptotically close to the source entropy: separate arithmetic coders are used for a large number of possible conditioning classes. This greatly reduces the amount of sample data available for learning each model. To combat this problem, referred to in the literature as the context dilution problem, one needs to balance the benefit of high-order context modeling against the learning cost associated with context dilution. In the first part of this dissertation, we propose a context quantization method to attack the context dilution problem for non-binary sources. It begins with a large number of conditioning classes and then uses a clustering procedure to reduce the number of contexts to a desired size. The main operational difficulty in practice is how to describe the resulting complex partition of the context space. To deal with this problem, we present two novel methods, coarse context quantization (CCQ) and entropy coded state sequence (ECSS), for efficiently describing the context book, which completely specifies the context quantizer mapping. The second part of this dissertation considers binarization of non-binary sources. As in the non-binary case, the cost of sending the complex context description as side information is very high. Until now, context quantizers have been designed off-line and optimized with respect to the statistics of a training set, and the problem of handling the mismatch between the training set and an input image has remained largely untreated. We propose three novel schemes, minimum description length, image dependent and minimum adaptive code length, to deal with this problem. Experimental results show that our approach outperforms the JBIG and JBIG2 standards, with peak compression improvements of 24% and 11% respectively on the chosen set of halftone images. In the third part of this dissertation, we extend our study to the joint design of both quantizers and entropy coders. We propose a context-based classification and adaptive quantization scheme, which essentially produces a finite-state quantizer and entropy coder within the same procedure.
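
The central idea above can be illustrated with a small sketch (a generic, assumed illustration, not the dissertation's CCQ or ECSS schemes): conditional symbol histograms are gathered for every raw conditioning context, and contexts with similar conditional distributions are merged by clustering so that each surviving context pools enough samples to learn reliable statistics.

```python
# Illustrative context quantization sketch (not the dissertation's CCQ/ECSS
# methods): merge raw conditioning contexts whose conditional symbol
# distributions look alike, so the remaining contexts see enough training
# samples to fight context dilution. Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

def quantize_contexts(samples, num_contexts, alphabet_size, num_quantized):
    """samples: iterable of (raw_context_index, symbol) pairs from training data."""
    # 1. Conditional histogram P(symbol | context) for every raw context.
    hist = np.ones((num_contexts, alphabet_size))           # Laplace smoothing
    for ctx, sym in samples:
        hist[ctx, sym] += 1
    cond = hist / hist.sum(axis=1, keepdims=True)

    # 2. Cluster contexts with similar conditional distributions.
    labels = KMeans(n_clusters=num_quantized, n_init=10,
                    random_state=0).fit_predict(cond)

    # 3. The "context book": raw context index -> quantized coding context.
    return labels

# Example: 4096 raw contexts of an 8-ary source collapsed to 16 coding states.
rng = np.random.default_rng(0)
train = [(int(rng.integers(4096)), int(rng.integers(8))) for _ in range(100_000)]
print(quantize_contexts(train, 4096, 8, 16)[:20])
```

A practical design would cluster with a divergence measure tied to coding cost (for example the Kullback-Leibler distance to the cluster centroid) rather than plain Euclidean distance, and the resulting mapping still has to be conveyed to the decoder, which is precisely the context-book description problem the dissertation attacks.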

Comparison of Lossless Image Compression Techniques based on Context Modeling

Author: Mohammad El-Ghoboushi
Publisher: GRIN Verlag
ISBN: 3656932832
Category: Computers
Languages: en
Pages: 85

Book Description
Master's Thesis from the year 2014 in the subject Computer Science - Software, course: Image Processing, language: English, abstract: In this thesis, various methods for lossless compression of source image data are analyzed and discussed. The main focus of this work is lossless compression algorithms based on context modeling using a tree structure. We compare the CALIC and GCT-I algorithms to the JPEG2000 standard algorithm, which is taken as the reference for comparison. This work includes research on how to modify the CALIC algorithm in continuous-tone mode by truncating the tails of the error histogram, which may improve CALIC's compression performance. We also propose a modification to CALIC in binary mode that eliminates the error feedback mechanism. Whenever a pixel to be encoded has a grey level different from all of its neighboring pixels, CALIC triggers an escape sequence that switches the algorithm from binary mode to continuous-tone mode, meaning that such a pixel is treated as if it lay in a continuous-tone region. This minor modification should improve CALIC's performance on binary images. Finally, we discuss GCT-I on medical images and compare the results to the JPEG2000 standard.
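
The binary-mode escape behaviour described above can be sketched as follows (a simplified, assumed illustration of standard CALIC behaviour, not the thesis's modified algorithm): when the causal neighbourhood contains at most two grey levels, the current pixel is coded as a ternary symbol, and a pixel matching neither level forces an escape to continuous-tone mode.

```python
# Simplified sketch of CALIC's binary-mode decision as described above
# (illustrative only; real CALIC uses a specific six-pixel causal template
# and context-based arithmetic coding of the resulting ternary symbol).
ESCAPE = 2

def binary_mode_symbol(pixel, neighbours):
    """pixel: current grey level; neighbours: grey levels of causal neighbours."""
    values = sorted(set(neighbours))
    if len(values) > 2:
        return None          # neighbourhood is not binary: stay in continuous-tone mode
    s1, s2 = values[0], values[-1]
    if pixel == s1:
        return 0             # matches the first local grey level
    if pixel == s2:
        return 1             # matches the second local grey level
    return ESCAPE            # new grey level: switch to continuous-tone mode

print(binary_mode_symbol(255, [255, 255, 0, 255]))   # -> 0
print(binary_mode_symbol(128, [255, 255, 0, 255]))   # -> 2 (escape)
```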

Optimization of Entropy Coding Efficiency Under Complexity Constraints in Image and Video Compression

Author: Fan Ling
Publisher:
ISBN:
Category: Image compression
Languages: en
Pages: 288

Book Description
In this dissertation, the fundamentals of image/video compression systems and entropy coding are introduced. Since the joint entropy per data point is never larger than the marginal entropy, higher-dimensional joint entropy coding improves coding efficiency; however, because the alphabet size of joint symbols grows exponentially with the symbol dimension, complexity is the major concern in entropy coding design. Under complexity constraints, several entropy coding techniques are developed in this dissertation for applications in image/video coding: (a) Based on the regular Tree-Structured VQ scheme, a simple yet efficient Tree-Structured Arithmetic Coded VQ technique (TS-AC-VQ) is presented. To keep the advantages of TSVQ, such as fast codebook training and encoding, while also reducing storage requirements, a new technique called Tree-Structured Arithmetic Coded Lattice VQ (TS-AC-LVQ) is developed; it has the characteristics of both TSVQ and Lattice VQ, and its high coding efficiency relies heavily on smart entropy coding. (b) Since the statistics of the output of the quantization stage in an image/video coding system vary with certain quantization parameters, such as the quantization step QP in scalar quantization, it is efficient to adapt the entropy coding to those parameters. We call such entropy coding techniques Quantizer Scalar Entropy Coding techniques; two such schemes are developed to code 8 x 8 DCT coefficients. (c) Based on conventional adaptive arithmetic coding, Dimensional Adaptive Arithmetic Coding is presented. This technique not only generates the entropy coding table on the fly, as adaptive arithmetic coding does, but also increases the symbol dimension to achieve high-dimensional joint entropy coding. (d) To make the mixing of an arithmetic-coded bitstream with a fixed-length-coded bitstream more efficient, a Zero-Look-Ahead arithmetic coder is developed.
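
The claim that joint entropy per sample never exceeds the marginal entropy, which motivates the high-dimensional coding techniques listed above, can be checked numerically with a toy correlated source (an assumed example, not taken from the dissertation):

```python
# Toy demonstration: for a correlated source, the entropy of symbol pairs,
# divided by two, is lower than the first-order entropy -- but the pair
# alphabet is squared in size, which is the complexity problem the
# dissertation's constrained designs (TS-AC-VQ, TS-AC-LVQ, ...) address.
import random
from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

random.seed(0)
seq = [0]
for _ in range(200_000):                    # Markov source: symbols tend to repeat
    seq.append(seq[-1] if random.random() < 0.9 else random.randrange(4))

marginal = entropy(Counter(seq))                       # H(X) in bits/symbol
pairs = Counter(zip(seq[0::2], seq[1::2]))             # non-overlapping pairs
joint_per_symbol = entropy(pairs) / 2                  # H(X1, X2) / 2

print(f"marginal entropy        : {marginal:.3f} bits/symbol")
print(f"joint (pair) entropy / 2: {joint_per_symbol:.3f} bits/symbol")
```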

Digital Image Compression Techniques

Author: Majid Rabbani
Publisher: SPIE Press
ISBN: 9780819406484
Category: Computers
Languages: en
Pages: 248

Book Description
In order to utilize digital images effectively, specific techniques are needed to reduce the number of bits required for their representation. This Tutorial Text provides the groundwork for understanding these image compression techniques and presents a number of different schemes that have proven useful. The algorithms discussed in this book are concerned mainly with the compression of still-frame, continuous-tone, monochrome and color images, but some of the techniques, such as arithmetic coding, have found widespread use in the compression of bilevel images. Both lossless (bit-preserving) and lossy techniques are considered. A detailed description of the compression algorithm proposed as the world standard (the JPEG baseline algorithm) is provided. The book contains approximately 30 pages of reconstructed and error images illustrating the effect of each compression technique on a consistent image set, thus allowing for a direct comparison of bit rates and reconstructed image quality. For each algorithm, issues such as quality vs. bit rate, implementation complexity, and susceptibility to channel errors are considered.

Lossless Compression of Spectrally Limited Color Images

Author: Bruce K. Durgan
Publisher:
ISBN:
Category: Algorithms
Languages: en
Pages: 248

Book Description


Image and Text Compression

Author: James A. Storer
Publisher: Springer Science & Business Media
ISBN: 1461535964
Category: Technology & Engineering
Languages: en
Pages: 355

Book Description
James A. Storer, Computer Science Dept., Brandeis University, Waltham, MA 02254. Data compression is the process of encoding a body of data to reduce storage requirements. With lossless compression, data can be decompressed to be identical to the original, whereas with lossy compression, decompressed data may be an acceptable approximation (according to some fidelity criterion) to the original. For example, with digitized video, it may only be necessary that the decompressed video look as good as the original to the human eye. The two primary functions of data compression are: Storage: The capacity of a storage device can be effectively increased with data compression software or hardware that compresses a body of data on its way to the storage device and decompresses it when it is retrieved. Communications: The bandwidth of a digital communication link can be effectively increased by compressing data at the sending end and decompressing data at the receiving end. Here it can be crucial that compression and decompression can be performed in real time.
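
The "identical to the original" property of lossless compression mentioned above is easy to demonstrate with any general-purpose lossless codec; the following sketch uses zlib from the Python standard library purely as an illustration and is not connected to the methods in this volume:

```python
# Lossless round trip: the decompressed bytes are bit-for-bit identical to the
# original, while highly redundant input shrinks substantially.
import zlib

original = b"ABABABABABAB" * 1000             # highly redundant dummy payload
compressed = zlib.compress(original, 9)       # maximum compression level
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed), "bytes")
assert restored == original                   # exact reconstruction
```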

Efficient Predictive Algorithms for Image Compression

Author: Luís Filipe Rosário Lucas
Publisher: Springer
ISBN: 3319511807
Category: Technology & Engineering
Languages: en
Pages: 180

Book Description
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is investigated for compression of Light Field images based on the HEVC technology. A new linear prediction method using sparse constraints is also described, enabling improved coding performance of the HEVC standard, particularly for images with complex textures based on repeated structures. Finally, the authors present a new, generalized intra-prediction framework for the HEVC standard, which unifies the directional prediction methods used in current video compression standards with linear prediction methods using sparse constraints. Experimental results for the compression of natural images are provided, demonstrating the advantage of the unified prediction framework over the traditional directional prediction modes used in the HEVC standard.
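
As a point of reference for the directional prediction framework discussed above, the following sketch shows only the two simplest directional modes, horizontal and vertical, applied to a single block (an assumed toy illustration; HEVC's actual angular modes and the book's sparse linear predictors are far richer):

```python
# Toy directional intra prediction: predict an NxN block from its reconstructed
# top row or left column and keep the mode with the smaller residual energy.
# Illustrative only -- not HEVC's 35 intra modes or the book's proposals.
import numpy as np

def intra_predict(block, left_col, top_row):
    n = block.shape[0]
    pred_h = np.tile(left_col.reshape(n, 1), (1, n))   # propagate left column rightwards
    pred_v = np.tile(top_row.reshape(1, n), (n, 1))    # propagate top row downwards
    candidates = {"horizontal": pred_h, "vertical": pred_v}
    mode, pred = min(candidates.items(),
                     key=lambda kv: float(np.sum((block - kv[1]) ** 2)))
    return mode, block - pred                          # signalled mode and residual

rng = np.random.default_rng(1)
top = np.arange(8, dtype=float)                        # neighbouring row above the block
block = np.tile(top, (8, 1)) + rng.normal(0, 0.1, (8, 8))
mode, residual = intra_predict(block, left_col=block[:, 0], top_row=top)
print(mode, float(np.abs(residual).mean()))            # vertical mode, small residual
```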

Real-Time Video Compression

Author: Raymond Westwater
Publisher: Springer
ISBN: 0585323135
Category: Technology & Engineering
Languages: en
Pages: 164

Book Description
Real-Time Video Compression: Techniques and Algorithms introduces the XYZ video compression technique, which operates in three dimensions, eliminating the overhead of motion estimation. First, the video compression standards MPEG and H.261/H.263 are described. They both use asymmetric compression algorithms based on motion estimation, so their encoders are much more complex than their decoders. The XYZ technique uses a symmetric algorithm based on the Three-Dimensional Discrete Cosine Transform (3D-DCT). The 3D-DCT was originally suggested for compression about twenty years ago; however, at that time the computational complexity of the algorithm was too high, it required a large buffer memory, and it was not as effective as motion estimation. We have resurrected the 3D-DCT-based video compression algorithm by developing several enhancements to the original algorithm. These enhancements make the algorithm feasible for real-time video compression in applications such as video-on-demand, interactive multimedia, and videoconferencing. The results presented in this book suggest that the XYZ video compression technique is not only a fast algorithm, but also provides superior compression ratios and video quality compared to existing standard techniques such as MPEG and H.261/H.263. The elegance of the XYZ technique lies in its simplicity, which leads to inexpensive VLSI implementation of any XYZ codec. Real-Time Video Compression: Techniques and Algorithms can be used as a text for graduate students and researchers working in the area of real-time video compression. In addition, the book serves as an essential reference for professionals in the field.
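
The core of the XYZ approach described above, a DCT applied along the two spatial axes and the temporal axis so that no motion estimation is needed, can be sketched with SciPy's multidimensional DCT (an assumed illustration; the codec's actual adaptive quantization and entropy coding stages are not reproduced):

```python
# 3D-DCT sketch: transform an 8x8x8 spatio-temporal cube, quantize coarsely,
# and reconstruct. Temporal redundancy is captured by the transform itself,
# without motion estimation. Illustrative only, not the XYZ codec.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
cube = rng.normal(0, 1, (8, 8, 8))                    # 8 frames of an 8x8 block
cube += np.linspace(0, 10, 8).reshape(1, 1, 8)        # slow change over time

coeffs = dctn(cube, type=2, norm="ortho")             # DCT over x, y and time
quantized = np.round(coeffs / 4)                      # coarse uniform quantization
reconstructed = idctn(quantized * 4, type=2, norm="ortho")

print("non-zero coefficients:", int(np.count_nonzero(quantized)), "/", quantized.size)
print("mean abs error:", float(np.abs(cube - reconstructed).mean()))
```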

Document and Image Compression

Author: Mauro Barni
Publisher: CRC Press
ISBN: 1420018833
Category: Technology & Engineering
Languages: en
Pages: 456

Book Description
Although it's true that image compression research is a mature field, continued improvements in computing power and image representation tools keep the field spry. Faster processors enable previously intractable compression algorithms and schemes, and certainly the demand for highly portable high-quality images will not abate. Document and Image Compression highlights the current state of the field along with the most probable and promising future research directions for image coding. Organized into three broad sections, the book examines the currently available techniques, future directions, and techniques for specific classes of images. It begins with an introduction to multiresolution image representation, advanced coding and modeling techniques, and the basics of perceptual image coding. This leads to discussions of the JPEG 2000 and JPEG-LS standards, lossless coding, and fractal image compression. New directions are highlighted that involve image coding and representation paradigms beyond the wavelet-based framework, the use of redundant dictionaries, the distributed source coding paradigm, and novel data-hiding techniques. The book concludes with techniques developed for classes of images where the general-purpose algorithms fail, such as for binary images and shapes, compound documents, remote sensing images, medical images, and VLSI layout image data. Contributed by international experts, Document and Image Compression gathers the latest and most important developments in image coding into a single, convenient, and authoritative source.