

Acharya T. - John Wiley & Sons, 2000. - 292 p.
ISBN 0-471-48422-9

1.6.4 Coding Complexity
The coding complexity of a compression algorithm is often treated as a performance measure, since the computational requirement of implementing the codec is an important criterion. Computational requirements are usually measured in terms of the number of arithmetic operations and the memory requirement. The number of arithmetic operations is commonly expressed in MOPS (millions of operations per second), although the compression literature often uses MIPS (millions of instructions per second) to measure performance on a specific computing engine's architecture. Implementation of compression schemes on special-purpose DSP (digital signal processor) architectures is especially common in communication systems. In portable systems, coding complexity is an important criterion from the perspective of low-power hardware implementation.
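To illustrate how such operation counts arise, the sketch below estimates the multiply-accumulate rate of a naive separable 8x8 block transform applied to video. The video resolution, frame rate, and transform are illustrative assumptions, not figures from the text:

```python
# Rough operation-count estimate for the decorrelation stage alone,
# assuming a naive separable (row-column) 8x8 transform: each 1-D
# 8-point transform needs 8*8 multiply-accumulates, and each block
# needs 16 such transforms (8 rows + 8 columns).
N = 8                               # block size (assumed)
width, height, fps = 352, 288, 30   # CIF video at 30 frames/s (assumed)

macs_per_block = 2 * N * N * N      # 16 transforms * 64 MACs = 1024
blocks_per_frame = (width * height) // (N * N)
mops = macs_per_block * blocks_per_frame * fps / 1e6

print(round(mops, 1))  # roughly 48.7 MOPS for this stage alone
```

Fast transform algorithms and fixed-point DSP implementations reduce this count substantially; the point is only that operation rates at video resolutions quickly reach tens of MOPS.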
The general model of a still image compression framework can be described by the block diagram shown in Figure 1.3.
Statistical analysis of a typical image indicates strong correlation among neighboring pixels, which causes redundancy of information in the image. Much of this redundancy can be removed by decorrelating the image with some form of preprocessing, in order to achieve compression. In general, still image compression techniques rely on two fundamental redundancy reduction principles: spatial redundancy reduction and statistical redundancy reduction. Spatial redundancy, the similarity of neighboring pixels in an image, is reduced by applying decorrelation techniques such as predictive coding, transform coding, and subband coding. Statistical redundancy reduction is popularly known as entropy encoding. Entropy encoding further reduces the redundancy in the decorrelated data by using variable-length coding techniques such as Huffman coding and arithmetic coding. These techniques allocate bits to codewords so that more probable symbols are represented with fewer bits than less probable symbols, which helps achieve compression.
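The variable-length allocation described above can be sketched with a minimal Huffman coder (the example source and helper function are ours, not from the text). Frequent symbols end up with short codewords, rare symbols with long ones:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table from a sequence of symbols.

    More probable symbols receive shorter codewords, illustrating
    statistical redundancy reduction via variable-length coding.
    """
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least probable subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# A skewed source: 'a' is far more probable than 'c' or 'd'.
data = "aaaaaaabbbcd"
codes = huffman_codes(data)
encoded_bits = sum(len(codes[s]) for s in data)
print(codes, encoded_bits)
```

For this 12-symbol source, the Huffman code spends 19 bits where a fixed-length 2-bit code would spend 24, precisely because the frequent symbol 'a' gets a 1-bit codeword.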
The decorrelation or preprocessing block in Figure 1.3 reduces the spatial redundancy caused by the strong correlation among neighboring pixels. In lossless coding mode, the decorrelated image is processed directly by the entropy encoder, which encodes the decorrelated pixels using a variable-length coding technique. In lossy compression mode, the decorrelated image is subjected to further preprocessing in order to mask or discard irrelevant information, depending on the application of the image and its reconstructed-quality requirements. This masking process is popularly called quantization. The decorrelated and quantized pixels then go through entropy encoding, which represents them compactly with variable-length codes to produce the compressed image.
Fig. 1.3 A general image compression framework.
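The lossy branch of Figure 1.3 can be sketched with previous-pixel predictive coding (DPCM) followed by uniform quantization of the residuals. This is a minimal illustration under assumptions of our own choosing (the predictor, step size, and sample row are not from the text), and the entropy coding stage that would follow is omitted:

```python
def dpcm_quantize(row, step):
    """Predictive decorrelation (previous-pixel predictor) followed by
    uniform quantization of the prediction residuals.

    Returns the quantizer indices (what the entropy coder would see)
    and the decoder-side reconstruction of each pixel.
    """
    recon_prev = 0.0
    indices, recon = [], []
    for x in row:
        residual = x - recon_prev           # decorrelation step
        q = int(round(residual / step))     # quantization (the lossy step)
        recon_prev = recon_prev + q * step  # closed-loop reconstruction
        indices.append(q)
        recon.append(recon_prev)
    return indices, recon

row = [100, 102, 104, 103, 101, 120, 119]  # smooth scanline, assumed
idx, rec = dpcm_quantize(row, step=4)
print(idx)
print(rec)
```

Because neighboring pixels are similar, the residuals (and hence the quantizer indices after the first) cluster near zero, which is exactly the skewed distribution the entropy encoder then exploits; the closed prediction loop also keeps the per-pixel reconstruction error within half a quantizer step.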
Multimedia data compression has become an integral part of today's digital communication systems: digital telephony, facsimile, digital cellular communication, personal communication systems, videoconferencing, the Internet, broadcasting, and so on. Other applications include voice messaging systems, image archival systems, CD-quality audio, digital libraries, DVD, movie and video distribution, and the graphics and film industries, to name a few. New results and research concepts are emerging every day throughout the world, and the number of applications will continue to grow. As a result, it is necessary to define standards that specify common data compression systems so that they are fully interoperable across different systems and manufacturing platforms. We mention here some of the data compression standards for various types of multimedia data: image, video, speech, audio, text, etc.
1.8.1 Still Image Coding Standard
The two main international bodies in the image compression area are the International Organization for Standardization (ISO) and the International Telecommunication Union, Telecommunication Standardization Sector (ITU-T), formerly known as CCITT. ISO deals with information-processing issues such as image storage and retrieval, whereas ITU-T deals with information transmission. JPEG (Joint Photographic Experts Group) is the standard jointly developed by ISO and ITU-T in 1992 for still images, covering both continuous-tone grayscale and color images. JPEG is officially referred to as ISO/IEC IS (International Standard) 10918-1: Digital Compression and Coding of Continuous-tone Still Images, and also as ITU-T Recommendation T.81. There is a common misconception that JPEG is a single algorithm for still image compression. In fact, the JPEG standard defines four modes of operation [13]: sequential DCT-based mode, sequential lossless mode, progressive DCT-based mode, and hierarchical mode. The widely used algorithm for image compression in the sequential DCT-based mode of the standard is called baseline JPEG. The current JPEG system is targeted at compressing still images at bit-rates of 0.25-2 bits per pixel. Working Group 1 in ISO is engaged in defining the next-generation still-picture coding standard, JPEG2000 [17], to achieve lower bit-rates at much higher quality, with many additional desirable features that address newer challenges the current JPEG does not. The core coding system of JPEG2000 (Part 1), its extensions (Part 2), Motion JPEG2000 (Part 3), their conformance testing (Part 4), and some of the file formats have already been finalized as international standards. As of this writing, the working group is engaged in defining new parts of the standard.