Morelos-Zaragoza R.H. The Art of Error Correcting Coding. - Wiley, 2002. - 232 p.
ISBN 0-471-49581-6

(c) Evaluate Lj (7.34) based on v̂_ML and v^(j).
(d) Update the soft output values of the LRPs in the positions of v̂_ML ⊕ v^(j) with Lj.
3. For K + 1 ≤ j ≤ N, choose the smallest soft output value associated with each LRP j.
The performance of SO-OSD is the same as max-log-MAP decoding for many binary linear block codes of length up to 128 and for high-rate codes [FL5]. In addition, scaling down the extrinsic values (7.34) improves the performance by a few tenths of a dB.
The Art of Error Correcting Coding Robert H. Morelos-Zaragoza Copyright © 2002 John Wiley & Sons Ltd ISBNs: 0-471-49581-6 (Hardback); 0-470-84782-4 (Electronic)
8 Iteratively decodable codes
Iterative decoding may be defined as a technique that employs a soft-output decoding algorithm, iterated several times, to improve the error performance of a coding scheme, with the aim of approaching true maximum-likelihood decoding (MLD) at a lower complexity. When the underlying error correcting code is well designed, increasing the number of iterations results in an improvement of the error performance.
Iterative decoding techniques date back to 1954 with the work of Elias [Eli1] on iterated codes. Later, in the 1960s, Gallager [Gal] and Massey [Mas1] made important contributions. Iterative decoding was then referred to as probabilistic decoding. The main concept was then, as it is today, to maximize the a posteriori probability of a symbol being sent, given a noisy version of the coded sequence.
In this chapter, code constructions that can be iteratively decoded are presented. Elsewhere in the literature, these coding schemes have been named turbo-like codes. In terms of the application of iterative decoding algorithms, and for the purpose of this chapter, ECC schemes can be broadly classified into two classes:
Product codes
Examples of codes in this class include parallel concatenated codes, or turbo codes, and serially concatenated codes. Members of this class are also block product codes, presented in Section 6.2.3.
Low-density parity-check (LDPC) codes
These are linear codes with the property that their parity-check matrix has a small ratio of number of nonzero elements to the total number of elements.
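For a (j,k)-regular LDPC code the small-density property is easy to quantify: every column of the parity-check matrix has weight j, so the ratio of nonzero entries to total entries is j divided by the number of rows. A tiny illustrative helper (the function name and parameters are mine, not from the book):

```python
def ldpc_density(n, col_weight, rate):
    """Density of the parity-check matrix of a regular LDPC code:
    nonzero entries divided by total entries."""
    m = int(n * (1 - rate))        # number of parity checks (rows of H)
    ones = n * col_weight          # total nonzero entries in H
    return ones / (m * n)
```

For example, a rate-1/2 code of length 10000 with column weight 3 has density 3/5000 = 0.0006, i.e. H is overwhelmingly sparse.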
In both of the above classes of codes, the component codes can be either convolutional or block codes, with systematic or nonsystematic encoding, or any combination thereof. To provide a motivation for the study of iterative decoding techniques, the fundamental structure of turbo codes is discussed next.
Turbo codes
Over the past eight years, there has been an enormous amount of research effort dedicated to the analysis of iterative decoding algorithms and to the construction of iteratively decodable codes, or "turbo-like codes," that approach the Shannon limit.1 Turbo codes were introduced in 1993 [BGT]. The results reported there sent a shock wave throughout the research community.
1 The Shannon limit here refers to the capacity of a discrete-input continuous-output channel.
With binary transmission over an AWGN channel, a rate-1/2 coding scheme, based on a combination of two rate-1/2 16-state RSC codes connected through an interleaver of size 256 x 256, achieved a BER of 10^-5 at a signal-to-noise ratio per bit, or Eb/N0, of 0.7 dB, closer than ever before to the Shannon limit (0.2 dB). See Figure 78.
Figure 78 Performance of a rate-1/2 parallel concatenated (turbo) code with memory-4 rate-1/2 RSC codes, g0 = 21 and g1 = 37. Block interleaver of size 256 x 256 = 65536. From [BG2]. Reproduced by permission of IEEE. (©1996 IEEE)
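The 0.2 dB Shannon limit quoted above can be checked numerically. The sketch below (an illustrative computation, not from the book; function names are mine) estimates the capacity of the binary-input AWGN channel by Monte Carlo simulation and bisects on the noise level to find the smallest Eb/N0 at which a rate-1/2 code could in principle operate reliably:

```python
import numpy as np

def biawgn_capacity(sigma, n=200_000, seed=0):
    """Monte Carlo estimate of binary-input AWGN capacity, in bits per use.
    Transmits x = +1; capacity = 1 - E[log2(1 + exp(-LLR))]."""
    rng = np.random.default_rng(seed)
    y = 1.0 + sigma * rng.standard_normal(n)
    llr = 2.0 * y / sigma**2
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

def shannon_limit_db(rate=0.5):
    """Bisect on the noise std dev sigma until capacity equals the code rate,
    then convert to Eb/N0 in dB (Eb/N0 = 1 / (2 * rate * sigma^2))."""
    lo, hi = 0.5, 2.0                      # capacity is monotone decreasing in sigma
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if biawgn_capacity(mid) > rate:
            lo = mid
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    return 10.0 * np.log10(1.0 / (2.0 * rate * sigma**2))
```

For rate = 1/2 this lands near 0.19 dB, which is the limit the text refers to; the unconstrained (Gaussian-input) limit for rate 1/2 is 0 dB.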
The basic elements in the turbo coding scheme proposed in [BGT] are the following:
Coding structure
A product code structure (referred to in the original paper [BGT] as a parallel concatenated code), with constituent recursive systematic convolutional encoders.
SISO decoding
The use of soft-input soft-output (SISO) maximum a posteriori (MAP) decoders for the component codes, in order to generate log-likelihood ratios.
Iterative decoding
The use of feedback of part of the symbol reliabilities, in the form of extrinsic information, from an outer (column) decoder to an inner (row) decoder, and vice versa.
Random permutation
A long random interleaver applied between the two encoders. Its main function is to ensure that, at each iteration, the component MAP decoders get independent estimates on the information symbols.
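The 256 x 256 interleaver of Figure 78 is a block (row-column) interleaver: symbols are written row-wise into an array and read out column-wise, so that symbols adjacent at the input are widely separated at the output. A minimal sketch (function names are mine; a random interleaver would replace this with a stored pseudo-random permutation):

```python
import numpy as np

def block_interleave(bits, rows, cols):
    """Write row-wise into a rows x cols array, read out column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.ravel()

def block_deinterleave(bits, rows, cols):
    """Invert block_interleave: write column-wise, read row-wise."""
    return np.asarray(bits).reshape(cols, rows).T.ravel()
```

Applying the deinterleaver to the interleaver output recovers the original sequence, which is what the iterative decoder relies on when passing extrinsic values between the two component decoders.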
These four elements have been extensively studied over the years. [Vuc, ISTC1, ISTC2, JSAC1, JSAC2, HeW] are a good sample of the research efforts.
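The exchange of extrinsic information between row and column decoders can be illustrated with a toy product code in which every row and every column of a 3 x 3 array must satisfy a single parity check. Each component decoder is an exact SISO decoder for a single parity-check code (the "boxplus" rule); all names below are mine and the example is a sketch, not the decoder of [BGT]:

```python
import numpy as np

def spc_extrinsic(llr):
    """Exact extrinsic LLRs for one single parity-check constraint:
    L_e(i) = 2 * atanh( prod_{j != i} tanh(L(j)/2) )."""
    t = np.tanh(np.clip(llr, -30.0, 30.0) / 2.0)
    out = np.empty_like(llr, dtype=float)
    for i in range(llr.shape[0]):
        prod = np.prod(np.delete(t, i))
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999999, 0.999999999))
    return out

def decode_product_spc(chan, n_iters=5):
    """Iterative row/column decoding of an array whose rows and columns
    each satisfy a single parity check. chan holds channel LLRs
    (positive = bit 0 more likely); returns a posteriori LLRs."""
    ext_row = np.zeros_like(chan, dtype=float)
    ext_col = np.zeros_like(chan, dtype=float)
    for _ in range(n_iters):
        for r in range(chan.shape[0]):   # row decoder uses column extrinsics
            ext_row[r, :] = spc_extrinsic(chan[r, :] + ext_col[r, :])
        for c in range(chan.shape[1]):   # column decoder uses row extrinsics
            ext_col[:, c] = spc_extrinsic(chan[:, c] + ext_row[:, c])
    return chan + ext_row + ext_col
```

Starting from the all-zero codeword with one unreliable, wrongly signed channel LLR, the extrinsic feedback from the other row and column positions flips the bad position back within a couple of iterations: each decoder only passes on information the other decoder does not already have, which is the essence of the feedback loop described above.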
8.1 Iterative decoding
The purpose of this section is to introduce basic concepts behind iterative decoding. Consider the a posteriori log-likelihood ratio (LLR)2 of an information symbol, Equation (7.12) on page 134, repeated here for convenience,
L(vi) = log [ Pr{vi = 1 | r} / Pr{vi = 0 | r} ].    (8.1)
Let Ci(0) and Ci(1) denote the sets of modulated coded sequences x = m(v) ∈ m(C) such that the i-th position in v ∈ C is equal to zero and one, respectively. Recall that modulation was a one-to-one and onto mapping from v ∈ {0,1} to x ∈ {+1, -1}, as in Equation (7.2). Then the symbol LLR (8.1) can be written as,
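For a short code, the symbol LLR (8.1) can be computed exactly by brute force: enumerate all codewords, weight each by its likelihood given the received vector, and split the sum according to the value of bit i. The sketch below does this for a (7,4) Hamming code over an AWGN channel (the generator matrix is one common systematic form, the BPSK mapping 0 -> +1, 1 -> -1 and all function names are my assumptions for illustration):

```python
import numpy as np

# Generator matrix of a (7,4) Hamming code in systematic form [I | P].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])

def all_codewords(G):
    """Enumerate all 2^k codewords of the code generated by G."""
    k = G.shape[0]
    msgs = (np.arange(2**k)[:, None] >> np.arange(k)) & 1
    return msgs @ G % 2

def symbol_llrs(r, codebook, sigma2):
    """Exact a posteriori symbol LLRs log(Pr{v_i=1|r}/Pr{v_i=0|r})
    for BPSK (0 -> +1, 1 -> -1) over AWGN with noise variance sigma2."""
    x = 1.0 - 2.0 * codebook                       # modulated codewords
    logp = -np.sum((r - x)**2, axis=1) / (2.0 * sigma2)
    p = np.exp(logp - logp.max())                  # likelihoods, common factor dropped
    llr = np.empty(codebook.shape[1])
    for i in range(codebook.shape[1]):
        num = p[codebook[:, i] == 1].sum()         # sequences in C_i(1)
        den = p[codebook[:, i] == 0].sum()         # sequences in C_i(0)
        llr[i] = np.log(num / den)
    return llr
```

The double sum over Ci(1) and Ci(0) is exactly the split performed in the loop; for long codes this enumeration is infeasible, which is what motivates the trellis-based and iterative algorithms of this chapter.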