Morelos-Zaragoza R. H. The Art of Error Correcting Coding. Wiley, 2002. 232 p. ISBN 0-471-49581-6.

Consider a binary transmission system, with coded bits in the set {0,1} mapped onto real values {+1,-1}, respectively, as illustrated in Figure 8. In the following, vectors are
n-dimensional and the following notation is used to denote a vector: x = (x_0, x_1, ..., x_{n-1}).
The conditional probability density function (pdf) of the channel output sequence y, given the input sequence x is given by
$$p(y \mid x) = p_n(y - x) = \prod_{i=0}^{n-1} \frac{1}{\sqrt{\pi N_0}} \, e^{-\frac{(y_i - x_i)^2}{N_0}}, \qquad (1.32)$$
where p_n(n) is the pdf of n statistically independent and identically distributed (i.i.d.) noise samples, each of which is Gaussian distributed with mean μ_n = 0 and variance σ_n^2 = N0/2, and N0 is the one-sided power spectral density of the noise. It is easy to show that maximum-likelihood decoding (MLD) of a linear code C over this channel selects a sequence x that minimizes the squared Euclidean distance between the received sequence y and x,
$$D^2(x, y) = \sum_{i=0}^{n-1} (x_i - y_i)^2. \qquad (1.33)$$
See, e.g., [WJ], [Wil] and [BM]. It should be noted that a decoder using Equation (1.33) as a metric is referred to as a soft-decision decoder, independently of whether or not MLD is performed. In Chapter 7, soft-decision decoding methods are considered.
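As a concrete illustration of this decoding rule, the following Python sketch (function names and the Eb/N0 normalization are illustrative assumptions, not taken from the text) maps coded bits onto {+1, -1}, adds Gaussian noise of variance N0/2, and selects the codeword with the smallest squared Euclidean distance (1.33).

```python
import numpy as np

def bpsk(bits):
    """Map coded bits {0,1} onto real values {+1,-1}, as in Figure 8."""
    return 1.0 - 2.0 * np.asarray(bits, dtype=float)

def awgn(x, ebn0_db, rate):
    """Add i.i.d. Gaussian noise of variance N0/2 per dimension (Es = 1, Eb = Es/R assumed)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    n0 = 1.0 / (rate * ebn0)
    return x + np.sqrt(n0 / 2.0) * np.random.randn(*x.shape)

def mld(y, codebook):
    """Soft-decision MLD: return the codeword minimizing Eq. (1.33)."""
    d2 = [np.sum((bpsk(c) - y) ** 2) for c in codebook]
    return codebook[int(np.argmin(d2))]

# Example with the binary (3,1,3) repetition code (codewords 000 and 111)
codebook = [(0, 0, 0), (1, 1, 1)]
received = awgn(bpsk(codebook[1]), ebn0_db=2.0, rate=1.0 / 3.0)
print(mld(received, codebook))
```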
Figure 8 Binary coded transmission system over an AWGN channel.

The probability of a decoding error with MLD, denoted Pe(C), is equal to the probability that a coded sequence x is transmitted and the noise vector n is such that the received sequence y = x + n is closer to a different coded sequence x̂ ∈ C, x̂ ≠ x. For a linear code C, it can be assumed that the all-zero codeword is transmitted. Then Pe(C) can be upper bounded, based on the union bound [Cla] and the weight distribution W(C), as follows:
$$P_e(C) \leq \sum_{w=d_{\min}}^{n} A_w \, Q\!\left(\sqrt{2wR\,\frac{E_b}{N_0}}\right), \qquad (1.34)$$
where R = k/n is the code rate, Eb/N0 is the energy-per-bit to noise ratio (or SNR per bit) and Q(x) is given by (1.2).
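For reference, a short Python sketch (illustrative names, not from the text) can evaluate the bound (1.34) directly from a weight distribution, expressing the Q-function through the complementary error function:

```python
import math

def qfunc(x):
    """Gaussian Q-function: Q(x) = (1/2) erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_pe(weights, n, k, ebn0_db):
    """Union bound (1.34) on Pe(C); weights[w] = A_w for w = 0, ..., n."""
    rate = k / n
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return sum(weights[w] * qfunc(math.sqrt(2.0 * w * rate * ebn0))
               for w in range(1, n + 1) if weights[w])

# Example: (3,1,3) repetition code, A_0 = 1 and A_3 = 1
print(union_bound_pe([1, 0, 0, 1], n=3, k=1, ebn0_db=6.0))
```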
Figure 9 shows the evaluation of expressions for hard-decision decoding (1.30) and soft-decision decoding (1.34) for the binary (3,1,3) code. Hard-decision decoding means using a decoder for the BSC which is fed by the outputs from a binary demodulator. The equivalent BSC has a crossover probability equal to [Pro, WJ]
$$p = Q\!\left(\sqrt{2R\,\frac{E_b}{N_0}}\right).$$
Note that in this particular case, since the code is perfect and contains only two codewords, both expressions are exact, not upper bounds. Figure 9 also serves to illustrate the fact that soft-decision decoding performs better than hard-decision decoding, in the sense of requiring less transmitted power to achieve the same Pe(C). The difference (in dB) between the corresponding SNR per bit is commonly referred to as coding gain.
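The coding gain can be seen numerically with a small sketch (helper names are hypothetical, not from the text): for the (3,1,3) code, soft-decision MLD gives Pe = Q(sqrt(2 Eb/N0)) exactly, while hard-decision (majority) decoding fails whenever the equivalent BSC flips two or three of the bits.

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_soft(ebn0_db):
    """Exact Pe of the (3,1,3) code under soft-decision MLD: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return qfunc(math.sqrt(2.0 * ebn0))

def pe_hard(ebn0_db):
    """Exact Pe under hard-decision (majority) decoding over the equivalent BSC."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    p = qfunc(math.sqrt(2.0 * ebn0 / 3.0))    # crossover probability, R = 1/3
    return 3.0 * p ** 2 * (1.0 - p) + p ** 3  # two or three bits in error

for snr_db in (2.0, 4.0, 6.0):
    print(snr_db, pe_soft(snr_db), pe_hard(snr_db))
```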
In [FLR], the authors show that for systematic binary linear codes with binary transmission over an AWGN channel, the probability of a bit error, denoted Pb(C), has the following upper bound:
$$P_b(C) \leq \sum_{w=d_{\min}}^{n} \frac{w}{n} \, A_w \, Q\!\left(\sqrt{2wR\,\frac{E_b}{N_0}}\right). \qquad (1.35)$$
Interestingly, besides the fact that the above bound holds only for systematic encoding, the results in [FLR] show that systematic encoding minimizes the probability of a bit error. This means that systematic encoding is not only desirable, but actually optimal in the above sense.
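Evaluating (1.35) only requires adding the w/n weighting to each term of the Pe bound; a minimal sketch under the same assumptions as the earlier one (illustrative names, not from the text) follows.

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_pb(weights, n, k, ebn0_db):
    """Union bound (1.35) on Pb(C) for systematic encoding; weights[w] = A_w."""
    rate = k / n
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return sum((w / n) * weights[w] * qfunc(math.sqrt(2.0 * w * rate * ebn0))
               for w in range(1, n + 1) if weights[w])
```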
Example 10 Consider a linear binary (6,3,3) code with generator and parity-check matrices
$$G = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 \end{pmatrix}, \qquad H = \begin{pmatrix} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{pmatrix},$$

respectively. The weight distribution of this code is W(C) = {1, 0, 0, 4, 3, 0, 0}, which can be verified by direct computation of all the codewords v = (u, v_p):
u     v_p
000   000
001   101
010   011
011   110
100   110
101   011
110   101
111   000

Figure 9 Probability of a decoding error for hard-decision decoding (Pe(3,1,3)-HDD) and soft-decision decoding (Pe(3,1,3)-SDD) of a binary (3,1,3) code. Binary transmission over an AWGN channel.
In this particular case, MLD can be performed by simply computing the squared Euclidean distance, Equation (1.33), between the received sequence and each of the eight candidate codewords. The decoded codeword is the one with the smallest distance. In Chapter 7, efficient methods for MLD and near-MLD of linear block codes are presented. Figure 10 shows simulations and union bounds for hard-decision and soft-decision MLD, with binary transmission over an AWGN channel.
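Because the example is small enough for brute force, the following Python sketch (variable names and the Eb/N0 normalization are illustrative assumptions, not from the text) enumerates the eight codewords from G, checks the weight distribution quoted above, and decodes by exhaustive minimization of (1.33).

```python
import itertools
import numpy as np

# Generator matrix of the (6,3,3) code from Example 10
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

# Enumerate all 2^3 codewords v = u*G (mod 2) and check the weight distribution
codebook = np.array([np.dot(u, G) % 2 for u in itertools.product((0, 1), repeat=3)])
weights = np.bincount(codebook.sum(axis=1), minlength=7)
print(weights)   # -> [1 0 0 4 3 0 0], i.e. W(C) = {1,0,0,4,3,0,0}

def mld_decode(y):
    """Brute-force soft-decision MLD: pick the codeword closest to y, Eq. (1.33)."""
    symbols = 1.0 - 2.0 * codebook            # map bits {0,1} onto {+1,-1}
    d2 = np.sum((symbols - y) ** 2, axis=1)   # squared Euclidean distances
    return codebook[np.argmin(d2)]

# Example: transmit the all-zero codeword at Eb/N0 = 3 dB (Es = 1, Eb = Es/R assumed)
ebn0, rate = 10.0 ** (3.0 / 10.0), 3.0 / 6.0
n0 = 1.0 / (rate * ebn0)
y = 1.0 - 2.0 * codebook[0] + np.sqrt(n0 / 2.0) * np.random.randn(6)
print(mld_decode(y))
```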
The flat Rayleigh fading channel model
Another important channel model is that of flat Rayleigh fading. Fading occurs in wireless communication systems in the form of a time-varying distortion of the transmitted signal. In this book, we consider the case of flat Rayleigh fading. The term “flat” refers to the fact that the channel is not frequency selective, so that its transfer function in the frequency domain is constant [BM, WJ, Pro].
As a result, a (component-wise) multiplicative distortion is present in the channel, as shown in the model depicted in Figure 11, where α is a vector of n i.i.d. random components α_i, 0 ≤ i < n, each having a Rayleigh pdf,