`Applications of Communications Theory
`Series Editor: R. W. Lucky, Bell Laboratories
`
`INTRODUCTION TO COMMUNICATION SCIENCE AND SYSTEMS
`John R. Pierce and Edward C. Posner
`
`OPTICAL FIBER TRANSMISSION SYSTEMS
`Stewart D. Personick
`
`TELECOMMUNICATIONS SWITCHING
`J. Gordon Pearce
`
ERROR-CORRECTION CODING FOR DIGITAL COMMUNICATIONS
`
`George C. Clark, Jr., and J. Bibb Cain
`
Melbourne, Florida
`
A Continuation Order Plan is available for this series. A continuation order will bring
delivery of each new volume immediately upon publication. Volumes are billed only upon
actual shipment. For further information please contact the publisher.
`
`
`
`
Error-Correction Coding for Digital Communications
`
`George C. Clark, Jr.
`and
`
`J. Bibb Cain
`
`Harris Corporation
Melbourne, Florida
`
PLENUM PRESS • NEW YORK AND LONDON
`
`
`
`
Library of Congress Cataloging in Publication Data

Clark, George C. (George Cyril), 1938-
Error-correction coding for digital communications.

Bibliography: p.
Includes index.
1. Data transmission systems. 2. Error-correcting codes (Information theory). I. Cain, J. Bibb. II. Title.
TK5102.5.C52    621.38'0413    81-1630
ISBN 0-306-40615-2    AACR2
`
`
`
`Preface
`
Error-correction c... new communicatio... increase the energy... also providing in... problems. Among... caused by filterin... certain frequency... coding provided b... numerous articles hav... required to evalua... countered in practi... reports.

This book is a... for the design engin... and for the comm... equipment into a s... graduate text for... The book use... theorem/p... ever possible heuris... by drawing analo... mathematical rig... standing. Coding is... impossible task to... at all. The assumpti...
© 1981 Plenum Press, New York
A Division of Plenum Publishing Corporation
233 Spring Street, New York, N.Y. 10013

All rights reserved

No part of this book may be reproduced, stored in a retrieval system, or transmitted,
in any form or by any means, electronic, mechanical, photocopying, microfilming,
recording, or otherwise, without written permission from the Publisher

Printed in the United States of America
`
`’
`
`_
`
`iv
`
`
`
Chap. 1 • Fundamental Concepts of Coding
differ in two positions, etc. As in the simple example given previously, there ... patterns that are left over after assigning all ... inequality).
`
there are 2^n possible sequences. Each column of the decoding table contains N_p of these sequences so that the number of code words, N_C, must obey the inequality

$$N_C \le 2^n \Big/ \left[1 + n + \binom{n}{2} + \cdots + \binom{n}{t}\right] \qquad (1\text{-}3)$$
This is called a Hamming bound or "sphere-packing" bound. The equality
in this bound can be achieved only for so-called perfect codes. These are
codes which can correct all patterns of t or fewer errors and no others.
There are only a small number of perfect codes which have been found and
consequently the equality in (1-3) is almost never achieved.
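The bound in (1-3) is straightforward to evaluate numerically. As an illustration (ours, not the book's), the short Python sketch below computes the right-hand side for a binary code of length n correcting t errors; the (7, 4) Hamming code and the (23, 12) Golay code are the two nontrivial binary perfect codes that meet it with equality.

```python
from math import comb

def hamming_bound(n: int, t: int) -> int:
    """Right-hand side of (1-3): an upper bound on the number of
    code words N_C for a binary code of length n that corrects
    all patterns of t or fewer errors."""
    # Number of sequences per column of the decoding table:
    # the code word plus every error pattern of weight <= t.
    sphere = sum(comb(n, i) for i in range(t + 1))
    return (2 ** n) // sphere

print(hamming_bound(7, 1))   # 16 = 2**4: met by the (7, 4) Hamming code
print(hamming_bound(23, 3))  # 4096 = 2**12: met by the (23, 12) Golay code
```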
At the encoder we envision a process by which a k-symbol information
sequence is mapped into an n-symbol code sequence. Although the termi-
nology is usually restricted to the so-called linear codes (to be discussed),
we shall refer to any such mapping as an (n, k) code. Since the k-symbol
sequence can take on 2^k distinct values, inequality (1-3) can be written

$$2^k \le 2^n \Big/ \left[1 + n + \binom{n}{2} + \cdots + \binom{n}{t}\right] \qquad (1\text{-}4)$$
`
A measure of the efficiency implied by a particular code choice is given by
the ratio

$$R = k/n$$

which is called the code rate. The fraction of the transmitted digits that are redundant is 1 − R.
The mapping implied by the encoder can be described by a look-up
table. For example, the four-word code discussed previously is described in
Table 1-2. The portion of the code sequence contained between the dashed
lines is identical to the input sequence. Thus, each code sequence is easily
and uniquely related to the input. Not all block codes exhibit this property.
Those which do are referred to as systematic codes. For systematic codes,
the concept of redundant digits becomes very clear and in Table 1-2 consists
of the digits in positions 1, 4, and 5. Conversely, codes which do not exhibit
this property are called nonsystematic codes.
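A look-up-table encoder for a small systematic code can be sketched directly. The code below follows the structure described for Table 1-2 (information digits in positions 2 and 3, redundant digits in positions 1, 4, and 5), but the parity values themselves are invented for illustration, since Table 1-2 is not reproduced here.

```python
# Hypothetical four-word systematic (5, 2) code with the structure
# described in the text: information digits in positions 2-3,
# redundant digits in positions 1, 4, and 5. The parity values are
# made up for illustration; they are not the entries of Table 1-2.
ENCODE_TABLE = {
    (0, 0): (0, 0, 0, 0, 0),
    (0, 1): (1, 0, 1, 0, 1),
    (1, 0): (1, 1, 0, 1, 0),
    (1, 1): (0, 1, 1, 1, 1),
}

def table_encode(info):
    return ENCODE_TABLE[tuple(info)]

def read_information(codeword):
    # A systematic code lets the decoder read the information
    # digits off directly -- here positions 2 and 3 (1-indexed).
    return (codeword[1], codeword[2])

assert read_information(table_encode((1, 0))) == (1, 0)
```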
`
Sec. 1.1 • Basic ...
`
Many go... permit the co... remarkable imp... to generate an... relatively strai... of length 40 th... ing up to fou... reveals that th... than 10^-... If t... of increasing t... going to a so... averaging. In e... Both options... tives.

Before pr... practical impo... for many years... scheme for cor... (in this case ... made arbitrari... Unfortunately... procedures enc... ratio r/n at the... (or equivalently... the relative nu... vanishingly sma... was given by J... construct a clas... scribed above)... the authors' kn... real communica...
`
`
Sec. 6.1 • Binary Rate-1/2 Convolutional Codes
`
as T1(x) = I(x)g1(x), where the polynomial multiplication is carried
out in GF(2).
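Multiplication in GF(2) is ordinary polynomial multiplication with the coefficient additions done modulo 2, so there are no carries. A minimal sketch, using our own convention (not the book's) of packing each polynomial into an integer whose bit i holds the coefficient of x^i:

```python
def gf2_mul(a: int, b: int) -> int:
    """Multiply two GF(2) polynomials; bit i of each integer is
    the coefficient of x**i (a carry-less multiply)."""
    result = 0
    while b:
        if b & 1:
            result ^= a   # coefficient addition is XOR (modulo 2)
        a <<= 1           # multiply a by x
        b >>= 1
    return result

# (1 + x)(1 + x + x**2) = 1 + x**3 over GF(2): the x and x**2 terms cancel.
assert gf2_mul(0b11, 0b111) == 0b1001
```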
The code used in this example is a nonsystematic code because neither
T1(x) nor T2(x) is identically equal to the information sequence I(x).
Conversely, a systematic code would have either g1(x) = 1 or g2(x) = 1
so that the information sequence would appear directly in the output
sequence. For reasons which will subsequently become clear, nonsystematic
codes are usually preferred in systems which utilize Viterbi decoding.
As one might suspect from the name, the output of the encoder can
also be thought of as the convolution of the impulse response of the encoder
with the input sequence. For example, if a single one followed by all zeros
[I(x) = 1] is shifted into the encoder of Fig. 6-1, the resultant output
sequence is 1 1 0 1 1 1 0 0 0 0 .... The reader may verify that the response to
any arbitrary input sequence may be produced by adding modulo-2 appro-
priately shifted versions of this sequence. This suggests that a generator
matrix, G, can be constructed as
`
`110111000000
`001101ll000000...
`
`G: 0000110111000000...
`
`(6-1)
`
In contrast with block codes, convolutional codes do not need to have
the information sequence in blocked form, i.e., the code words may have
infinite length. Thus, the generator matrix as shown in (6-1) is sometimes
said to be a semi-infinite matrix. The code word corresponding to any input
sequence X may be found by multiplying the input vector by G, i.e.,

$$Y^T = X^T G \qquad (6\text{-}2)$$
`
`
`
Thus, the output sequence corresponding to X^T = 1 1 1 0 0 0 ... is obtained
by adding rows 1, 2, and 3 of G to give Y^T = 1 1 1 0 0 1 1 0 1 1 0 0 0 0 ....
Another convenient method of describing the relationship between
input and output sequences is a code tree such as shown in Fig. 6-2. Each
branch of the tree represents a single input symbol. The convention used
is that an input 0 corresponds to the upper branch and an input 1
corresponds to the lower branch. Thus, any input sequence traces out a
particular path through the tree. Specifically, a 1 0 1 1 0 input sequence
causes a 11 01 00 10 10 output sequence. This path is indicated in Fig.
6-2. Of course, as the input sequence grows in length, the number of possible
paths grows exponentially, thus limiting the role of such a tree diagram
to a conceptual one.
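The path traced through the tree can also be reproduced by simulating the encoder as a two-stage shift register. In the sketch below the tap connections (first output 1 + x², second output 1 + x + x²) are our inference from the output sequences quoted in the text, since Fig. 6-1 itself is not reproduced here:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder as a two-stage shift register.
    Taps (assumed): first output 1 + x**2, second output 1 + x + x**2."""
    s1 = s2 = 0                  # the two previous input symbols
    out = []
    for b in bits:
        out.append(b ^ s2)       # first output digit of the pair
        out.append(b ^ s1 ^ s2)  # second output digit of the pair
        s1, s2 = b, s1           # shift the register
    return out

# The 1 0 1 1 0 input path through the code tree:
print(conv_encode([1, 0, 1, 1, 0]))  # [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
```

With these taps the impulse response 1 1 0 1 1 1 0 0 0 0 quoted earlier also comes out correctly, which is the consistency check behind the assumed connections.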
`Fortunately, the exponential growth of the code tree can be “tamed”