INFORMATION AND CONTROL 37, 150-177 (1978)

Optimal Encodings of Linear Block Codes for Unequal Error Protection

LARRY A. DUNNING
Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27607

AND

W. E. ROBBINS
Department of Computer Science, North Carolina State University, Raleigh, North Carolina 27607

It is possible for a linear block code to provide more protection for selected message positions than is guaranteed by the minimum distance of the code. The protection provided a message position can be measured by associating a number with that position called its separation. The separation of a message position measures the protection provided to that position in a manner analogous to that in which the minimum distance of a code measures the protection provided the entire message. This paper proves that any fixed linear block code has an encoding which is optimal with respect to the error protection provided the individual message positions. More precisely, among those encodings of the code for which the separations associated with the message positions are arranged in nondecreasing order, there is at least one which simultaneously maximizes all the separations associated with the message positions. A procedure is given which may be used to construct optimal encodings for linear codes of small dimension. When the Hamming metric is employed, the procedure builds a generator matrix which is as sparse as possible for the given code. At each iteration the procedure adds a row to a partially constructed generator matrix. A code word of minimum weight is chosen for this purpose, subject to the restriction that the rows of the generator matrix must be linearly independent. A more general result is that any generator matrix which is as sparse as possible induces an optimal encoding of its row space. A similar result holds when the Lee metric is used to model a channel. Theorems dealing with cyclic codes and product codes are developed.
Under suitable restrictions, an optimal generator matrix for a cyclic code may be formed by concatenating the generator matrices of the minimal ideals which are contained in it. When the Hamming metric is employed, an optimal generator matrix for a product code may be obtained by taking the Kronecker product of optimal generator matrices for the component codes.

0019-9958/78/0372-0150$02.00/0
Copyright © 1978 by Academic Press, Inc.
All rights of reproduction in any form reserved.
IPR2018-1556
HTC EX1053, Page 1
1. INTRODUCTION AND PRELIMINARIES

We shall restrict our attention to (n, k) block codes. Let F ≜ GF(q) be any finite field where q is a prime power. Throughout this paper, an (n, k) code will be a subset of F^n of cardinality q^k where k ≤ n. By an encoding of a code, C, we mean any bijection η: F^k → C. If C is a vector subspace of F^n, then it is said to be a linear code. In this case we will say that a k × n matrix with entries from F is a generator matrix for C if its rows form a basis for C. There is a natural one-to-one correspondence between the linear encodings and the generator matrices of a linear code. Every generator matrix, G, induces a linear encoding, L, defined by the formula

L(m) ≜ mG for all m ∈ F^k,

where the message vector, m, and all vectors throughout the paper are identified with row matrices.

For most applications of block codes, it is sufficient to study codes without reference to their encodings. However, this has not always been the case. The construction of codes in which some message positions might be provided protection against a greater number of errors than others has been considered by several authors (Masnick and Wolf, 1967; Gore and Kilgus, 1971; Kilgus and Gore, 1972a; Mandelbaum, 1972). Masnick and Wolf (1967) proved that cyclic codes in systematic form provide equal error protection for every information digit. A nonsystematic cyclic code which provides one "information digit" protection against errors, beyond that guaranteed by the minimum distance of the code, was exhibited by Gore and Kilgus (1971). Thus, it became apparent that the protection against error afforded individual message positions depends not only on the code used, but also upon the encoding used. A direct means of establishing this result is to inspect the mappings η₁, η₂: GF(2)² → GF(2)⁴ given in Table I. η₁ and η₂ are two different encodings for the same code.
Given a received word containing at most a single error, one can determine whether the code word originally transmitted was of the form η₁(m₁, 0) or of the form η₁(m₁, 1). Thus, the encoding, η₁, allows determination of the second message bit, m₂, despite any single error. However, consideration of the received word 1000 shows that the encoding η₂ fails to protect either message bit against all single errors.

TABLE I

            η₁(m₁, 0)   η₁(m₁, 1)   η₂(m₁, 0)   η₂(m₁, 1)
  m₁ = 0      0000        0111        0000        0111
  m₁ = 1      1100        1011        1011        1100
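As an illustration (not part of the original text), the protection afforded each message position by the two encodings of Table I can be computed by brute force: for each position, take the minimum Hamming distance between codewords whose messages differ in that position. This is exactly the "separation" quantity developed later in the paper.

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

def separation_vector(encoding, k):
    """For each message position i, the minimum distance between
    codewords whose messages differ in position i."""
    msgs = list(product((0, 1), repeat=k))
    return tuple(min(hamming(encoding[m], encoding[mp])
                     for m in msgs for mp in msgs if m[i] != mp[i])
                 for i in range(k))

# The two encodings of Table I, as mappings from GF(2)^2 to GF(2)^4.
eta1 = {(0, 0): (0, 0, 0, 0), (0, 1): (0, 1, 1, 1),
        (1, 0): (1, 1, 0, 0), (1, 1): (1, 0, 1, 1)}
eta2 = {(0, 0): (0, 0, 0, 0), (0, 1): (0, 1, 1, 1),
        (1, 0): (1, 0, 1, 1), (1, 1): (1, 1, 0, 0)}

print(separation_vector(eta1, 2))  # (2, 3): bit 2 survives any single error
print(separation_vector(eta2, 2))  # (2, 2): neither bit is so protected
```

The value 3 at position 2 of η₁ is what allows m₂ to be recovered despite any single error, while η₂ offers no position a separation exceeding 2.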
One place where unequal-error-protection codes were expected to find application was in the transmission of digital telemetry data. Here it may be desirable to give high order digits more protection than low order digits. Calculated data for several such codes so employed were given by Kilgus and Gore (1972b).

Recently, several papers (Crimmins, 1976; Crimmins et al., 1969; Crimmins and Horowitz, 1970; Redinbo, 1976; Redinbo and Wolf, 1974; and Wolf and Redinbo, 1974) studying the mean-square-error protection afforded numeric data by block coding schemes have appeared. The approach in these papers is not to construct codes, but rather to find optimal encoding¹ and decoding schemes for a fixed linear code. Crimmins et al. (1969) gave a restricted formulation of the problem in which each encoding of a binary linear code generates a decoding scheme in a prescribed manner. They gave a procedure for finding linear encodings which are optimal in the set of all encodings, linear and nonlinear, of the fixed binary linear code under consideration.

Our purpose is to investigate the encodings of a fixed linear code. However, we shall use the unequal-error-protection approach to evaluate and compare these encodings instead of the mean-square-error evaluation. The mean-square-error evaluation method of Crimmins et al. (1969) associates a nonnegative real number with each encoding. Since each code has only finitely many possible encodings, one of the encodings must have mean-square-error as small (good) as possible for the code. Thus, it is immediate that every code has an encoding which is optimal with respect to the mean-square-error evaluation. Using the unequal-error-protection approach we will prove that optimal encodings exist for linear codes. In doing this a procedure will be found for obtaining an encoding which is optimal in the set of all encodings, linear and nonlinear. This result parallels that of Crimmins et al.
(1969), and the procedure found is similar to theirs. Further, when the encodings of a linear code are evaluated using a measure of unequal-error-protection based on either the Hamming or the Lee metric, the procedure will yield a linear encoding which is optimal among all encodings of the fixed linear code under consideration. In these cases, any generator matrix which has minimal Hamming or Lee weight, respectively, among all generator matrices for its row space, induces an encoding which is optimal for its row space.

Masnick and Wolf (1967) assign each information position an error protection level. Under this scheme, if an information position has error protection level, f, and not more than f errors occur in the reception of a code word, then the original value of the position in question can be determined correctly even though it may be impossible to determine the entire code word correctly. Instead of using this generalization of the error correcting capability of a code, we employ

¹ When numeric data are encoded using a 1-1 mapping (e.g., Crimmins et al., 1969) from {0, 1, ..., 2^k − 1} onto a code, we identify these integers with their binary representations (following Mitryayev, 1963) to obtain an equivalent encoding.
a generalization of the minimum distance of a block code. Given an encoding of a block code, for each message position we will define an associated separation, which is related to its error protection level in the same manner that the minimum distance of a block code is related to its error correcting capability. Encodings which we find to be optimal will necessarily be optimal with respect to their error protection levels.

Block codes may be used to detect errors, correct errors, fill in erasures, or combinations of these things. Fortunately, one parameter, minimum distance, suffices to measure the capabilities of a block code regardless of the type of protection desired, provided that one stays within the list given. However, different decoding algorithms are used depending on the task at hand. Given a particular encoding, the separation associated with a message position measures the capability of a block code to detect errors which may cause that position to be in error, determine that position despite errors, determine that position despite erasures, or combinations of these things in an analogous manner. The decoding algorithms, given later, differ very little from those used when all positions receive the same protection. Depending on the types of protection desired, the message positions may be decoded separately or as a unit. Treating the message as a unit will not necessarily preclude giving different positions varying degrees of protection.

As was mentioned before, the protection provided to the message positions depends upon the encoding as well as the code. The generator matrices and encodings of interest are frequently nonsystematic. That is, the message positions may not appear explicitly in the code words. We will not be able to make reference to "the information positions" of the code word. Before an encoding function can be used, an inverse mapping must be constructed for use as a part of the decoding rule.
When we speak of choosing an optimal encoding, we will also be choosing decoding rules which will depend both on the encoding and on the type of error protection desired for each message position.

In order to handle the Hamming and Lee metrics simultaneously, we will develop results with respect to a function, w: F^n → ℝ, which has the property that the function d: F^n × F^n → ℝ given by d(x, y) ≜ w(x − y) is a metric. Such a function, w, will be called a weight function. We will have occasion to refer to the Hamming weight function specifically and will denote it by h. One can easily verify that necessary and sufficient conditions for a function, w: F^n → ℝ, to be a weight function are that for all x, y ∈ F^n

(i) w(x) = 0 if and only if x = 0_n,
(ii) w(x) = w(−x),
(iii) w(x + y) ≤ w(x) + w(y),

where 0_n denotes the zero vector in F^n.
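Conditions (i)-(iii) can be checked exhaustively on a small space. The sketch below (an illustration, not part of the original) does this for the Hamming weight and for the Lee weight, identifying F with the integers modulo 5 so that the Lee weight min(c, q − c) of a coordinate makes sense.

```python
from itertools import product

Q = 5  # work over GF(5), identified with Z_5 for the Lee weight

def hamming_w(x):
    """Hamming weight: number of nonzero coordinates."""
    return sum(c != 0 for c in x)

def lee_w(x):
    """Lee weight: sum over coordinates of min(c, Q - c)."""
    return sum(min(c, Q - c) for c in x)

def is_weight_function(w, n):
    """Verify axioms (i)-(iii) by brute force over (Z_Q)^n."""
    vecs = list(product(range(Q), repeat=n))
    add = lambda x, y: tuple((a + b) % Q for a, b in zip(x, y))
    neg = lambda x: tuple((-a) % Q for a in x)
    zero = (0,) * n
    return (all((w(x) == 0) == (x == zero) for x in vecs)            # (i)
            and all(w(x) == w(neg(x)) for x in vecs)                 # (ii)
            and all(w(add(x, y)) <= w(x) + w(y)                      # (iii)
                    for x in vecs for y in vecs))

print(is_weight_function(hamming_w, 2), is_weight_function(lee_w, 2))
```

Both checks succeed, so either weight induces a metric d(x, y) = w(x − y) on F^n as the text requires.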
Suppose X ⊆ F^n. It will be convenient to abbreviate

w[X] ≜ min{w(x): x ∈ X}, with w[∅] ≜ +∞.

This is not to be confused with the usual conventions for extending point functions to sets, i.e., for example, w(X) ≜ {w(x): x ∈ X}.

Since we will be dealing with linear codes, which are the row spaces of their generator matrices, it is desirable to develop some notational devices to assist in arguments involving the rows of a matrix. Given a k × n matrix M with entries in F, we denote the entry in row i and column j by M_ij, the ith row (vector) by M_i·, and the jth column (vector) by M_·j. The set of all rows of M is denoted by M_r ≜ {M_1·, ..., M_k·}. For any function, f: F^n → S, where S is any set, define

f_r(M) ≜ (f(M_1·), ..., f(M_k·))^t.

Thus, f_r(M) ∈ S^k is a vector whose ith component is found by applying f to the ith row of M. Our main line of argument will require only some elementary knowledge of linear algebra. When listed, vectors will always be enclosed in parentheses and matrices in brackets.

In our discussion of product codes, some properties of the (left) Kronecker product of matrices will be required. In particular, recall that (A ⊗ B)(C ⊗ D) = AC ⊗ BD. We shall take Kronecker products over both F and ℝ and will denote the respective operators by ⊗_F and ⊗_ℝ. The Hamming weight function, h, applied to a matrix will count the number of nonzero entries in that matrix. The Lee weight function applied to a matrix will add the Lee weights of its entries. It is easy to show that given two vectors a ∈ F^{n₁}, b ∈ F^{n₂}, their Hamming weights are related by

h(a ⊗_F b) = h(a) h(b).    (1)

This result may be used to prove that, given any two matrices A, B with entries in F,

h_r(A ⊗_F B) = h_r(A) ⊗_ℝ h_r(B).    (2)

We will denote the span operator by ⟨·⟩. Given a set S of vectors taken from a finite vector space, V, over the field F,
⟨S⟩ is the subspace consisting of all linear combinations of elements of S. ⟨S⟩ is the smallest subspace of V containing S. According to convention, ⟨∅⟩ = {0} is the subspace consisting of only the zero vector. Set brackets may be omitted when taking the span of a list of vectors, e.g., ⟨a, b, c⟩ = ⟨{a, b, c}⟩.

2. THE MINIMUM SEPARATION APPROACH

One possible expression for the minimum distance, d, of a code, C, with encoding η: F^k → C is

d = w[{c − c': c ≠ c'}] = w[{η(m) − η(m'): m ≠ m'}],

where c, c' range over C and m, m' over F^k. We make a definition that is syntactically similar.

DEFINITION 1. Given an encoding, η, of an (n, k) code, C, the separation vector of η with respect to a weight function, w, is denoted by S_w(η) ∈ ℝ^k and is defined by:

S_w(η)_i ≜ w[{η(m) − η(m'): m_i ≠ m_i'}]    (i = 1, ..., k),    (3)

where m and m' range over F^k.

When no confusion will result, subscripted references to weight functions may be dropped. If L is the encoding induced by the generator matrix, G, then we may write S(G) or S_w(G) instead of S_w(L). We will always subscript h for emphasis when a separation vector is taken with respect to the Hamming weight function.

It is easily shown that the minimum distance, d, and any separation vector, S(η), of a code are related by

d = min{S(η)_1, ..., S(η)_k}.

The ith component of the separation vector is used to guarantee protection for the ith message position. Since correct determination of all message positions is equivalent to correct determination of the entire message, the last equation should not be unexpected.

THEOREM 1. Given an encoding, η, for an (n, k) code,

(a) η allows detection of any error pattern of w-weight not more than d at the ith message position if and only if d < S(η)_i.

(b) η allows correction of any error pattern of w-weight not more than t at the ith message position if 2t < S(η)_i.
(c) η allows correction of any error pattern of w-weight not more than t and simultaneous detection of any error pattern of w-weight not more than d ≥ t at the ith message position if t + d < S(η)_i.
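Before the proof, the content of part (b) can be seen concretely (this demonstration is not in the original). The encoding η₁ of Table I has separation vector (2, 3), so with t = 1 the condition 2t < S(η)_i holds at position 2 but not at position 1: every maximum likelihood decoding choice recovers m₂ after any single error, while m₁ can be lost. The error protection level f = ⌊(S_h(η)_i − 1)/2⌋ derived after the proof is computed alongside.

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

def protection_level(s_i):
    """Error protection level f with 2f + 1 <= S_i, i.e. floor((S_i-1)/2)."""
    return (s_i - 1) // 2

# eta_1 of Table I; its separation vector is (2, 3).
enc = {(0, 0): (0, 0, 0, 0), (0, 1): (0, 1, 1, 1),
       (1, 0): (1, 1, 0, 0), (1, 1): (1, 0, 1, 1)}
dec = {c: m for m, c in enc.items()}

def position_survives_single_errors(i):
    """Theorem 1(b) check for t = 1 at position i: every maximum
    likelihood (nearest codeword) choice must recover m_i."""
    for m, c in enc.items():
        for j in range(4):                          # all single-error patterns
            r = tuple(b ^ (p == j) for p, b in enumerate(c))
            best = min(hamming(r, cw) for cw in dec)
            for cw in dec:
                if hamming(r, cw) == best and dec[cw][i] != m[i]:
                    return False
    return True

print(protection_level(3), position_survives_single_errors(1))  # 1 True
print(protection_level(2), position_survives_single_errors(0))  # 0 False
```

Position 2 (separation 3, protection level 1) survives every single error under any tie-breaking, while position 1 (separation 2, protection level 0) does not.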
Proof of (b). Suppose that m ∈ F^k is a message and that c ≜ η(m) ∈ C is transmitted through a channel. Let e ∈ F^n be any error pattern of w-weight not more than t. If c is perturbed by e in passage through the channel, then the received word is given by r ≜ c + e. Consider the following decoding procedure.

Maximum likelihood decoding procedure. (1) Find any code word c' which is as close to r (with respect to the metric induced by w) as any other code word. (2) Set m' = η⁻¹(c') and guess m_i = m_i'.

Clearly w(c' − r) ≤ t since c' must be at least as close to r as c is. Denote m' ≜ η⁻¹(c'). Now, m_i is given correctly by m_i = m_i'. If this were not the case, then a contradiction arises since using Eq. (3) we obtain

S(η)_i ≤ w(η(m) − η(m')) = w(c − c') ≤ w(c − r) + w(r − c') ≤ t + t < S(η)_i.

Proof of (c). Suppose that a message m ∈ F^k is encoded to give a code word c ≜ η(m) which is perturbed by an error pattern e ∈ F^n to yield a received word r ≜ c + e. Consider the following decoding procedure (Wyner, 1965).

Bounded distance decoding procedure. (1) Find any code word c' which is as close to r as any other code word. (2) If w(c' − r) ≤ t, find m' ≜ η⁻¹(c'), guess m_i = m_i', and stop. (3) Otherwise, declare that the ith message position has a detected error.

This is an extension of the maximum likelihood decoding procedure used in the proof of (b). Since 2t ≤ t + d < S(η)_i, we already know that m_i will be determined correctly by this scheme whenever w(e) ≤ t. We must show that if w(e) ≤ d, then the algorithm will either determine m_i correctly or declare a detected error. In fact, assume to the contrary that w(e) ≤ d and m_i' ≠ m_i is computed at step (2). Then, using Eq. (3), a contradiction arises since

S(η)_i ≤ w(η(m) − η(m')) ≤ w(c − r) + w(r − c') ≤ d + t < S(η)_i.

Proof of (a). Sufficiency may be obtained by taking t = 0 in (c).
For the necessity let m, m' ∈ F^k satisfy m_i ≠ m_i' and S(η)_i = w(η(m) − η(m')). Taking c ≜ η(m'), e ≜ η(m) − η(m'), and r ≜ c + e, we see that d < S(η)_i. Q.E.D.

Let η be an encoding of an (n, k) code, C. For the Hamming weight function, part (b) of the last theorem implies that for any i ∈ {1, ..., k} the ith message digit is protected against t errors whenever 2t + 1 ≤ S_h(η)_i. Thus, the "error protection level" associated with the ith position is lower bounded by

f ≜ ⌊(S_h(η)_i − 1)/2⌋,    (4)
where ⌊·⌋ denotes the greatest integer function. In fact, the "error protection level" associated with the ith position is given exactly by (4); i.e., the value of the ith message position can be determined despite any occurrence of f errors, but not despite any occurrence of f + 1 errors. This relates our evaluation system to that of Masnick and Wolf (1967).

To prove the reverse inequality, consider any i ∈ {1, ..., k}. There exist m, m' ∈ F^k satisfying m_i ≠ m_i' and S_h(η)_i = w(η(m) − η(m')). Note that f is the largest integer satisfying 2f + 1 ≤ S_h(η)_i. Set J ≜ {j: η(m)_j ≠ η(m')_j, 1 ≤ j ≤ n}, and choose E ⊆ J satisfying |E| = f + 1 ≤ |J| = S_h(η)_i, where |·| signifies cardinality. Define e, e' ∈ F^n by

e_j = η(m')_j − η(m)_j for j ∈ E,  and e_j = 0 for j ∉ E;
e'_j = η(m)_j − η(m')_j for j ∈ J\E,  and e'_j = 0 for j ∉ J\E,

where \ denotes set difference. Since 2(f + 1) ≥ S_h(η)_i, it follows that the number of nonzero components of e' is at most f + 1. Now, suppose that the received word, r, given by

r ≜ η(m) + e = η(m') + e'

is encountered. Given only that no more than f + 1 errors occurred, there are at least two possibilities. (1) The original message was m and error pattern e occurred (f + 1 errors). (2) The original message was m' and error pattern e' occurred (≤ f + 1 errors). The values of the ith message positions, m_i and m_i', in these cases are different, which completes the proof.

The procedure(s) given in the proof of Theorem 1 can be used to guarantee protection for individual message positions even when an encoding-decoding scheme handles the entire message as a unit. We note two special cases. Suppose C to be a linear (n, k) code with generator matrix, G, and parity check matrix, H. Since G has maximal row rank, there exists an n × k matrix, G⁻, which is a right inverse of G and satisfies GG⁻ = I_k where I_k is the k × k identity matrix. G⁻ gives us a representation of the inverse of the encoding induced by G. Now, the scheme depicted in Fig.
1 will either give an error indication or the correct value for the ith message position in its output whenever w(e) < S(G)_i; and this statement is true for each i ∈ {1, ..., k}. For each s ∈ F^{n−k} set

V_s ≜ {v ∈ F^n: vH^t = s},
and choose l(s) ∈ V_s satisfying w(l(s)) = w[V_s]. Thus, for each syndrome, s, we have chosen a minimum w-weight coset leader, l(s), of its associated coset, V_s. Now, the scheme depicted in Fig. 2 determines a reconstructed message m' ≜ (r − l(rH^t))G⁻. The value of the ith position of the reconstructed message is correct whenever 2w(e) < S(G)_i, and this is true for each i ∈ {1, ..., k}.

[Figure: an error is indicated if rH^t ≠ 0.]
FIG. 1. An error detection scheme for linear codes.

[Figure: a corrector forms c' = r − l(rH^t), and the message is recovered as c'G⁻.]
FIG. 2. A minimum distance decoding scheme for linear codes.

While other results in this direction are possible, we shall be content to state one more. Its proof may be constructed from the proof of the parallel result for ordinary minimum distance by generalizing in the same manner as we did for the previous theorem.

THEOREM 2. Given an encoding, η, for an (n, k) code, C, η allows determination of the ith message position despite any simultaneous occurrence of not more than t errors and not more than e erasures if and only if 2t + e + 1 ≤ S_h(η)_i.

If η: F^k → C is an encoding and φ: {1, ..., k} → {1, ..., k} is a permutation, then define another encoding, η_φ, for C by η_φ(m) ≜ η(m ∘ φ⁻¹) where ∘ denotes composition of functions; i.e., for m = (m_1, ..., m_k) ∈ F^k,

η_φ(m_φ(1), ..., m_φ(k)) = η(m_1, ..., m_k).    (5)
Now, given any i ∈ {1, ..., k} it follows that (for any given weight function)

S(η_φ)_i = w[{η_φ(m) − η_φ(m'): m_i ≠ m_i'}]
         = w[{η(m ∘ φ⁻¹) − η(m' ∘ φ⁻¹): m_i ≠ m_i'}]
         = w[{η(m) − η(m'): (m ∘ φ)_i ≠ (m' ∘ φ)_i}]
         = S(η)_φ(i) ,

where m and m' range over F^k in the above expressions. We have shown

S(η_φ) = S(η) ∘ φ.    (6)

Thus, the separation vectors associated with η and η_φ differ only by a permutation of coordinates. We shall regard such pairs of encodings as being equivalent. This discussion will be incorporated into a formal result after a prerequisite definition.

Given x ∈ ℝ^k, we define x* ∈ ℝ^k to be the vector obtained from x by permuting its coordinates to obtain a nondecreasing vector. For any x ∈ ℝ^k, there exists a permutation φ: {1, ..., k} → {1, ..., k} such that x* = x ∘ φ⁻¹ and

x*_1 ≤ x*_2 ≤ ... ≤ x*_k ,    (7)

where x*_i is interpreted as (x*)_i. Thus, x = x* if and only if x is nondecreasing.

PROPOSITION 1. Given an encoding, η, of an (n, k) code, C, there exists another encoding, ξ, of the same code such that

S(ξ) = S(ξ)* = S(η)*.

If 0_n ∈ C then ξ may be chosen so as to satisfy ξ(0_k) = 0_n.

Proof. Given η, choose a permutation φ: {1, ..., k} → {1, ..., k} satisfying S(η)* = S(η) ∘ φ. We have already shown that the encoding η_φ defined by η_φ(m) ≜ η(m ∘ φ⁻¹) and satisfying (5) has separation vector S(η_φ) = S(η) ∘ φ = S(η)*.

If 0_n ∉ C, set ξ ≜ η_φ. If 0_n ∈ C, then set z ≜ η_φ⁻¹(0_n), and define ξ by ξ(m) ≜ η_φ(z − m) for all m ∈ F^k. Clearly ξ(0_k) = η_φ(z − 0_k) = η_φ(z) = 0_n. The separation vectors associated with η_φ and ξ are again the same since for i = 1, ..., k,

S(ξ)_i = w[{ξ(m) − ξ(m'): m_i ≠ m_i'}]
       = w[{η_φ(z − m) − η_φ(z − m'): m_i ≠ m_i'}]
       = w[{η_φ(m) − η_φ(m'): (z − m)_i ≠ (z − m')_i}]
       = w[{η_φ(m) − η_φ(m'): m_i ≠ m_i'}] = S(η_φ)_i = S(η)*_i ,
where m and m' range over F^k in the above expressions. In either case, ξ ≜ η_φ or ξ(m) ≜ η_φ(z − m); our proof is complete with the observation that

S(ξ) = S(η_φ) = S(η)* = (S(η)*)* = S(ξ)*.    Q.E.D.

Some simplification of the expression for the separation vector is possible for linear encodings. Suppose G is a generator matrix for an (n, k) code C; then for each i = 1, ..., k,

S(G)_i = w[{mG − m'G: m_i ≠ m_i'}]
       = w[{(m − m')G: (m − m')_i ≠ 0}]
       = w[{mG: m_i ≠ 0}]
       = w[C\{mG: m_i = 0}]
       = w[C\⟨G_r\G_i·⟩],

where we have written "G_i·" for the singleton set {G_i·} and m and m' range over F^k in each expression. If S(G) is nondecreasing, then we can show that for each i ∈ {1, ..., k}

S(G)_i = w[{mG: m_j ≠ 0 for some j ≥ i}].

That S(G)_i is not less than the right-hand expression follows from {mG: m_i ≠ 0} ⊆ {mG: m_j ≠ 0 for some j ≥ i}. The reverse inequality is true since if m ∈ F^k and m_j ≠ 0 for some j ≥ i, then S(G)_i ≤ ... ≤ S(G)_j ≤ w(mG). Another expression for the right-hand side is w[C\⟨G_1·, ..., G_(i−1)·⟩]. We have shown the following proposition.

PROPOSITION 2. Given a generator matrix, G, for a linear (n, k) code C, the following expressions are equal for any fixed i ∈ {1, ..., k}:

(a) S(G)_i ,
(b) w[{mG: m_i ≠ 0}],
(c) w[C\⟨G_r\G_i·⟩].

If S(G) is nondecreasing, then the following expression is equal to each of the preceding for any fixed i ∈ {1, ..., k}:

(d) w[C\⟨G_1·, ..., G_(i−1)·⟩].

The following corollary is an immediate consequence of either Proposition 2, (a) = (c) or of Eqs. (5) and (6).

COROLLARY 1. Let φ: {1, ..., k} → {1, ..., k} be a permutation. If G is a k × n generator matrix, then the generator matrix with rows G_φ(1)·, ..., G_φ(k)· has separation vector (S(G)_φ(1), ..., S(G)_φ(k)) = S(G) ∘ φ.
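The equalities of Proposition 2 can be verified numerically on the small code of Table I. The sketch below (an illustration, not part of the original) brute-forces expressions (a), (b), and (c) over GF(2) and confirms that they agree at every position.

```python
import numpy as np
from itertools import product

G = np.array([[1, 1, 0, 0],
              [0, 1, 1, 1]])       # a generator matrix of the Table I code
k, _ = G.shape
msgs = [np.array(m) for m in product((0, 1), repeat=k)]
w = lambda v: int(np.count_nonzero(np.asarray(v)))   # Hamming weight

def expressions(i):
    """Evaluate expressions (a), (b), (c) of Proposition 2 at position i."""
    a = min(w((m @ G - mp @ G) % 2)                       # (a) the definition
            for m in msgs for mp in msgs if m[i] != mp[i])
    b = min(w(m @ G % 2) for m in msgs if m[i] != 0)      # (b)
    code = {tuple(m @ G % 2) for m in msgs}
    span_rest = {tuple(m @ G % 2) for m in msgs if m[i] == 0}
    c = min(w(x) for x in code - span_rest)               # (c) C minus span
    return a, b, c

for i in range(k):
    print(i + 1, expressions(i))    # (2, 2, 2) at position 1, (3, 3, 3) at 2
```

All three expressions give the separation vector (2, 3) computed earlier for this code.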
Before turning to our main results we shall develop one more result in which the separation vector plays a role analogous to that usually played by the minimum distance of a code. Let A be a generator matrix for a linear (n₁, k₁) code, C₁, over GF(q), and let B be a generator matrix for a linear (n₂, k₂) code, C₂, over GF(q). Then the Kronecker product

A ⊗_F B ≜ [ A_11 B   ...   A_1n₁ B ]
          [   ⋮               ⋮    ]
          [ A_k₁1 B  ...   A_k₁n₁ B ]

of A and B forms a generator matrix for a linear (n₁ · n₂, k₁ · k₂) code C. C depends only upon C₁ and C₂, not upon the particular choice of A and B, and is called the Kronecker product of the codes C₁ and C₂ (e.g., see Section 1.5 of Blake and Mullin, 1975). Another form of the following theorem was stated by Kilgus (1971) for the case in which both C₁ and C₂ are majority logic decodable.

THEOREM 3. Let A be a generator matrix for a linear (n₁, k₁) code and let B be a generator matrix for a linear (n₂, k₂) code, both over the same finite field, F. Then

S_h(A ⊗_F B) = S_h(A) ⊗_ℝ S_h(B).

Proof. For convenience denote k = k₁ · k₂. Given v ∈ F^k or v ∈ ℝ^k, define f(v) to be the k₁ × k₂ matrix

f(v) = [ v_1         ...   v_k₂   ]
       [ v_k₂+1      ...   v_2k₂  ]
       [   ⋮                 ⋮    ]
       [ v_k−k₂+1    ...   v_k    ].

Given any m ∈ F^k, the reader may verify that

m(A ⊗_F B) = ((A^t f(m) B)_1· , ..., (A^t f(m) B)_n₁·).

Thus

h(m(A ⊗_F B)) = h(A^t f(m) B).

Now it may be verified that

f(S_h(A ⊗_F B))_ij = h[{A^t M B: M_ij ≠ 0}],

where M ranges over all k₁ × k₂ matrices with entries in F; and that

f(S_h(A) ⊗_ℝ S_h(B))_ij = S_h(A)_i S_h(B)_j ,

for i, j satisfying 1 ≤ i ≤ k₁, 1 ≤ j ≤ k₂. We will prove that

S_h(A)_i · S_h(B)_j ≤ h[{A^t M B: M_ij ≠ 0}],    (8)
which will imply that f(S_h(A) ⊗_ℝ S_h(B)) ≤ f(S_h(A ⊗_F B)) and hence that

S_h(A) ⊗_ℝ S_h(B) ≤ S_h(A ⊗_F B).

At this point, we have merely converted the statement of the problem to the direct product representation. Note that the rows of A^t M B are always code words in C₂, and the columns of A^t M B = [(MB)^t A]^t are always code words in C₁. We shall now parallel the proof usually employed (e.g., see Theorem 5.3 of Peterson and Weldon, 1972) to show that the minimum distance of the direct product code is the product of the minimum distances of the component codes.

Let integers i, j satisfying 1 ≤ i ≤ k₁ and 1 ≤ j ≤ k₂ be given. Suppose a k₁ × k₂ matrix M with entries in F is given and M_ij ≠ 0. Since the jth position of M_i· is nonzero, it follows that (M_i·)B = (MB)_i· is a code word in C₂ with at least S_h(B)_j nonzero entries. Abbreviate l ≜ S_h(B)_j and let (MB)_i,j(1), ..., (MB)_i,j(l) be any S_h(B)_j nonzero entries of (MB)_i·. Now, for k ∈ {1, ..., l}, ((MB)^t)_j(k)· has nonzero entry ((MB)^t)_j(k),i = (MB)_i,j(k) in the ith position. Thus, [((MB)^t)_j(k)·] · A = [(MB)^t A]_j(k)· is a code word in C₁ with at least S_h(A)_i nonzero entries. At least S_h(B)_j of the rows of (MB)^t A have at least S_h(A)_i nonzero entries; i.e., (MB)^t A has at least S_h(A)_i · S_h(B)_j nonzero entries. It follows that

h(A^t M B) = h((A^t M B)^t) = h((MB)^t A) ≥ S_h(A)_i · S_h(B)_j .

Since M was an arbitrary k₁ × k₂ matrix with entries in F, except for the stipulation M_ij ≠ 0, we have shown (8).

For the reverse inequality let any integer l ∈ {1, ..., k} be given. There exist unique positive integers i ∈ {1, ..., k₁} and j ∈ {1, ..., k₂} such that l = (i − 1)k₂ + j. Let a ∈ F^{k₁} and b ∈ F^{k₂} be chosen so that a_i ≠ 0, b_j ≠ 0 and S_h(A)_i = h(aA), S_h(B)_j = h(bB). It follows that (a ⊗_F b)_l ≠ 0. The inequality is at hand.
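The statement of Theorem 3 can be checked numerically on small binary generator matrices. This sketch (an illustration, not from the original) computes separation vectors via the Proposition 2(b) form and compares both sides of the theorem.

```python
import numpy as np
from itertools import product

def sep(G):
    """Hamming separation vector via Proposition 2(b):
    S(G)_i = min{ h(mG) : m_i != 0 }, brute-forced over GF(2)."""
    k = G.shape[0]
    msgs = [np.array(m) for m in product((0, 1), repeat=k)]
    return [min(int(np.count_nonzero(m @ G % 2)) for m in msgs if m[i])
            for i in range(k)]

A = np.array([[1, 1, 0, 0], [0, 1, 1, 1]])   # S_h(A) = [2, 3]
B = np.array([[1, 0, 1], [0, 1, 1]])         # S_h(B) = [2, 2]

lhs = sep(np.kron(A, B) % 2)                 # separation of the product code
rhs = [int(x) for x in np.kron(sep(A), sep(B))]
print(lhs, rhs)   # both equal [4, 4, 6, 6], as Theorem 3 predicts
```

The index convention matches the proof: position l = (i − 1)k₂ + j of the product separation vector carries the product S_h(A)_i · S_h(B)_j.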
For arbitrary l ∈ {1, ..., k}, we apply (1) to obtain

(S_h(A) ⊗_ℝ S_h(B))_l = S_h(A)_i · S_h(B)_j = h(aA) · h(bB)
                      = h(aA ⊗_F bB) = h((a ⊗_F b)(A ⊗_F B))
                      ≥ h[{m(A ⊗_F B): m_l ≠ 0}] = S_h(A ⊗_F B)_l ,

where m ranges over F^k. Q.E.D.

3. OPTIMAL ENCODINGS

We shall impose the usual partial order on ℝ^k; i.e., given x, y ∈ ℝ^k, we will write x ≥ y if and only if x_i ≥ y_i for i = 1, ..., k. Given a set of vectors A ⊆ ℝ^k, a ∈ A will be said to be Gale optimal in A (cf. p. 277 of Lawler, 1975) provided
that given any other member b ∈ A, there exists a permutation φ: {1, ..., k} → {1, ..., k} such that a ∘ φ ≥ b. In our notation this may be formulated as

a ∈ A is Gale optimal in A if and only if for all b ∈ A, a* ≥ b*.

With this definition at hand we are prepared to give meaning to the term "optimal."

DEFINITION 2(a). Let C be a code and let ℰ denote the set of all encodings for C. η ∈ ℰ will be said to be an optimal encoding (for C) (with respect to the weight function w) if and only if S_w(η) is Gale optimal in S_w(ℰ).

DEFINITION 2(b). If C is linear and 𝒢 is the set of all generator matrices for C, we will say that G ∈ 𝒢 is an optimal generator matrix (for C) (with respect to the weight function w) if and only if S_w(G) is Gale optimal in S_w(𝒢).

To show η to be an optimal encoding, it will suffice to show that for all ξ ∈ ℰ

S_w(η)* ≥ S_w(ξ)*.

To show that G is an optimal generator matrix it will suffice to show that for all A ∈ 𝒢

S_w(G)* ≥ S_w(A)*.

It is evident that η ∈ ℰ is an optimal encoding of the code, C, if and only if S(η)* is the greatest element of the finite partially ordered set S(ℰ)* = {S(ξ)*: ξ ∈ ℰ}. Since a partially ordered set has at most one greatest element, it follows that if η ∈ ℰ is an optimal encoding and ξ ∈ ℰ, then ξ is an optimal encoding if and only if S(ξ)* = S(η)*. If C is linear, we may apply a similar argument to show that given an optimal generator matrix G ∈ 𝒢 and any other generator matrix A ∈ 𝒢, then A is an optimal generator matrix if and only if S(A)* = S(G)*.

Suppose C is a linear code, η is an optimal encoding of C, and G is an optimal generator matrix for C; then S(G)* ≤ S(η)* since S(𝒢)* ⊆ S(ℰ)*. However, we will find that equality may not hold. When C is linear we will be able to find optimal generator matrices for C. We will then give conditions on C which will guarantee that every optimal generator matrix for C induces an optimal encoding of C.
From our results, we will then deduce that every linear code has an optimal encoding. However, we will be able to give an example of a linear code, C ⊆ F^n, whose optimal encodings all fail to be linear over F. The best one can do is to guarantee the existence of an optimal encoding which is linear over the prime subfield of F. We now proceed with the first step of this development.
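As a concrete illustration (not in the original), the Gale-optimal value S* can be found for the small code of Table I by enumerating every one of its generator matrices and comparing sorted separation vectors.

```python
import numpy as np
from itertools import product, permutations

def sep(G):
    """Hamming separation vector, S(G)_i = min{ h(mG) : m_i != 0 }."""
    k = G.shape[0]
    msgs = [np.array(m) for m in product((0, 1), repeat=k)]
    return [min(int(np.count_nonzero(m @ G % 2)) for m in msgs if m[i])
            for i in range(k)]

# Every generator matrix of the Table I code is an ordered pair of
# distinct nonzero codewords (any two are independent in this code).
words = [(1, 1, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1)]
mats = [np.array(p) for p in permutations(words, 2)]

sorted_seps = [tuple(sorted(sep(G))) for G in mats]
best = max(sorted_seps)        # the Gale-optimal sorted separation vector
print(best, sorted_seps.count(best))
```

The greatest sorted vector is (2, 3), attained by 4 of the 6 generator matrices; the remaining two, which pair 0111 with 1011, achieve only (2, 2), so they are not optimal.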
Given a linear code C and a weight function w, for each ρ ∈ ℝ ∪ {∞}, denote

C_w^ρ ≜ {c ∈ C: w(c) < ρ}.    (9)

Now suppose ρ ∈ ℝ and G is a generator matrix for the linear code C. Clearly C_w^ρ ⊆ ⟨G_r⟩. Further, if X, Y ⊆ G_r such that C_w^ρ ⊆ ⟨X⟩ and C_w^ρ ⊆ ⟨Y⟩, then

C_w^ρ ⊆ ⟨X⟩ ∩ ⟨Y⟩ = ⟨X ∩ Y⟩

follows since G_r is linearly independent. Thus, there exists a smallest subset, G_w^ρ, of G_r such that C_w^ρ ⊆ ⟨G_w^ρ⟩; i.e.,

G_w^ρ ≜ ∩ {X ⊆ G_r: C_w^ρ ⊆ ⟨X⟩}.    (10)

When no confusion will result, the subscripted references to weight functions may be omitted. We relate the sets defined in (9) and (10) to the separation vector associated with G in the following lemma.

LEMMA 1. Given a k × n generator matrix, G, for all i ∈ {1, ..., k} and ρ ∈ ℝ,

S(G)_i ≥ ρ if and only if G_i· ∉ G^ρ.

Proof. Fix i ∈ {1, ..., k}. If G_i· ∉ G^ρ, then C^ρ ⊆ ⟨G^ρ⟩ ⊆ ⟨G_r\G_i·⟩ implies

S(G)_i = w[C\⟨G_r\G_i·⟩] ≥ w[C\C^ρ] ≥ ρ.

For the reverse implication, suppose G_i· ∈ G^ρ. Assume, for a moment, that C^ρ ⊆ ⟨G_r\G_i·⟩. Since C^ρ ⊆ ⟨G^ρ⟩ this would imply that

C^ρ ⊆ ⟨G^ρ⟩ ∩ ⟨G_r\G_i·⟩ = ⟨G^ρ ∩ (G_r\G_i·)⟩ = ⟨G^ρ\G_i·⟩.

However, this cannot be since G^ρ is the minimal subset of G_r satisfying C^ρ ⊆ ⟨G^ρ⟩. Thus, the assumption is in error, and C^ρ ⊄ ⟨G_r\G_i·⟩ lest the assumption be implied. We have shown

C^ρ ∩ (C\⟨G_r\G_i·⟩) ≠ ∅.

Hence, S(G)_i = w[C\⟨G_r\G_i·⟩] < ρ. Q.E.D.

We now employ this lemma to obtain a basic result for linear codes. Note that w(C) = {w(c): c ∈ C}, whereas w[C] = min w(C).

THEOREM 4. A necessary and sufficient condition for a gener