LINEAR ALGEBRA AND ITS APPLICATIONS
SECOND EDITION

Gilbert Strang
Copyright © 1976, 1980, by Academic Press, Inc.
All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Academic Press, Inc.
111 Fifth Avenue, New York, New York 10003

United Kingdom edition published by
Academic Press, Inc. (London) Ltd.
24/28 Oval Road, London NW1 7DX

ISBN: 0-12-673660-X
Library of Congress Catalog Card Number: 79-53993
Printed in the United States of America
CONTENTS

Preface  ix

1  GAUSSIAN ELIMINATION
   1.1  Introduction  1
   1.2  An Example of Gaussian Elimination  2
   1.3  Matrix Notation and Matrix Multiplication  6
   1.4  Gaussian Elimination = Triangular Factorization  19
   1.5  Row Exchanges, Inverses, and Transposes  26
   1.6  Band Matrices, Applications, and Roundoff  37
   Review Exercises  45

2  THE THEORY OF SIMULTANEOUS LINEAR EQUATIONS
   2.1  Vector Spaces and Subspaces  48
   2.2  The Solution of m Equations in n Unknowns  55
   2.3  Linear Independence, Basis, and Dimension  62
   2.4  The Four Fundamental Subspaces  71
   2.5  Orthogonality of Vectors and Subspaces  80
   2.6  Pairs of Subspaces and Products of Matrices  92
   Review Exercises  100

3  ORTHOGONAL PROJECTIONS AND LEAST SQUARES
   3.1  Inner Products and Projections onto Lines  103
   3.2  Projections onto Subspaces and Least Squares Approximations  111
   3.3  Orthogonal Bases, Orthogonal Matrices, and Gram-Schmidt Orthogonalization  121
   3.4  The Pseudoinverse and the Singular Value Decomposition  137
   3.5  Weighted Least Squares  145
   Review Exercises  150

4  DETERMINANTS
   4.1  Introduction  153
   4.2  The Properties of the Determinant  155
   4.3  Formulas for the Determinant  161
   4.4  Applications of Determinants  169
   Review Exercises  177

5  EIGENVALUES AND EIGENVECTORS
   5.1  Introduction  179
   5.2  The Diagonal Form of a Matrix  190
   5.3  Difference Equations and the Powers A^k  195
   5.4  Differential Equations and the Exponential e^At  205
   5.5  The Complex Case: Hermitian and Unitary Matrices  217
   5.6  Similarity Transformations and Triangular Forms  231
   Review Exercises  240

6  POSITIVE DEFINITE MATRICES
   6.1  Minima, Maxima, and Saddle Points  243
   6.2  Tests for Positive Definiteness  249
   6.3  Semidefinite and Indefinite Matrices; Ax = λBx  256
   6.4  Minimum Principles and Rayleigh's Quotient  264
   6.5  The Rayleigh-Ritz Principle and Finite Elements  273

7  COMPUTATIONS WITH MATRICES
   7.1  Introduction  279
   7.2  The Norm and Condition Number of a Matrix  280
   7.3  The Computation of Eigenvalues  287
   7.4  Iterative Methods for Ax = b  297

8  LINEAR PROGRAMMING AND GAME THEORY
   8.1  Linear Inequalities  305
   8.2  The Simplex Method  311
   8.3  The Theory of Duality  323
   8.4  Network Models  334
   8.5  Game Theory and the Minimax Theorem  339

APPENDIX A  LINEAR TRANSFORMATIONS, MATRICES, AND CHANGE OF BASIS  348

APPENDIX B  THE JORDAN FORM  359

APPENDIX C  COMPUTER CODES FOR LINEAR ALGEBRA  367

References  377

Solutions to Exercises  379

Index  409
PREFACE
I believe that the teaching of linear algebra has become too abstract. This is a sweeping judgment, and perhaps it is too sweeping to be true. But I feel certain that a text can explain the essentials of linear algebra, and develop the ability to reason mathematically, without ignoring the fact that this subject is as useful and central and applicable as calculus. It has a simplicity which is too valuable to be sacrificed.

Of course there are good reasons for the present state of courses in linear algebra: The subject is an excellent introduction to the precision of a mathematical argument, and to the construction of proofs. These virtues I recognize and accept (and hope to preserve); I enjoyed teaching in exactly this way. Nevertheless, once I began to experiment with alternatives at M.I.T., another virtue became equally important: Linear algebra allows and even encourages a very satisfying combination of both elements of mathematics, abstraction and application.

As it is, too many students struggle with the abstraction and never get to see the application. And too many others, especially those who are outside mathematics departments, never take the course. Even our most successful students tend to become adept at abstraction, but inept at any calculation: solving linear equations by Cramer's rule, for example, or understanding eigenvalues only as roots of the characteristic equation. There is a growing desire to make our teaching more useful than that, and more open.

I hope to treat linear algebra in a way which makes sense to a wide variety of students at all levels. This does not imply that the real mathematics is absent; the subject deserves better than that. It does imply less concentration on rigor for its own sake, and more on understanding; we try to explain rather than to deduce. Some definitions are formal, but others are allowed to come to the surface in the middle of a discussion. In the same way, some proofs are intended to be orderly and precise, but not all. In every case the underlying theory has to be there; it is the core of the subject, but it can be motivated and reinforced by examples.
One specific difficulty in constructing the course is always present, and is hard to postpone: How should it start? Most students come to the first class already knowing something about linear equations. Nevertheless, I am convinced that linear algebra must begin with the fundamental problem of n equations in n unknowns, and that it must teach the simplest and most useful method of solution - Gaussian elimination (not determinants!). Fortunately, even though this method is simple, there are a number of insights that are central to its understanding and new to almost every student. The most important is the equivalence between elimination and matrix factorization; the coefficient matrix is transformed into a product of triangular matrices. This provides a perfect introduction to matrix notation and matrix multiplication.

The other difficulty is to find the right speed. If matrix calculations are already familiar, then Chapter 1 must not be too slow; the next chapter is the one which demands hard work. Its goal is a genuine understanding, deeper than elimination can give, of the equation Ax = b. I believe that the introduction of four fundamental subspaces - the column space of A, the row space, and their orthogonal complements, the two nullspaces - is an effective way to generate examples of linear dependence and independence, and to illustrate the ideas of basis and dimension and rank. The orthogonality is also a natural extension to n dimensions of the familiar geometry of three-dimensional space. And of course those four subspaces are the key to Ax = b.

Chapters 1-5 are really the heart of a course in linear algebra. They contain a large number of applications to physics, engineering, probability and statistics, economics, and biology. (There is also the geometry of a methane molecule, and even an outline of factor analysis in psychology, which is the one application that my colleagues at M.I.T. refuse to teach!) At the same time, you will recognize that this text can certainly not explain every possible application of matrices. It is simply a first course in linear algebra. Our goal is not to develop all the applications, but to prepare for them - and that preparation can only come by understanding the theory.

This theory is well established. After the vector spaces of Chapter 2, we study projections and inner products in Chapter 3, determinants in Chapter 4, and eigenvalues in Chapter 5. I hope that engineers and others will look especially at Chapter 5, where we concentrate on the uses of diagonalization (including the spectral theorem) and save the Jordan form for an appendix. Each of these chapters is followed by an extra set of review exercises. In my own teaching I have regarded the following sections as optional: 3.4-3.5, 6.4-6.5, 7.1-7.4, and most of 1.6 and 2.6. I use the section on linear transformations in a flexible way, as a source of examples that go outside Rⁿ and of a complementary approach to the theory; it illuminates in a new way what has been concretely understood. And I believe that even a brief look at Chapter 8 allows a worthwhile but relaxed introduction to linear programming and game theory - maybe my class is happier because it comes at the very end, without examination.

With this edition there is also a new Manual which I hope instructors will request from the publisher. It is a collection of ideas about the teaching of applied linear algebra, arranged section by section, and I very much hope it will grow; all suggestions and problems will be gratefully received (and promptly included). It also gives
solutions to the review exercises, which now range from direct questions on the text to my favorite about the connectedness of the matrices with positive determinant.

I should like to ask one favor of the mathematician who simply wants to teach basic linear algebra. That is the true purpose of the book, and I hope he will not be put off by the "operation counts," and the other remarks about numerical computation, which arise especially in Chapter 1. From a practical viewpoint these comments are obviously important. Also from a theoretical viewpoint they have a serious purpose - to reinforce a detailed grasp of the elimination sequence, by actually counting the steps. In this edition there is also a new appendix on computer subroutines, including a full code for solving Ax = b. I hope that students will have a chance to experiment with it. But there is no need to discuss this or any other computer-oriented topic in class; any text ought to supplement as well as summarize the lectures.

In short, a book is needed that will permit the applications to be taught successfully, in combination with the underlying mathematics. That is the book I have tried to write.

Many readers have sent ideas and encouragement for this second edition, and I am tremendously grateful. The result is a better introduction to vector spaces, a large number of new exercises, and hundreds of changes and improvements and corrections. Nevertheless, the spirit of the book remains the same. My hope is to help construct a course with a real purpose. That is intangible but it underlies the whole book, and so does the support I have been given by my family; they are more precious than I can say. Beyond this there is an earlier debt, which I can never fully repay. It is to my parents, and I now dedicate the book to them, hoping that they will understand how much they gave to it: Thank you both.

GILBERT STRANG
1  GAUSSIAN ELIMINATION

1.1 • INTRODUCTION

The central problem of linear algebra is the solution of simultaneous linear equations. The most important case, and the simplest, is when the number of unknowns equals the number of equations. Therefore we begin with this problem: n equations in n unknowns.
Two ways of solving simultaneous equations are proposed, almost in a sort of competition, from high school texts on. The first is the method of elimination: Multiples of the first equation in the system are subtracted from the other equations, in such a way as to remove the first unknown from those equations. This leaves a smaller system, of n - 1 equations in n - 1 unknowns. The process is repeated over and over until there remains only one equation and one unknown, which can be solved immediately. Then it is not hard to go backward, and find all the other unknowns in reverse order; we shall work out an example in a moment. A second and more sophisticated way introduces the idea of determinants. There is an exact formula, called Cramer's rule, which gives the solution (the correct values of the unknowns) as a ratio of two n by n determinants. It is not always obvious from the examples that are worked in a textbook (n = 3 or n = 4 is about the upper limit on the patience of a reasonable human being) which way is better.

In fact, the more sophisticated formula involving determinants is a disaster, and elimination is the algorithm that is constantly used to solve large systems of simultaneous equations. Our first goal is to understand this algorithm. It is generally called Gaussian elimination.

The algorithm is deceptively simple, and in some form it may already be familiar to
the reader. But there are four aspects that lie deeper than the simple mechanics of elimination, and which - together with the algorithm itself - we want to explain in this chapter. They are:

(1) The interpretation of the elimination method as a factorization of the coefficient matrix. We shall introduce matrix notation for the system of simultaneous equations, writing the n unknowns as a vector x and the n equations in the matrix shorthand Ax = b. Then elimination amounts to factoring A into a product LU of a lower triangular matrix L and an upper triangular matrix U. This is a basic and very useful observation. Of course, we have to introduce matrices and vectors in a systematic way, as well as the rules for their multiplication. We also define the transpose Aᵀ and the inverse A⁻¹ of a matrix A.

(2) In most cases the elimination method works without any difficulties or modifications. In some exceptional cases it breaks down - either because the equations were originally written in the wrong order, which is easily fixed by exchanging them, or else because the equations Ax = b fail to have a unique solution. In the latter case there may be no solution, or infinitely many. We want to understand how, at the time of breakdown, the elimination process identifies each of these possibilities.

(3) It is essential to have a rough count of the number of arithmetic operations required to solve a system by elimination. In many practical problems the decision of how many unknowns to introduce - balancing extra accuracy in a mathematical model against extra expense in computing - is governed by this operation count.

(4) We also want to see, intuitively, how sensitive to roundoff error the solution x might be. Some problems are sensitive; others are not. Once the source of difficulty becomes clear, it is easy to guess how to try to control it. Without control, a computer could carry out millions of operations, rounding each result to a fixed number of digits, and produce a totally useless "solution."

The final result of this chapter will be an elimination algorithm which is about as efficient as possible. It is essentially the algorithm that is in constant use in a tremendous variety of applications. And at the same time, understanding it in terms of matrices - the coefficient matrix, the matrices that carry out an elimination step or an exchange of rows, and the final triangular factors L and U - is an essential foundation for the theory.
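As a small preview of point (1), the factorization can be watched numerically. The sketch below is an illustration added for this purpose, not part of the text; it assumes Python with NumPy and SciPy are available, and it uses the coefficient matrix of the example worked in the next section. A library routine exchanges rows for accuracy, so a permutation matrix P appears along with the triangular factors L and U; row exchanges are taken up later in the chapter.

    import numpy as np
    from scipy.linalg import lu

    # Coefficient matrix of the 3 by 3 example in Section 1.2.
    A = np.array([[ 2., 1., 1.],
                  [ 4., 1., 0.],
                  [-2., 2., 1.]])

    P, L, U = lu(A)                      # partial pivoting: A = P L U
    print(np.allclose(P @ L @ U, A))     # True: A is a product of triangular pieces
    print(L)                             # lower triangular, ones on the diagonal
    print(U)                             # upper triangular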
1.2 • AN EXAMPLE OF GAUSSIAN ELIMINATION

The way to understand this subject is by example. We begin in three dimensions with the system

     2u +  v + w =  1
     4u +  v     = -2
    -2u + 2v + w =  7.                                           (1)

The problem is to find the unknown values of u, v, and w, and we shall apply Gaussian
elimination. (Gauss is recognized as the greatest of all mathematicians, but certainly not because of this invention, which probably took him ten minutes. Ironically, however, it is the most frequently used of all the ideas that bear his name.) The method starts by subtracting multiples of the first equation from the others, so as to eliminate u from the last two equations. This requires that we

(a) subtract 2 times the first equation from the second;
(b) subtract -1 times the first equation from the third.

The result is an equivalent system of equations

     2u + v +  w =  1
        - v - 2w = -4
         3v + 2w =  8.                                           (2)
The coefficient 2, which multiplied the first unknown u in the first equation, is known as the pivot in this first elimination step.

At the second stage of elimination, we ignore the first equation. The other two equations involve only the two unknowns v and w, and the same elimination procedure can be applied to them. The pivot for this stage is -1, and a multiple of this second equation will be subtracted from the remaining equations (in this case there is only the third one remaining) so as to eliminate v. We add 3 times the second equation to the third or, in other words, we

(c) subtract -3 times the second equation from the third.

The elimination process is now complete, at least in the "forward" direction, and leaves the simplified system

     2u + v +  w =  1
        - v - 2w = -4
            - 4w = -4.                                           (3)

There is an obvious order in which to solve this system. The last equation gives w = 1; substituting into the second equation, we find v = 2; then the first equation gives u = -1. This simple process is called back-substitution.
It is easy to understand how the elimination idea can be extended to n equations in n unknowns, no matter how large the system may be. At the first stage, we use multiples of the first equation to annihilate all coefficients below the first pivot. Next, the second column is cleared out below the second pivot; and so on. Finally, the last equation contains only the last unknown. Back-substitution yields the answer in the opposite order, beginning with the last unknown, then solving for the next to last, and eventually for the first.
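To make the forward and backward passes concrete, here is a minimal sketch of the algorithm in Python with NumPy, applied to system (1). It is an illustration added alongside the text (the book's own codes, in Fortran, are in Appendix C), and it assumes, as the text does for the present, that every pivot turns out to be nonzero.

    import numpy as np

    def solve_by_elimination(A, b):
        """Forward elimination to a triangular system, then back-substitution.
        Assumes every pivot is nonzero, so no row exchanges are needed."""
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        # Forward elimination: clear out each column below its pivot.
        for k in range(n - 1):
            for i in range(k + 1, n):
                l = A[i, k] / A[k, k]        # multiple of the pivotal equation
                A[i, k:] -= l * A[k, k:]     # multiply-subtract along the row
                b[i] -= l * b[k]
        # Back-substitution, beginning with the last unknown.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2, 1, 1], [4, 1, 0], [-2, 2, 1]])
    b = np.array([1, -2, 7])
    print(solve_by_elimination(A, b))    # [-1.  2.  1.], that is, u = -1, v = 2, w = 1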
EXERCISE 1.2.1  Apply elimination and back-substitution to solve

     2u - 3v      = 3
     4u - 5v +  w = 7
     2u -  v - 3w = 5.

What are the pivots? List the three operations in which a multiple of one row is subtracted from another.

EXERCISE 1.2.2  Solve the system

     2u -  v           = 0
     -u + 2v -  w      = 0
         - v + 2w -  z = 0
             -  w + 2z = 5.
We want to ask two questions. They may seem a little premature - after all, we have barely got the algorithm working - but their answers will shed more light on the method itself. The first question is whether this elimination procedure always leads to the solution. Under what circumstances could the process break down? The answer is: If none of the pivots are zero, there is only one solution to the problem and it is found by forward elimination and back-substitution. But if any of the pivots happens to be zero, the elimination technique has to stop, either temporarily or permanently.

If the first pivot were zero, for example, the elimination of u from the other equations would be impossible. The same is true at every intermediate stage. Notice that an intermediate pivot may become zero during the elimination process (as in Exercise 1.2.3 below) even though in the original system the coefficient in that place was not zero. Roughly speaking, we do not know whether the pivots are nonzero until we try, by actually going through the elimination process.

In most cases this problem of a zero pivot can be cured, and elimination can proceed to find the unique solution to the problem. In other cases, a breakdown is unavoidable since the equations have either no solution or infinitely many.

For the present, we trust all the pivots to be nonzero.
The second question is very practical, in fact it is financial. How many separate arithmetical operations does elimination require for a system of n equations in n unknowns? If n is large, a computer is going to take our place in carrying out the elimination (you may have such a program available, or you could use the Fortran codes in Appendix C). Since all the steps are known, we should be able to predict the number of operations a computer will take. For the moment, we ignore the right-hand sides of the equations, and count only the operations on the left. These operations are of two kinds. One is a division by the pivot in order to find out what multiple (say l) of the pivotal equation is to be subtracted from an equation below it. Then when we actually do this subtraction of one equation from another, we continually meet a "multiply-subtract" combination; the terms in the pivotal equation are multiplied by l, and then subtracted from the equation beneath it.

Suppose we agree to call each division, and each multiplication-subtraction, a single
operation. At the beginning, when the first equation has length n, it takes n operations for every zero we achieve in the first column - one to find the multiple l, and the others to find the new entries along the row. There are n - 1 rows underneath the first one, so the first stage of elimination needs n(n - 1) = n² - n operations. (Another approach to n² - n is this: All n² entries need to be changed, except the n in the first row.) Now notice that later stages are faster because the equations are becoming progressively shorter; when the elimination is down to k equations, only k² - k operations are needed to clear out the column below the pivot - by the same reasoning that applied to the first stage, when k equaled n. Altogether, the total number of operations on the left side of the equations is

     P = (1² + ··· + n²) - (1 + ··· + n) = n(n + 1)(2n + 1)/6 - n(n + 1)/2 = (n³ - n)/3.

If n is at all large, a good estimate for the number of operations is P = n³/3.

Back-substitution is considerably faster. The last unknown is found in one operation (a division by the last pivot), the second to last unknown requires two, and so on. The total for back-substitution is

     Q = 1 + 2 + ··· + n = n(n + 1)/2, or approximately n²/2.
A few years ago, almost every mathematician would have guessed that these numbers were essentially optimal, in other words that a general system of order n could not be solved with much fewer than n³/3 multiplications. (There were even theorems to demonstrate it, but they did not allow for all possible methods.) Astonishingly, that guess has been proved wrong, and there now exists a method that requires only Cn^(log₂ 7) operations! Fortunately for elimination, the constant C is by comparison so large, and so many more additions are required, and the computer programming is so awkward, that the new method is largely of theoretical interest. It seems to be completely unknown whether the exponent can be made any smaller.†

† Added in the second edition: The exponent is coming down, thanks to Victor Pan at IBM and others. It is still above 2.5.
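The counts P and Q are easy to check by brute force. The short sketch below, an added Python illustration rather than anything from the text, tallies one operation per division and one per multiply-subtract, following the conventions above, and compares the totals with (n³ - n)/3 and n(n + 1)/2.

    def elimination_count(n):
        """Left-side operations for forward elimination: when k equations remain,
        each of the k - 1 rows below the pivot costs k operations."""
        return sum(k * (k - 1) for k in range(n, 1, -1))

    def back_substitution_count(n):
        """The last unknown takes 1 operation, the next to last takes 2, and so on."""
        return sum(range(1, n + 1))

    for n in (3, 10, 100):
        P, Q = elimination_count(n), back_substitution_count(n)
        assert P == (n**3 - n) // 3 and Q == n * (n + 1) // 2
        print(n, P, Q)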
EXERCISE 1.2.3  Apply elimination to the system

     u +  v + w = -2
    3u + 3v - w =  6
     u -  v + w = -1.

When a zero pivot arises, exchange that equation for the one below it, and proceed. What coefficient of v in the third equation, in place of the present -1, would make it impossible to proceed - and force the elimination method to break down?

EXERCISE 1.2.4  Solve by elimination the system of two equations

     x -  y =  0
    3x + 6y = 18.
Draw a graph representing each equation as a straight line in the x-y plane; the lines intersect at the solution. Also, add one more line - the graph of the new second equation which arises after elimination.
EXERCISE 1.2.5  With reasonable assumptions on computer speed and cost, how large a system can be solved for $1, and for $1000? Use n³/3 as the operation count, and you might pay $1000 an hour for a computer that could average a million operations a second.

EXERCISE 1.2.6  (very optional) Normally the multiplication of two complex numbers

     (a + ib)(c + id) = (ac - bd) + i(bc + ad)

involves the four separate multiplications ac, bd, bc, ad. Ignoring i, can you compute the quantities ac - bd and bc + ad with only three multiplications? (You may do additions, such as forming a + b before multiplying, without any penalty.)
EXERCISE 1.2.7  Use elimination to solve

     u +  v +  w =  6
     u + 2v + 2w = 11
    2u + 3v - 4w =  3.

To get some experience in setting up linear equations, suppose that

(a) of those who start a year in California, 80 percent stay in and 20 percent move out;
(b) of those who start a year outside California, 90 percent stay out and 10 percent move in.

If we know the situation at the beginning, say 200 million outside and 30 million in, then it is easy to find the numbers u and v that are outside and inside at the end:

     .9(200,000,000) + .2(30,000,000) = u
     .1(200,000,000) + .8(30,000,000) = v.

The real problem is to go backwards, and compute the start from the finish.

EXERCISE 1.2.8  If u = 200 million and v = 30 million at the end, set up (without solving) the equations to find u and v at the beginning.

EXERCISE 1.2.9  If u and v at the end are the same as u and v at the beginning, what equations do you get? What is the ratio of u to v in this "steady state"?
1.3 • MATRIX NOTATION AND MATRIX MULTIPLICATION

So far, with our 3 by 3 example, we have been able to write out all the equations in full. We could even list in detail each of the elimination steps, subtracting a multiple of one row from another, which puts the system of equations into a simpler form. For a large system, however, this way of keeping track of the elimination would be hopeless; a much more concise record is needed. We shall now introduce matrix notation to
describe the original system of equations, and matrix multiplication to describe the operations that make it simpler.

Notice that in our example

     2u +  v + w =  1
     4u +  v     = -2
    -2u + 2v + w =  7

three different types of quantities appear. There are the unknowns u, v, w; there are the right sides 1, -2, 7; and finally, there is a set of nine numerical coefficients on the left side (one of which happens to be zero). For the column of numbers on the right side - the inhomogeneous terms in the equations - we introduce the vector notation

         [ 1]
     b = [-2].
         [ 7]

This is a three-dimensional column vector. It is represented geometrically in Fig. 1.1, where the three components 1, -2, and 7 are the coordinates of a point in three-dimensional space. Any vector b can be identified in this way with a point in space; there is a perfect match between points and vectors.†

Fig. 1.1. A vector in three-dimensional space.

† Some authors prefer to say that the arrow is really the vector, but I think it doesn't matter; you can choose the arrow, the point, or the three numbers. (Note that the arrow starts at the origin.) In six dimensions it is probably easiest to choose the six numbers.
The basic operations are the addition of two such vectors and the multiplication of a vector by a scalar. Geometrically, 2b is a vector in the same direction as b but twice as
long, and -2b goes in the opposite direction:

          [ 2]             [ -2]
     2b = [-4],      -2b = [  4].
          [14]             [-14]

Addition of vectors is also carried out on each component separately; in Fig. 1.1 we have

     [1]   [ 0]   [0]   [ 1]
     [0] + [-2] + [0] = [-2].
     [0]   [ 0]   [7]   [ 7]

This example seems special (the vectors are in the coordinate directions), so we give a more typical addition in Fig. 1.2. Once again, the addition is done a component at a time, and geometrically this produces the famous parallelogram.

Fig. 1.2. Vector addition and scalar multiplication.

Two vectors can be added only if they have the same dimension, that is, the same
number of components. (Note that c is three-dimensional, even though its first component is zero.)

The three unknowns in the equation are also represented by a vector:

                           [u]                            [-1]
     the unknown is  x =  [v];      the solution is  x =  [ 2].
                           [w]                            [ 1]
The Coefficient Matrix

We go back to the nine coefficients on the left side of our equations. They fall into three rows and three columns, which means that they form a three by three matrix, called the coefficient matrix:

         [ 2  1  1]
     A = [ 4  1  0].
         [-2  2  1]

A is a square matrix, because the number of equations agrees with the number of unknowns. More generally, if there are n equations in n unknowns, we have a square coefficient matrix of order n, with n rows and n columns; or, still more generally, we might have m equations and n unknowns. In this case the matrix is rectangular, with m rows and n columns. In other words, it will be an "m by n matrix."

Matrices are added to each other, or multiplied by numerical constants, exactly as vectors are - one component at a time. In fact we may regard vectors as special cases of matrices; they are matrices with only one column. As before, two matrices can be added only if they have the same shape:
     [1  2]   [ 2  1]   [3  3]            [1  2]   [2  4]
     [3  0] + [-3  1] = [0  1],         2 [3  0] = [6  0].
     [0  4]   [ 1  2]   [1  6]            [0  4]   [0  8]
Multiplication of a Matrix and a Vector

Now we put this notation to use. We propose to rewrite the system (1) of three equations in three unknowns in the simplified matrix form Ax = b. Written out in full, this form is

     [ 2  1  1] [u]   [ 1]
     [ 4  1  0] [v] = [-2].
     [-2  2  1] [w]   [ 7]
The right side is clear enough; it is the column vector of inhomogeneous terms. The left side consists of the vector x, premultiplied by the matrix A. Obviously, this multiplication will be defined exactly so as to reproduce the original system (1). Therefore, the first component of the product Ax must come from "multiplying" the first row of A into the column vector x:

                [u]
     [2  1  1]  [v] = [2u + v + w].                              (4)
                [w]

This equals the first component of b; 2u + v + w = 1 is the first equation in our system. The second component of the product Ax is determined by the second row of A - it is 4u + v - and the third component -2u + 2v + w comes from the third row. Thus the matrix equation Ax = b is precisely equivalent to the three simultaneous equations with which we started.

The operation in Eq. (4) is fundamental to all matrix multiplications. It starts with a row vector and a column vector of matching lengths, and it produces a single number. This single quantity is called the inner product of the two vectors. In other words, the product of a 1 by n matrix, which is a row vector, and an n by 1 matrix, alias a column vector, is a 1 by 1 matrix:

                [3]
     [2  4  5]  [1] = [2·3 + 4·1 + 5·0] = [10].
                [0]
If we multiply a whole matrix and a vector, there will be one of these inner products for every row. Suppose we stay with a 3 by 3 example:

          [2  4  5] [3]   [2·3 + 4·1 + 5·0]   [10]
     Ax = [2  6  8] [1] = [2·3 + 6·1 + 8·0] = [12].              (5)
          [1  0  9] [0]   [1·3 + 0·1 + 9·0]   [ 3]

This calculation reveals something important: The product Ax is actually a combination of the three columns of A. In other words, Ax can be found all at once:

            [2]     [4]     [5]   [10]
     Ax = 3 [2] + 1 [6] + 0 [8] = [12].                          (6)
            [1]     [0]     [9]   [ 3]

This rule will be used over and over throughout the book; therefore we repeat it for emphasis: The product Ax can be found from the individual entries as in (5), or it can be computed by using whole columns as in (6). Ax is a combination of the columns of A, and each column is weighted by a component of x.
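Both descriptions of Ax are easy to try numerically. The sketch below is an added Python/NumPy illustration, not part of the text; it computes the product in (5) row by row, as inner products, and again as a combination of the columns of A weighted by the components of x, and the two answers agree.

    import numpy as np

    A = np.array([[2, 4, 5],
                  [2, 6, 8],
                  [1, 0, 9]])
    x = np.array([3, 1, 0])

    # One inner product for every row, as in (5).
    by_rows = np.array([A[i, :] @ x for i in range(3)])

    # A combination of the columns of A, weighted by the components of x, as in (6).
    by_columns = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

    print(by_rows)       # [10 12  3]
    print(by_columns)    # [10 12  3]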
EXERCISE 1.3.1  Compute the products

     [; I ;] [;]    and    [~ : ;] [-~]    and    [~ ~] [:]

Draw a pair of perpendicular axes and mark off the vectors to the points x = 2, y = 1 and x = 0, y = 3. Add the two vectors by completing the parallelogram.

EXERCISE 1.3.2  Working a column at a time, compute the products

EXERCISE 1.3.3  Find the inner products

     [1 ...]    and    [1 ...]

The first answer gives the length of the vector (squared) - it is the square of the hypotenuse in three-dimensional space.

EXERCISE 1.3.4  If an m by n matrix A multiplies an n-dimensi
