Find Bases That Give an Upper Triangular Matrix: Examples
Fred E. Szabo PhD , in The Linear Algebra Survival Guide, 2015
Manipulation
Exploring upper-triangular matrices
Manipulate[MatrixForm[UpperTriangularize[{{1, 2 a, 3}, {4, 5, 6 b}, {7, 8 c, 9}}]], {a, -2, 2, 1}, {b, -3, 3, 1}, {c, -5, 5, 1}]
We use Manipulate, MatrixForm, and UpperTriangularize to construct and explore upper-triangular matrices. If we let a = 1, b = 2, and c = 3, then the UpperTriangularize function converts the matrix
MatrixForm[{{1, 2, 3}, {4, 5, 12}, {7, 24, 9}}]
to the upper-triangular matrix {{1, 2, 3}, {0, 5, 12}, {0, 0, 9}}.
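The same construction can be sketched in Python, where NumPy's `triu` plays the role of UpperTriangularize:

```python
import numpy as np

# With a = 1, b = 2, and c = 3, the matrix in the text becomes:
M = np.array([[1, 2, 3],
              [4, 5, 12],
              [7, 24, 9]])
U = np.triu(M)  # keep entries on and above the main diagonal, zero the rest
print(U)
```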
URL: https://www.sciencedirect.com/science/article/pii/B978012409520550028X
Determinants and Eigenvalues
Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fourth Edition), 2010
Highlights
- The determinant of an upper (or lower) triangular matrix is the product of the main diagonal entries.
- A row operation of type (I) involving multiplication by c multiplies the determinant by c.
- A row operation of type (II) has no effect on the determinant.
- A row operation of type (III) negates the determinant.
- If an n × n matrix A is multiplied by c to produce B, then |B| = c^n|A|.
- The determinant of a matrix can be found by row reducing the matrix to upper triangular form and keeping track of the row operations performed and their effects on the determinant.
- An n × n matrix A is nonsingular iff |A| ≠ 0 iff rank(A) = n.
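The first and fifth highlights can be checked numerically; a quick NumPy sketch:

```python
import numpy as np

# Determinant of a triangular matrix = product of the main diagonal entries
U = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 4.0]])
d = np.linalg.det(U)       # should equal 2 * 3 * 4 = 24
# Multiplying an n x n matrix by c scales the determinant by c**n
d3 = np.linalg.det(3 * U)  # should equal 3**3 * 24 = 648
```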
URL: https://www.sciencedirect.com/science/article/pii/B9780123747518000226
Algorithms
William Ford , in Numerical Linear Algebra with Applications, 2015
9.3 The Solution to Upper and Lower Triangular Systems
This section presents algorithms for solving upper- and lower-triangular systems of equations. Besides providing further algorithms for study, we will need both of these algorithms throughout this book.
An upper-triangular matrix is an n × n matrix whose entries below the main diagonal are all zero; in other words, u_ij = 0 whenever i > j.
If U is an n × n upper-triangular matrix, we know how to solve the linear system Ux = b using back substitution. In fact, this is the final step in the Gaussian elimination algorithm that we discussed in Chapter 2. Compute the value of x_n = b_n/u_nn, and then insert this value into equation (n − 1) to solve for x_{n−1}. Continue until you have found x_1. Algorithm 9.4 presents back substitution in pseudocode.
Algorithm 9.4
Solving an Upper Triangular System
function BACKSOLVE(U,b)
% Find the solution to Ux = b, where U is an n × n upper-triangular matrix.
    x(n) = b(n)/u(n,n)
    for i = n-1:-1:1 do
        sum = 0.0
        for j = i+1:n do
            sum = sum + u(i,j)*x(j)
        end for
        x(i) = (b(i) - sum)/u(i,i)
    end for
    return x
end function
NLALIB: The function backsolve implements Algorithm 9.4.
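A minimal Python/NumPy sketch of Algorithm 9.4 (the function name matches the NLALIB routine, but this is our own translation, with the inner loop written as a dot product):

```python
import numpy as np

def backsolve(U, b):
    """Back substitution for Ux = b, U an n x n upper-triangular matrix."""
    n = len(b)
    x = np.zeros(n)
    x[n - 1] = b[n - 1] / U[n - 1, n - 1]
    for i in range(n - 2, -1, -1):
        s = U[i, i + 1:] @ x[i + 1:]   # sum of u_ij * x_j for j = i+1, ..., n
        x[i] = (b[i] - s) / U[i, i]
    return x
```

With the U and b of Example 9.8 below, `backsolve` returns `[20.5, 13.5, -2.0]`, matching the MATLAB session.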
A lower-triangular matrix is a matrix all of whose elements above the main diagonal are 0; in other words
A lower-triangular system is one with a lower-triangular coefficient matrix.
The solution to a lower-triangular system is just the reverse of the algorithm for solving an upper-triangular system: use forward substitution. Solve the first equation for x_1, insert this value into the second equation to find x_2, and so forth.
Example 9.7
Solve
SOLUTION:
Algorithm 9.5
Solving a Lower Triangular System
function FORSOLVE(L,b)
% Find the solution to the system Lx = b, where L is an n × n lower-triangular matrix.
    x(1) = b(1)/l(1,1)
    for i = 2:n do
        sum = 0.0
        for j = 1:i-1 do
            sum = sum + l(i,j)*x(j)
        end for
        x(i) = (b(i) - sum)/l(i,i)
    end for
    return x
end function
NLALIB: The function forsolve implements Algorithm 9.5.
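The companion sketch of Algorithm 9.5 in Python/NumPy (again our own translation of the NLALIB routine):

```python
import numpy as np

def forsolve(L, b):
    """Forward substitution for Lx = b, L an n x n lower-triangular matrix."""
    n = len(b)
    x = np.zeros(n)
    x[0] = b[0] / L[0, 0]
    for i in range(1, n):
        s = L[i, :i] @ x[:i]   # sum of l_ij * x_j for j = 1, ..., i-1
        x[i] = (b[i] - s) / L[i, i]
    return x
```

With the L and b of Example 9.8 below, `forsolve` returns `[1.0, 5.0, -5.0]`, matching the MATLAB session.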
Example 9.8
Solve the systems
and
>> U = [1 -1 3;0 2 9;0 0 1];
>> L = [1 0 0;-1 2 0;3 4 5];
>> b = [1 9 -2]';
>> x = backsolve(U,b)
x =
20.5000
13.5000
-2.0000
>> U\b
ans =
20.5000
13.5000
-2.0000
>> y = forsolve(L,b)
y =
1
5
-5
>> L\b
ans =
1
5
-5
9.3.1 Efficiency Analysis
Algorithm 9.4 executes 1 division and then begins an outer loop having n − 1 iterations. The inner loop executes n − (i + 1) + 1 = n − i times, and each loop iteration performs 1 addition and 1 multiplication, for a total of 2(n − i) flops. After the inner loop finishes, 1 subtraction and 1 division execute. The total number of flops required is
1 + Σ_{i=1}^{n−1} [2(n − i) + 2] = n² + n − 1.
Thus, back substitution is an O(n²) (quadratic) algorithm. It is left as an exercise to show that Algorithm 9.5 has exactly the same flop count.
URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000090
STABILITY, INERTIA, AND ROBUST STABILITY
BISWA NATH DATTA , in Numerical Methods for Linear Control Systems, 2004
Computing the Inertia of a Symmetric Matrix
If A is symmetric, then Sylvester's law of inertia provides an inexpensive and numerically effective method for computing its inertia.
A symmetric matrix A admits a triangular factorization:
where U is a product of elementary unit upper triangular and permutation matrices, and D is a symmetric block diagonal matrix with blocks of order 1 or 2. This is known as diagonal pivoting factorization. Thus, by Sylvester's law of inertia, In(A) = In(D). Once this diagonal pivoting factorization is obtained, the inertia of the symmetric matrix A can be obtained from the entries of D as follows:
Let D have p blocks of order 1 and q blocks of order 2, with p + 2q = n. Assume that none of the 2 × 2 blocks of D is singular. Suppose that out of the p blocks of order 1, p′ are positive, p″ are negative, and p‴ are zero (i.e., p′ + p″ + p‴ = p). Then In(A) = (p′ + q, p″ + q, p‴), since each nonsingular 2 × 2 block contributes one positive and one negative eigenvalue.
The diagonal pivoting factorization can be achieved in a numerically stable way. It requires only n³/3 flops. For details of the diagonal pivoting factorization, see Bunch (1971), Bunch and Parlett (1971), and Bunch and Kaufman (1977).
LAPACK implementation: The diagonal pivoting method has been implemented in the LAPACK routine SSYTRF.
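As a sketch (not the LAPACK routine itself), SciPy's `ldl` computes a factorization of this kind via the same LAPACK xSYTRF family; for brevity, the inertia of the block diagonal factor D is read off from its eigenvalues rather than by inspecting the 1 × 1 and 2 × 2 blocks directly:

```python
import numpy as np
from scipy.linalg import ldl

def inertia(A, tol=1e-12):
    """Inertia (n_+, n_-, n_0) of a symmetric matrix A via the diagonal
    pivoting factorization A = L D L^T; by Sylvester's law, In(A) = In(D)."""
    _, D, _ = ldl(A)
    w = np.linalg.eigvalsh(D)  # D is block diagonal, so this is cheap
    return (int((w > tol).sum()),
            int((w < -tol).sum()),
            int((np.abs(w) <= tol).sum()))
```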
URL: https://www.sciencedirect.com/science/article/pii/B9780122035906500112
The Inverse
Richard Bronson , Gabriel B. Costa , in Matrix Methods (Third Edition), 2009
3.2 Calculating Inverses
In Section 2.3, we developed a method for transforming any matrix into row-reduced form using elementary row operations. If we now restrict our attention to square matrices, we may say that the resulting row-reduced matrices are upper triangular matrices having either a unity or zero element in each entry on the main diagonal. This provides a simple test for determining which matrices have inverses.
Theorem 1 A square matrix has an inverse if and only if reduction to row-reduced form by elementary row operations results in a matrix having all unity elements on the main diagonal.
We shall prove this theorem in the Final Comments to this chapter as
Theorem 2 An n × n matrix has an inverse if and only if it has rank n.
Theorem 1 not only provides a test for determining when a matrix is invertible, but it also suggests a technique for obtaining the inverse when it exists. Once a matrix has been transformed to a row-reduced matrix with unity elements on the main diagonal, it is a simple matter to reduce it still further to the identity matrix. This is done by applying elementary row operation (E3)–adding to one row of a matrix a scalar times another row of the same matrix–to each column of the matrix, beginning with the last column and moving sequentially toward the first column, placing zeros in all positions above the diagonal elements.
Example 1 Use elementary row operations to transform the upper triangular matrix
to the identity matrix.
Solution
To summarize, we now know that a square matrix A has an inverse if and only if it can be transformed into the identity matrix by elementary row operations. Moreover, it follows from the previous section that each elementary row operation is represented by an elementary matrix E that generates the row operation under the multiplication EA. Therefore, A has an inverse if and only if there exists a sequence of elementary matrices E_1, E_2, …, E_k such that E_k ⋯ E_2E_1A = I.
But, if we denote the product of these elementary matrices as B, we then have BA = I, which implies that B = A^{−1}. That is, the inverse of a square matrix A of full rank is the product of those elementary matrices that reduce A to the identity matrix! Thus, to calculate the inverse of A, we need only keep a record of the elementary row operations, or equivalently the elementary matrices, that were used to reduce A to I. This is accomplished by simultaneously applying the same elementary row operations to both A and an identity matrix of the same order, because if
then
We have, therefore, the following procedure for calculating inverses when they exist. Let A be the n × n matrix we wish to invert. Place next to it another n × n matrix B which is initially the identity. Using elementary row operations on A, transform it into the identity. Each time an operation is performed on A, repeat the exact same operation on B. After A is transformed into the identity, the matrix obtained from transforming B will be A −1.
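This procedure can be sketched in Python/NumPy as follows; the identity matrix rides along in an augmented array [A | I], a row interchange is included so a zero pivot does not stop the reduction prematurely, and the function name `invert` is ours:

```python
import numpy as np

def invert(A):
    """Invert A by applying the same elementary row operations to A and to I."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # the augmented matrix [A | I]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))      # find a usable pivot row
        if np.isclose(M[p, k], 0.0):
            raise ValueError("matrix is singular; no inverse exists")
        M[[k, p]] = M[[p, k]]                    # row interchange
        M[k] /= M[k, k]                          # unity element on the diagonal
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]           # zero the rest of column k
    return M[:, n:]                              # M is now [I | A^{-1}]
```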
If A cannot be transformed into an identity matrix, which is equivalent to saying that its row-reduced form contains at least one zero row, then A does not have an inverse.
Example 2 Invert
Solution
A has been transformed into row-reduced form with a main diagonal of only unity elements; it has an inverse. Continuing with the transformation process, we get
Thus,
Example 3 Find the inverse of
Solution
A has been transformed into row-reduced form with a main diagonal of only unity elements; it has an inverse. Continuing with the transformation process, we get
Thus,
Example 4 Find the inverse of
Solution
Thus,
Example 5 Invert
Solution
A has been transformed into row-reduced form. Since the main diagonal contains a zero element, here in the (2, 2) position, the matrix A does not have an inverse. It is singular.
Problems 3.2
In Problems 1−20, find the inverses of the given matrices, if they exist.
- 1.
-
- 2.
-
- 3.
-
- 4.
-
- 5.
-
- 6.
-
- 7.
-
- 8.
-
- 9.
-
- 10.
-
- 11.
-
- 12.
-
- 13.
-
- 14.
-
- 15.
-
- 16.
-
- 17.
-
- 18.
-
- 19.
-
- 20.
-
- 21.
-
Use the results of Problems 11 and 20 to deduce a theorem involving inverses of lower triangular matrices.
- 22.
-
Use the results of Problems 12 and 19 to deduce a theorem involving the inverses of upper triangular matrices.
- 23.
-
Matrix inversion can be used to encode and decode sensitive messages for transmission. Initially, each letter in the alphabet is assigned a unique positive integer, with the simplest correspondence being
Zeros are used to separate words. Thus, the message
is encoded
This scheme is too easy to decipher, however, so a scrambling effect is added prior to transmission. One scheme is to package the coded string as a set of 2-tuples, multiply each 2-tuple by a 2 × 2 invertible matrix, and then transmit the new string. For example, using the matrix
the coded message above would be scrambled into
and the scrambled message becomes
Note an immediate benefit from the scrambling: the letter S, which was originally always coded as 19 in each of its three occurrences, is now coded as a 35 the first time and as 75 the second time. Continue with the scrambling, and determine the final code for transmitting the above message.
- 24.
-
Scramble the message SHE IS A SEER using the matrix
- 25.
-
Scramble the message AARON IS A NAME using the matrix and steps described in Problem 23.
- 26.
-
Transmitted messages are unscrambled by again packaging the received message into 2-tuples and multiplying each vector by the inverse of A. To decode the scrambled message
using the encoding scheme described in Problem 23, we first calculate
and then
The unscrambled message is
which, according to the letter-integer correspondence given in Problem 23, translates to HELP. Using the same procedure, decode the scrambled message
- 27.
-
Use the decoding procedure described in Problem 26, but with the matrix A given in Problem 24, to decipher the transmitted message
- 28.
-
Scramble the message SHE IS A SEER by packaging the coded letters into 3-tuples and then multiplying by the 3 × 3 invertible matrix
Add as many zeros as necessary to the end of the message to generate complete 3-tuples.
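The scrambling scheme of Problems 23–28 can be sketched in Python. The encoding matrices used in the problems are not shown in this excerpt, so the 2 × 2 matrix `A` below is purely hypothetical, and each 2-tuple is treated as a column vector multiplied on the left by `A`:

```python
import numpy as np

def to_numbers(message):
    """Letter -> integer (A=1, ..., Z=26), with zeros separating words."""
    nums = []
    for word in message.split():
        nums.extend(ord(ch) - ord('A') + 1 for ch in word)
        nums.append(0)          # zero separates words
    return nums[:-1]            # drop the trailing separator

def scramble(message, A):
    """Pack the coded string into 2-tuples and multiply each by A."""
    nums = to_numbers(message)
    if len(nums) % 2:
        nums.append(0)          # pad so the string packs into complete 2-tuples
    pairs = np.array(nums).reshape(-1, 2)
    return (pairs @ A.T).ravel().tolist()   # each row is A @ (2-tuple)
```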
URL: https://www.sciencedirect.com/science/article/pii/B9780080922256500097
SOME FUNDAMENTAL TOOLS AND CONCEPTS FROM NUMERICAL LINEAR ALGEBRA
BISWA NATH DATTA , in Numerical Methods for Linear Control Systems, 2004
LU Factorization from Gaussian Elimination with Partial Pivoting
Since the interchange of two rows of a matrix is equivalent to premultiplying the matrix by a permutation matrix, the matrix A^{(k)} is related to A^{(k−1)} by the relation A^{(k)} = M_kP_kA^{(k−1)}, where P_k is the permutation matrix obtained by interchanging rows k and r_k of the identity matrix, and M_k is an elementary lower triangular matrix resulting from the elimination process. So, A^{(n−1)} = M_{n−1}P_{n−1} ⋯ M_2P_2M_1P_1A.
Setting M = M_{n−1}P_{n−1}M_{n−2}P_{n−2} ⋯ M_2P_2M_1P_1, we have the following factorization of A: MA = A^{(n−1)}.
The above factorization can be written in the form PA = LU, where P = P_{n−1}P_{n−2} ⋯ P_2P_1, U = A^{(n−1)}, and the matrix L is a unit lower triangular matrix formed from the multipliers. For details, see Golub and Van Loan (1996, p. 99).
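A quick check of the PA = LU factorization with SciPy; note that `scipy.linalg.lu` uses the convention A = PLU, so the permutation matrix in the text's convention is its transpose:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
P, L, U = lu(A)   # SciPy's convention: A = P @ L @ U
# In the text's convention PA = LU, the permutation matrix is P.T:
assert np.allclose(P.T @ A, L @ U)
assert np.allclose(L, np.tril(L)) and np.allclose(U, np.triu(U))
assert np.allclose(np.diag(L), 1.0)   # L is unit lower triangular
```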
For n = 4, the reduction of A to the upper triangular matrix U can be schematically described as follows:
- 1.
-
- 2.
-
- 3.
-
The only difference between L here and the matrix L from Gaussian elimination without pivoting is that the multipliers in the kth column are now permuted according to the permutation matrix .
Thus, to construct L, again no explicit products or matrix inversions are needed. We illustrate this below.
Consider the case n = 4, and suppose P 2 interchanges rows 2 and 3, and P 3 interchanges rows 3 and 4.
The matrix L is then given by:
Example 3.4.1.
k = 1
- 1.
-
The pivot entry is 7: r_1 = 3.
- 2.
-
Interchange rows 3 and 1.
- 3.
-
Form the multipliers: .
- 4.
-
.
k = 2
- 1.
-
The pivot entry is .
- 2.
-
Interchange rows 2 and 3.
- 3.
-
Form the multiplier:
Form
Verify. .
Flop-count. Gaussian elimination with partial pivoting requires only flops. Furthermore, the process with partial pivoting requires at most O(n²) comparisons for identifying the pivots.
URL: https://www.sciencedirect.com/science/article/pii/B9780122035906500070
LINEAR STATE-SPACE MODELS AND SOLUTIONS OF THE STATE EQUATIONS
BISWA NATH DATTA , in Numerical Methods for Linear Control Systems, 2004
Algorithm 5.3.2.
The Schur Algorithm for e^A.
Input. A ∈ ℝ^{n×n}.
Output. e^A.
- Step 1.
-
Transform A to R, an upper triangular matrix, using the QR iteration algorithm (Chapter 4): A = PRP^T.
(Note that when the eigenvalues of A are all real, the real Schur form (RSF) is upper triangular.)
- Step 2.
-
Compute e^R = G = (g_ij):
For i = 1, …, n do
    g_ii = e^{r_ii}
End
For k = 1, 2, …, n − 1 do
    For i = 1, 2, …, n − k do
        Set j = i + k
    End
End
- Step 3.
-
Compute e^A = P e^R P^T.
Flop-count: Computation of e^R in Step 2 requires about 2n³/3 flops.
MATCONTROL note: Algorithm 5.3.2 has been implemented in MATCONTROL function expmschr.
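The recurrence for the off-diagonal entries of G in Step 2 is not reproduced in this excerpt; a sketch using the standard Parlett recurrence (which assumes the eigenvalues of A are distinct, so that t_ii − t_jj ≠ 0) and SciPy's complex Schur form might look like this. The function name `expm_schur` is ours, not MATCONTROL's:

```python
import numpy as np
from scipy.linalg import schur

def expm_schur(A):
    """e^A via the Schur form and the Parlett recurrence.
    Assumes the eigenvalues of A are distinct."""
    T, P = schur(np.asarray(A, dtype=complex), output='complex')  # A = P T P^H
    n = T.shape[0]
    G = np.zeros((n, n), dtype=complex)
    for i in range(n):
        G[i, i] = np.exp(T[i, i])          # g_ii = e^{t_ii}
    for k in range(1, n):                  # sweep superdiagonal by superdiagonal
        for i in range(n - k):
            j = i + k
            # Parlett recurrence, derived from GT = TG:
            s = T[i, j] * (G[i, i] - G[j, j])
            for m in range(i + 1, j):
                s += G[i, m] * T[m, j] - T[i, m] * G[m, j]
            G[i, j] = s / (T[i, i] - T[j, j])
    E = P @ G @ P.conj().T
    return E.real if np.isrealobj(A) else E
```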
URL: https://www.sciencedirect.com/science/article/pii/B9780122035906500094
Linear Equations and Eigensystems
George Lindfield , John Penny , in Numerical Methods (Fourth Edition), 2019
2.9 QR Decomposition
We have seen how a square matrix can be decomposed or factorized into the product of a lower and an upper triangular matrix by the use of elementary row operations. An alternative decomposition of A is into an upper triangular matrix and an orthogonal matrix if A is real, or into an upper triangular matrix and a unitary matrix if A is complex. This is called QR decomposition. Thus
where R is the upper triangular matrix and Q is the orthogonal, or the unitary, matrix. If Q is orthogonal, Q^T Q = I, and if Q is unitary, Q^H Q = I. The preceding properties are very useful.
There are several procedures which provide QR decomposition; here we present Householder's method. To decompose a real matrix, Householder's method begins by defining a matrix P thus:
(2.21) P = I − 2ww^T
where w is a column vector and P is a symmetric matrix. Provided w^T w = 1, P is also orthogonal. The orthogonality can easily be verified by expanding the product P^T P = P² as follows:
P² = (I − 2ww^T)(I − 2ww^T) = I − 4ww^T + 4w(w^T w)w^T = I
since w^T w = 1.
To decompose A into QR, we begin by forming the vector from the coefficients of the first column of A as follows:
where
By substituting for and in it can be verified that the necessary orthogonality condition, , is satisfied. Substituting into (2.21) we generate an orthogonal matrix .
The matrix is now created from the product . It can easily be verified that all elements in the first column of are zero except for the element on the leading diagonal which is equal to . Thus,
In the matrix , + indicates a non-zero element.
We now begin the second stage of the orthogonalization process by forming from the coefficients of the second column of thus:
where are the coefficients of A and
Then the orthogonal matrix is generated from
The matrix is then created from the product as follows:
Note that has zero elements in its first two columns except for the elements on and above the leading diagonal. We can continue this process n − 1 times until we obtain an upper triangular matrix R. Thus,
(2.22)
Note that since is orthogonal, the product ... is also orthogonal.
We wish to determine the orthogonal matrix Q such that . Thus or . Hence, from (2.22),
Apart from the signs associated with the columns of Q and the rows of R, the decomposition is unique. These signs are dependent on whether the positive or negative square root is taken in determining , , etc. Complete decomposition of the matrix requires multiplications and n square roots. To illustrate this procedure consider the decomposition of the matrix
Thus,
Using (2.21) we generate and hence thus:
Note that we have reduced the elements of the first column of below the leading diagonal to zero. We continue with the second stage thus:
Note that we have now reduced the first two columns of below the leading diagonal to zero. This completes the process to determine the upper triangular matrix R. Finally, we determine the orthogonal matrix Q as follows:
It is not necessary for the reader to carry out the preceding calculations since Matlab provides the function qr to carry out this decomposition. For example,
>> A = [4 -2 7;6 2 -3;3 4 4]
A =
4 -2 7
6 2 -3
3 4 4
>> [Q R] = qr(A)
Q =
-0.5121 0.6852 0.5179
-0.7682 -0.0958 -0.6330
-0.3841 -0.7220 0.5754
R =
-7.8102 -2.0486 -2.8168
0 -4.4501 2.1956
0 0 7.8259
Note that the matrices Q and R in the Matlab output are the negative of the hand calculations of Q and R above. This is not significant since their product is equal to A, and in the multiplication, the signs cancel.
One advantage of QR decomposition is that it can be applied to non-square matrices, decomposing an m × n matrix into an orthogonal matrix and an upper triangular matrix. Note that if , the decomposition is not unique.
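A minimal Python/NumPy sketch of Householder's method for real matrices; rather than forming each P_k = I − 2ww^T explicitly, it applies the reflector directly to the trailing rows of R and accumulates Q on the right:

```python
import numpy as np

def householder_qr(A):
    """QR decomposition of a real m x n matrix via Householder reflections."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k].copy()
        # Sign choice for the mirrored first entry avoids cancellation
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        v = x
        v[0] -= alpha
        nv = np.linalg.norm(v)
        if nv == 0.0:                 # column already reduced; nothing to do
            continue
        v /= nv                       # now v^T v = 1, so P_k = I - 2 v v^T
        # Apply P_k from the left to rows k..m of R
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])
        # Accumulate Q = P_1 P_2 ... by applying P_k from the right
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R
```

With the 3 × 3 matrix of the example above, `Q @ R` reproduces A and |r₁₁| = √(4² + 6² + 3²) = √61 ≈ 7.8102, matching the MATLAB output up to sign.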
URL: https://www.sciencedirect.com/science/article/pii/B9780128122563000117
Determinants and Eigenvalues
Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fifth Edition), 2016
Calculating the Determinant by Row Reduction
We will now illustrate how to use row operations to calculate the determinant of a given matrix A by finding an upper triangular matrix B that is row equivalent to A.
Example 4
Let
We row reduce A to upper triangular form, as follows, keeping track of the effect on the determinant at each step:
A more convenient method of calculating |A| is to create a variable P (for "product") with initial value 1, and update P appropriately as each row operation is performed: multiply P by c for a Type (I) operation, and by −1 for a Type (III) operation.
Of course, Type (II) row operations do not affect the determinant. Then, using the final value of P, we can solve for |A| using |B| = P|A|, where B is the upper triangular result of the row reduction process. This method is illustrated in the next example.
Example 5
Let us redo the calculation of |A| from Example 4. We create a variable P and initialize P to 1. Listed below are the row operations used in that example to convert A into upper triangular form B. After each operation, we update the value of P accordingly.
| Row Operation | Effect | P |
|---|---|---|
| (III): | Multiply P by −1 | −1 |
| (II): | No change | −1 |
| (I): | Multiply P by | |
| (II): | No change | |
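The bookkeeping with P can be sketched as follows. This sketch uses only Type (II) and Type (III) operations (so P stays ±1), then solves |B| = P|A| for |A| using the product of the diagonal entries of the triangular result B:

```python
import numpy as np

def det_by_row_reduction(A):
    """|A| via row reduction to upper triangular B, tracking P with |B| = P|A|."""
    B = A.astype(float).copy()
    n = B.shape[0]
    P = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(B[k:, k]))
        if B[p, k] == 0.0:
            return 0.0              # no nonzero pivot: |A| = 0
        if p != k:
            B[[k, p]] = B[[p, k]]   # Type (III): swap rows, negates the determinant
            P *= -1.0
        for i in range(k + 1, n):
            # Type (II): add a multiple of row k to row i; no effect on the determinant
            B[i, :] -= (B[i, k] / B[k, k]) * B[k, :]
    # B is upper triangular, so |B| is the product of its diagonal entries
    return np.prod(np.diag(B)) / P  # solve |B| = P|A| for |A|
```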
URL: https://www.sciencedirect.com/science/article/pii/B9780128008539000037
Additional Applications
Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fifth Edition), 2016
Exercises for Section 8.10
- 1.
-
In each part of this exercise, a quadratic form Q: ℝⁿ → ℝ is given. Find an upper triangular matrix C and a symmetric matrix A such that, for every x ∈ ℝⁿ, Q(x) = x^T Cx = x^T Ax.
- ⋆(a)
-
Q([x, y]) = 8x² − 9y² + 12xy
- (b)
-
Q([x, y]) = 7x² + 11y² − 17xy
- ⋆(c)
-
- 2.
-
In each part of this exercise, use the Quadratic Form Method to diagonalize the given quadratic form Q: ℝⁿ → ℝ. Your answers should include the matrices A, P, and D defined in that method, as well as the orthonormal basis B. Finally, calculate Q(x) for the given vector x in the following two different ways: first, using the given formula for Q, and second, calculating [x]_B^T D[x]_B, where [x]_B = P^{−1}x and D = P^{−1}AP.
- ⋆(a)
-
Q([x, y]) = 43x² + 57y² − 48xy; x = [1, −8]
- (b)
-
x = [7,−2,1]
- ⋆(c)
-
x = [4,−3,6]
- (d)
-
x = [5,9,−3,−2]
- 3.
-
Let Q: ℝⁿ → ℝ be a quadratic form, and let A and B be symmetric matrices such that Q(x) = x^T Ax = x^T Bx. Prove that A = B (the uniqueness assertion from Theorem 8.14). (Hint: Use x = e_i to show that a_ii = b_ii. Then use x = e_i + e_j to prove that a_ij = b_ij when i ≠ j.)
- ⋆4.
-
Let Q: ℝⁿ → ℝ be a quadratic form. Is the upper triangular representation for Q necessarily unique? That is, if C_1 and C_2 are upper triangular n × n matrices with Q(x) = x^T C_1x = x^T C_2x for all x ∈ ℝⁿ, must C_1 = C_2? Prove your answer.
- 5.
-
A quadratic form Q(x) on ℝⁿ is positive definite if and only if both of the following conditions hold:
- (i)
-
Q(x) ≥ 0, for all x ∈ ℝⁿ.
- (ii)
-
Q(x) = 0 if and only if x = 0.
A quadratic form having only property (i) is said to be positive semidefinite.
Let Q be a quadratic form on ℝⁿ, and let A be the symmetric matrix such that Q(x) = x^T Ax.
- (a)
-
Prove that Q is positive definite if and only if every eigenvalue of A is positive.
- (b)
-
Prove that Q is positive semidefinite if and only if every eigenvalue of A is nonnegative.
- ⋆6.
-
True or False:
- (a)
-
If Q(x) = x^T Cx is a quadratic form, and A = (1/2)(C + C^T), then Q(x) = x^T Ax.
- (b)
-
Q(x,y) = xy is not a quadratic form because it has no x 2 or y 2 terms.
- (c)
-
If x^T Ax = x^T Bx for every x ∈ ℝⁿ, then A = B.
- (d)
-
Every quadratic form can be diagonalized.
- (e)
-
If A is a symmetric matrix and Q(x) = x^T Ax is a quadratic form that diagonalizes to D, then the main diagonal entries of D are the eigenvalues of A.
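The eigenvalue characterizations in Exercise 5 can be checked numerically; a small sketch (the helper name is ours):

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-12):
    """Classify Q(x) = x^T A x (A symmetric) by the signs of A's eigenvalues,
    as in Exercise 5: all positive -> positive definite; all nonnegative ->
    positive semidefinite; otherwise neither."""
    w = np.linalg.eigvalsh(A)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    return "neither"
```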
URL: https://www.sciencedirect.com/science/article/pii/B9780128008539000086
Source: https://www.sciencedirect.com/topics/mathematics/upper-triangular-matrix