MATLAB PROJECT 4
Please read the Instructions located on the Assignments page prior
to working on the Project.
BEGIN with creating a Live Script Project4.
Note: All exercises in this project have to be completed in the Live Script using the Live
Editor. Please refer to the MATLAB video that explains how to use the Live Script:
https://www.mathworks.com/videos/using-the-live-editor-117940.html?s_tid=srchtitle
The final script has to be generated by exporting the Live Script to PDF.
Each exercise has to begin with the line
Exercise#
You should also mark the parts such as (a), (b), (c), etc. This makes grading easier.
Important: we use the default format short for the numbers in all exercises unless it is
specified otherwise. We do not employ format rat since it may cause problems with
running the code and displaying matrices in the Live Script. If format long has been used
in an exercise, please make sure to return to the default format in the next exercise.
Part I. Evaluating Eigenvalues
Exercise 1 (4 points): Difficulty: Easy
In this exercise, you will calculate the eigenvalues of a matrix by using various methods and
functions – one of the functions, quer, was created earlier in Exercise 4 of Project 3, and the
other functions will be MATLAB built-in functions.
NOTE: The function quer from Project 3 can be used for approximating eigenvalues by
consecutive iterations, but the process does not converge for every matrix. For computing
eigenvalues in MATLAB, especially complex eigenvalues, it is advisable to use the MATLAB
built-in function eig().
**Create a function that begins with
function [q,e,c,S]=eigenval(A)
format
The input is a square matrix A. The outputs will be specified below when you proceed with
the programming.
**First Method: your function will calculate the eigenvalues by the QR factorization. Include
in your code for eigenval those lines of the function quer that deal with the case when m=n
and make the following adjustments:
(1) Increase the accuracy from p=7 to p=12 and output and display the number of iterations k
needed to achieve this accuracy:
fprintf('the number of iterations is %i\n',k)
(2) Output the vector of the “sorted” diagonal entries of the matrix B, which is the last
iteration in the sequence, and assign it to the vector q. Also, convert to zero the entries of the
vector q that are within 10^(-7) of zero. The outputs for this part should be:
q=sort(diag(B));
disp('The eigenvalues of A from QR factorization are:')
q=closetozeroroundoff(q,7)
Note: Function sort, applied to a real vector, re-arranges its entries in the ascending order.
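For reference, the adjusted iteration block of the First Method might look like the sketch below; the loop structure and variable names are assumptions based on the description of quer above, not the official code of that function.
B = A;                                   % start the sequence of iterates with A
k = 0;                                   % iteration counter
p = 12;                                  % accuracy increased from 7 to 12
while any(any(closetozeroroundoff(tril(B,-1), p)))   % subdiagonal entries not yet within 10^(-p) of zero
    [Q1, R1] = qr(B);                    % QR factorization of the current iterate
    B = R1*Q1;                           % next matrix in the sequence
    k = k + 1;
end
fprintf('the number of iterations is %i\n', k)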
**Second Method: you will calculate the eigenvalues of A using a built-in MATLAB function
eig(). The outputs for this part will be:
e=eig(A);
e=sort(e,'ascend','ComparisonMethod','real');
disp('The eigenvalues of A from a MATLAB function are:')
e=closetozeroroundoff(e,7)
**Third Method: there is a built-in MATLAB function poly(A) that outputs the vector of the
coefficients of the characteristic polynomial of A written in the descending order according to
the degree. Output the vector
S=poly(A);
and then convert to zero the entries of S that are within 10^(-7) of 0; assign this
vector to S again (do not display S).
Next, output the characteristic polynomial of A in a symbolic form as indicated below:
disp('the MATLAB characteristic polynomial of A is:')
Q=poly2sym(S)
Continue working with the vector S: use the MATLAB function roots(S) to output the
vector of zeros of the polynomial whose coefficients form the vector S. "Sort" this vector in
the ascending order with respect to the real part and assign it to c. Since c is the vector of
zeros of the characteristic polynomial, it is also the vector of the eigenvalues of A. Your
outputs for this part will be as below:
c=roots(S);
c=sort(c,'ascend','ComparisonMethod','real');
disp('The eigenvalues from the MATLAB characteristic polynomial are:')
c=closetozeroroundoff(c,7)
**Then, we will need to see how close the vectors q, e, and c are to each other.
Output and display the three 2-norms:
eq=norm(e-q)
cq=norm(c-q)
ec=norm(e-c)
**Next, we will use the vector of the eigenvalues e to build a polynomial whose zeros are the
entries of e, that is, to construct the characteristic polynomial of A. We will run a MATLAB
function poly(e) that returns the vector of the coefficients of the polynomial given the vector
e of its zeros. Your outputs here will be the characteristic polynomial R of the matrix A,
written in a symbolic form, and a message (see below):
disp('the constructed characteristic polynomial of A is:')
R=poly2sym(poly(e))
**Finally, you will check if the symbolic polynomials Q and R are the same: run a logical
comparison statement and, if Q and R match, code a message:
disp('Yes! Q and R are the same!')
If they don’t, a message could be something like:
disp('What?! Q and R do not match?')
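One possible form of this comparison (a sketch only, assuming Q and R are the symbolic polynomials defined above):
if isequal(Q, R)                         % true when the two symbolic expressions are identical
    disp('Yes! Q and R are the same!')
else
    disp('What?! Q and R do not match?')
end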
**Print the functions closetozeroroundoff, jord, and eigenval in your Live Script.
Note: function jord was created in Exercise 1 of Project 1.
**Run the function eigenval(A); as indicated below:
%(a)
A=[3 3;0 3]
[q,e,c,S]=eigenval(A);
%(b)
A=jord(5,4)
[q,e,c,S]=eigenval(A);
%(c)
A=ones(5);
[q,e,c,S]=eigenval(A);
%(d)
A=tril(magic(4))
[q,e,c,S]=eigenval(A);
%(e)
A=triu(magic(4),-1)
[q,e,c,S]=eigenval(A);
%(f)
A=[4 3 0 0; 7 10 3 0;0 3 1 5;0 0 6 7]
[q,e,c,S]=eigenval(A);
% Analyze the outputs eq, cq, ec after running the function eigenval on the matrices (a)-(f)
and write a comment on whether the QR factorization gives a good approximation of the
eigenvalues of these matrices.
BONUS! (2 points)
Insert Section Break and run the part below as a separate Section.
**Input the matrix and run the function:
A=[4 0 0 0; 1 3 0 0; 0 -1 3 0; 0 0 5 4]
[q,e,c,S]=eigenval(A);
(It may take some time for the outputs to be displayed – please be patient!)
The last output message should indicate that the polynomials Q and R are not equal. You will
need to find out how close S and poly(e) are to each other; these are the vectors of the
coefficients of the polynomials Q and R, respectively. Proceed as follows:
** Initialize
p=1;
and set up a “while” loop in your Live Script using the variable p and the function
closetozeroroundoff(S-poly(e),p)
to output the largest value of p (p is a positive integer) for which the function
closetozeroroundoff outputs the zero vector.
Code a message that will display the number of the matching decimals in the vectors of the
coefficients of polynomials Q and R.
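A minimal sketch of this bonus loop, assuming S and e are still in the workspace, might look like:
p = 1;
while ~any(closetozeroroundoff(S - poly(e), p))   % result is still the zero vector
    p = p + 1;                                    % try a tighter tolerance
end
p = p - 1;                                        % the largest p that produced the zero vector
fprintf('the vectors of coefficients of Q and R have %i matching decimals\n', p)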
Part II. Eigenvectors & Diagonalization
Exercise 2 (5 points) Difficulty: Hard
In this exercise, you will work with the eigenvalues of an n × n matrix A. First, you will find
all eigenvalues of A. Then, you will consider the distinct eigenvalues and find orthonormal
bases for the corresponding eigenspaces and the dimensions of the eigenspaces. Next, you will
check if A is diagonalizable by applying the general theory. If a matrix A is diagonalizable,
the output has to contain an invertible matrix P and a diagonal matrix D such that A = PDP^(-1),
or, equivalently, AP = PD and P is invertible. For diagonalizable matrices, you will run a
built-in MATLAB function, which also performs diagonalization, and you will compare its
outputs with the outputs P and D of your function.
**Create a function in MATLAB that begins with
function [L,P,D]=eigen(A)
format
[~,n]=size(A);
P=[];
D=[];
Your function [L,P,D]=eigen(A) will have a set of commands that generates the following
outputs for an n × n input matrix A.
Part 1. Output the vector L of all eigenvalues of A.
To do that, you will go through several steps. Please suppress all intermediate outputs and
display only the final output vector L.
**Output a row vector L of all eigenvalues, where each eigenvalue repeats as many times as
its multiplicity. A basic MATLAB command for this part
L=eig(A);
returns a column vector of all eigenvalues of A, and we use the function
L=transpose(L);
to get a row vector.
The input matrices have real eigenvalues; however, it is possible that the MATLAB command
eig() outputs them as complex numbers with negligibly small imaginary parts – to eliminate
this discrepancy, we will need to run the function
L=real(L);
on the vector L.
Then, we will “sort” the entries of L using the MATLAB command
L=sort(L);
To output vector L correctly, you will go through two more steps:
(1) In your code, you will need to ensure that the multiple eigenvalues show as the same
number. Use the function closetozeroroundoff with p = 7 to compare the eigenvalues and
assign the same value to the ones that are within 10^(-7) tolerance of each other. To perform
this task, you can go through the sequence of the sorted eigenvalues and, if there is a set of
eigenvalues that are within 10^(-7) of each other, assign the first eigenvalue encountered
to all others in that set.
Important Note: here we work with the eigenvalues that are stored in MATLAB – we do not
round off any eigenvalues on this step.
(2) A singular matrix A has a zero eigenvalue; however, when running your code, a zero
eigenvalue may not show as a zero due to round-off errors in MATLAB.
If a matrix A is not invertible, you will need to run the function closetozeroroundoff with
p = 7 on the vector of eigenvalues L to ensure that the zero eigenvalues show as a zero, and
assign the output to L again. To check whether a matrix A is invertible or not, please use a
MATLAB command rank().
After performing all of the tasks outlined above, you will display your final output L with a
message:
fprintf('all eigenvalues of A are\n')
display(L)
Please make sure that L is a row vector of the “sorted” real eigenvalues of A where all
multiple eigenvalues are equal to each other and the zero eigenvalues (for singular matrices)
show as a zero.
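As an illustration only, the computations of Part 1 could be sketched as follows; the loop that merges close eigenvalues is one possible approach, not the required one.
L = sort(real(transpose(eig(A))));       % row vector of sorted real eigenvalues
for i = 2:numel(L)
    if ~any(closetozeroroundoff(L(i) - L(i-1), 7))
        L(i) = L(i-1);                   % eigenvalues within 10^(-7) of each other are made equal
    end
end
if rank(A) < n                           % A is singular: force near-zero eigenvalues to show as zero
    L = closetozeroroundoff(L, 7);
end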
Part 2. Construct an orthonormal basis W for each eigenspace.
**Output and display with a message a row vector M of all distinct eigenvalues (no repetitions
are allowed). The MATLAB command unique() can be used here.
For each distinct eigenvalue M(i), your function has to do the following:
**Find the multiplicity, m(i), of the eigenvalue M(i). Output it with the message:
fprintf('eigenvalue %d has multiplicity %i\n',M(i),m(i))
**Output an orthonormal basis W for the eigenspace for an eigenvalue M(i). Display it with
the message:
fprintf('a basis for the eigenspace for lambda=%d is:\n',M(i))
display(W)
Note: to output an orthonormal basis for the null space of a matrix, use a MATLAB command
null().
**Calculate the dimension d(i) of W. Output it with the message:
fprintf('dimension of eigenspace for lambda = %d is %i\n',M(i),d(i))
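As an illustration, the core computations of this part might look like the sketch below; the loop over M and the names m, W, and d follow the description above, and the display commands are omitted.
for i = 1:numel(M)                       % loop over the distinct eigenvalues
    m(i) = sum(L == M(i));               % multiplicity within the merged vector L
    W = null(A - M(i)*eye(n));           % orthonormal basis for the eigenspace of M(i)
    d(i) = size(W, 2);                   % dimension of that eigenspace
    % ... display m(i), W, and d(i) with the messages specified above ...
end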
Part 3. Construct a Diagonalization when possible.
For help with this part, please review the material of Lecture 24.
**First, determine if A is diagonalizable: for each eigenvalue M(i), you will compare the
multiplicity, m(i), with the dimension of the corresponding eigenspace, d(i).
**If A is not diagonalizable, output a corresponding message and terminate the program – the
empty outputs for P and D will stay.
**If A is diagonalizable, output a corresponding message, and your function will continue
with constructing a diagonalization.
Output and display a matrix P constructed by combining the bases W for the eigenspaces.
Output and display the diagonal matrix D with the corresponding eigenvalues on its main
diagonal.
Verify that your diagonalization is correct, that is, both conditions hold: AP=PD and P is
invertible. To verify the first condition, you will need to use the function
closetozeroroundoff with p = 7. To verify the second condition, please use the command
rank().
If both conditions hold, output a message: 'Great! I got a diagonalization!'
Otherwise, output a message: 'Oops! I got a bug in my code!' and terminate the
program.
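One way these steps might be sketched is shown below; it assumes the vectors m and d and the distinct eigenvalues M from Part 2 are available and is an illustration, not the required code.
if any(m ~= d)                           % some eigenspace is smaller than the multiplicity
    disp('A is not diagonalizable')
    return                               % the empty outputs P and D stay
end
disp('A is diagonalizable')
for i = 1:numel(M)
    W = null(A - M(i)*eye(n));
    P = [P, W];                          % combine the eigenspace bases
    D = blkdiag(D, M(i)*eye(size(W,2))); % matching eigenvalues on the main diagonal
end
if ~any(any(closetozeroroundoff(A*P - P*D, 7))) && rank(P) == n
    disp('Great! I got a diagonalization!')
else
    disp('Oops! I got a bug in my code!')
    return
end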
Part 4. Compare your outputs with the outputs of a built-in MATLAB function.
**If the diagonalization is confirmed, your code will continue with the following task.
There is a MATLAB built-in function that runs as
[U,V]=eig(A)
which, for a diagonalizable matrix A, generates a diagonal matrix V, with the eigenvalues of
A on its main diagonal, and an n × n invertible matrix U of the corresponding eigenvectors,
such that AU=UV. Place this function in your code and display the outputs U and V.
**Compare the output matrix U with the matrix P: write a set of commands that checks for
the following case:
P and U have either the same columns or the columns match up to a scalar (-1), where
the order of the columns does not matter.
If it is the case, output a message 'Columns of P and U are the same or match up to
scalar (-1)'.
If it is not the case (which is possible), the output message could be 'There is no specific
match among the columns of P and U'
Note: You will need to use the function closetozeroroundoff with p = 7 when comparing
the columns of the matrices P and U.
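A hedged sketch of one possible column comparison (an assumption about the approach, not the required commands):
match = true;
for j = 1:n
    hit = false;
    for k = 1:n
        same = ~any(closetozeroroundoff(P(:,j) - U(:,k), 7));   % identical columns
        flip = ~any(closetozeroroundoff(P(:,j) + U(:,k), 7));   % columns match up to a scalar (-1)
        hit = hit || same || flip;
    end
    match = match && hit;
end
if match
    disp('Columns of P and U are the same or match up to scalar (-1)')
else
    disp('There is no specific match among the columns of P and U')
end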
**Verify that the matrices D and V have the same diagonal elements (the order does not count
either). If it is the case, output a message 'The diagonal elements of D and V match'. If
it is not the case, output something like: 'That cannot be true!'
Hint: To perform this task, you can “sort” the diagonal elements of V and compare them with
the vector L using the function closetozeroroundoff with p = 7.
This is the end of the function eigen.
BONUS! (1 point)
Theory: A matrix A is said to be orthogonally diagonalizable if there exists an orthogonal
matrix P (P^(-1) = P^T) and a diagonal matrix D such that A = PDP^(-1), or, equivalently,
A = PDP^T.
Theorem: An n × n matrix A is orthogonally diagonalizable if and only if A is symmetric.
(For more information please read Section 7.1 of the textbook.)
Add a set of commands to your function eigen for a diagonalizable matrix A that, first, will
check if A is symmetric.
**If A is not symmetric, output a message:
disp('diagonalizable matrix A is not orthogonally diagonalizable')
**If A is symmetric, output a message
disp('matrix A is symmetric')
and verify the orthogonal diagonalization: check if the matrix P is orthogonal (you will need
to use here the function closetozeroroundoff with p = 7).
If the orthogonal diagonalization is confirmed, output a message:
disp('the orthogonal diagonalization is confirmed')
otherwise, output something like:
disp('Wow! A symmetric matrix is not orthogonally diagonalizable?!')
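A minimal sketch of this bonus check, assuming the matrix P and the size n from the diagonalization part are available:
if ~isequal(A, A')                       % A is not symmetric
    disp('diagonalizable matrix A is not orthogonally diagonalizable')
else
    disp('matrix A is symmetric')
    if ~any(any(closetozeroroundoff(P'*P - eye(n), 7)))   % P is orthogonal
        disp('the orthogonal diagonalization is confirmed')
    else
        disp('Wow! A symmetric matrix is not orthogonally diagonalizable?!')
    end
end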
This is the end of your function eigen with the BONUS part if included.
**Print the functions eigen, closetozeroroundoff, jord in the Live Script.
Note: the function jord was created in Exercise 1 of Project 1.
**Run the function [L,P,D]=eigen(A); as indicated below:
%(a)
A=[3 3; 0 3]
[L,P,D]=eigen(A);
%(b)
A=[2 4 3;-4 -6 -3;3 3 1]
[L,P,D]=eigen(A);
%(c)
A=[4 0 1 0; 0 4 0 1; 1 0 4 0; 0 1 0 4]
[L,P,D]=eigen(A);
%(d)
A=jord(5,4);
[L,P,D]=eigen(A);
%(e)
A=ones(4)
[L,P,D]=eigen(A);
%(f)
A=[4 1 3 1;1 4 1 3;3 1 4 1;1 3 1 4]
[L,P,D]=eigen(A);
%(g)
A=[3 1 1;1 3 1;1 1 3]
[L,P,D]=eigen(A);
%(h)
A=magic(4)
[L,P,D]=eigen(A);
%(k)
A=magic(5)
[L,P,D]=eigen(A);
%(l)
A=hilb(5)
[L,P,D]=eigen(A);
Part III. Orthogonal Projections & Least-Squares Solutions
Exercise 3 (4 points) Difficulty: Moderate
In this exercise, you will create a function proj(A,b), which will work with the projection of a
vector b onto the column space of an m × n matrix A.
For help with this exercise, please refer to Lectures 28-31.
Theory: When the columns of A are linearly independent, a vector p is the orthogonal
projection of a vector b ∉ Col A onto Col A if and only if p = Ax̂, where x̂ is the unique
least-squares solution of Ax = b or, equivalently, the unique solution of the normal equations
(A^T A)x = A^T b.
Also, if p is the orthogonal projection of a vector b onto Col A, then there exists a unique
vector z such that z = b - p and z is orthogonal to Col A.
**Your program should allow a possibility that the columns of A are not linearly independent.
In order for the algorithm to work, we will use the function shrink() and generate a new
matrix, which will be assigned to A, whose columns form a basis for Col A. The function
shrink() was created in Exercise 2 of Project 3 and you should have it in your Current
Folder in MATLAB. If not, please see the code below:
function B=shrink(A)
[~,pivot]=rref(A);
B=A(:,pivot);
end
**Create your function which begins with the lines:
function [p,z]=proj(A,b)
format
A=shrink(A);
m=size(A,1);
p=[];
z=[];
The inputs are an m × n matrix A and a column vector b. Your outputs p and z will be either
empty (when the sizes of A and b do not match) or the projection of b onto the Col A and the
component of b orthogonal to the Col A, respectively.
**First, your function has to check whether the input vector b has exactly m entries, where m
is the number of rows in A. If it doesn’t, the program terminates with a message 'No
solution: sizes of A and b disagree', and the empty outputs for p and z will stay.
If b has exactly m entries, we proceed to the next step:
Determine whether it is the case that b ∈ Col A; use here the MATLAB function rank().
**If b ∈ Col A, output a message 'b is in Col A' and make assignments to the vectors p
and z based on the general theory (no computations are needed). Display the vectors p and z.
After that, the program terminates.
**If b ∉ Col A, your function will continue with the following tasks:
Determine whether it is the case that b is orthogonal to Col A. Due to the round-off errors in
MATLAB computations, you will need to use the function closetozeroroundoff with p = 7
to check if this condition holds. If b is orthogonal to Col A, output a message 'b is
orthogonal to Col A' and make assignments to the vectors p and z based on the general
theory (no computations are needed either). Display the vectors p and z. After that, the
program terminates.
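For illustration, the two quick cases above might be sketched like this; the assignments to p and z follow the general theory, and the exact commands are an assumption.
if rank(A) == rank([A b])                % b is in Col A
    disp('b is in Col A')
    p = b;                               % the projection of b is b itself
    z = zeros(m, 1);                     % no component of b is orthogonal to Col A
    display(p), display(z)
    return
elseif ~any(any(closetozeroroundoff(A'*b, 7)))   % b is orthogonal to every column of A
    disp('b is orthogonal to Col A')
    p = zeros(m, 1);                     % the projection of b is the zero vector
    z = b;
    display(p), display(z)
    return
end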
If b is not orthogonal to Col A, your function will continue:
**Find the least-squares solution x as the solution of the normal equations by using the
inverse matrix or the backslash operator, \ . Output it with a message:
fprintf('the least squares solution of inconsistent system Ax=b is')
display(x)
**Then, output a vector (do not display it)
x1=A\b;
and verify by using the function closetozeroroundoff with p=12 that the vectors x and x1
match within the given precision. If your code confirms it, display a message:
disp('A\b returns the least-squares solution of inconsistent system Ax=b')
**Next, output the projection p by using the least-squares solution x. Display p with the
message:
disp('the projection of b onto Col A is')
display(p)
**Then, output vector z, which is the component of b orthogonal to the Col A. Verify in your
code whether the vector z is, indeed, orthogonal to the Col A: use the function
closetozeroroundoff with p=7 here. If your code confirms it, display z with a message:
disp('the component of b orthogonal to Col A is')
display(z)
Otherwise, an output message could be:
disp('Oops! Is there a bug in my code?')
and the program terminates.
**Finally, use the vector z to compute the distance, d, from b to Col A. Output d with the
message:
fprintf('the distance from b to Col A is %i',d)
Hint: use here a MATLAB built-in function norm().
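One way the remaining computations might look (a sketch only; the assignment allows either the inverse matrix or the backslash operator for the normal equations, and the display commands are omitted):
x = (A'*A)\(A'*b);                       % least-squares solution of the normal equations
p = A*x;                                 % projection of b onto Col A
z = b - p;                               % component of b orthogonal to Col A
d = norm(z);                             % distance from b to Col A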
This is the end of your function proj.
**Print the functions closetozeroroundoff, shrink, proj in your Live Script.
**Run the function [p,z]=proj(A,b); as indicated below:
%(a)
A=magic(4), b=(1:4)'
[p,z]=proj(A,b);
%(b)
A=magic(6), b=A(:,6)
[p,z]=proj(A,b);
%(c)
A=magic(6), b=(1:5)'
[p,z]=proj(A,b);
%(d)
A=magic(5), b = rand(5,1)
[p,z]=proj(A,b);
%(e)
A=ones(4); A(:)=1:16, b=[1;0;1;0]
[p,z]=proj(A,b);
%(f)
B=ones(4); B(:)=1:16; A=null(B,'r'), b=ones(4,1)
[p,z]=proj(A,b);
% Analyze the outputs in part (d) and write a comment explaining why a random vector b
belongs to Col A.
% For part (f), based on the input matrix A and the outputs, state to which vector space the
vector b has to belong.
**You will need to run some command in your Live Script to support your responses. Supply
them with the corresponding output messages and/or comments.
Exercise 4 (4 points) Difficulty: Moderate
In this exercise, we will work with the matrix equation Ax = b, where the columns of A are
linearly independent. We find the unique "exact" solution for a consistent system (b ∈ Col A) and
the unique least-squares solution for an inconsistent system (b ∉ Col A).
Please read the section Theory carefully. You may also find it helpful to review the material
of Lectures 28-31 prior to working on this exercise.
Theory: When solving in MATLAB a general linear system Ax = b with a matrix A whose
columns are linearly independent, we can always find the solution of a consistent system or
the least-squares solution of an inconsistent system by using the MATLAB backslash
operator, \ .
When matrix A has orthonormal columns, we can find the “exact” solution of a consistent
system using the Decomposition Theorem (Lecture 28) or the least-squares solution of an
inconsistent system using the Statement in Lecture 31 – both, the Theorem and the Statement,
are particular cases of the more general Orthogonal Decomposition Theorem (Lecture 29) and
both employ the same formulas for computing the entries of the solutions.
Recall: An m × n matrix A has an orthonormal set of columns if and only if A'*A=eye(n).
If a matrix A does not have orthonormal columns, we can, first, generate an orthonormal basis
U for Col A, then, find the projection b1 of the vector b onto the Col A, and, finally, calculate
the “exact” or the least-squares solution of a system Ax=b as the solution of a consistent
system Ax=b1. This method may be effective for calculating the least-squares solution, and,
for the purposes of this exercise, it can also be extended to the calculation of the "exact" solution,
which unifies the approach to solving a linear system in general.
Note: when b is in the Col A, that is, the system Ax=b is consistent, the projection of b onto
the Col A is the vector b itself, thus, the system Ax=b1 becomes Ax=b.
Recall: The projection b1 of b onto the Col A can be computed by the formula b1=U*U'*b,
where U is a matrix whose columns form an orthonormal basis for Col A.
Moreover, if x1 is the solution or the least-squares solution of the system Ax=b, that is,
A*x1=b1, we can compute the least-squares error, n1, of approximation of the vector b by
the elements of the Col A, where n1=norm(b-A*x1), or equivalently, n1=norm(b-b1).
Geometrically, n1 is the distance from b to the Col A (see Exercise 3 of this Project).
**Create a function in MATLAB that begins with
function [x1,x2]=solveall(A,b)
format compact
format long
[m,n]=size(A);
The inputs are an m × n matrix A, whose columns are linearly independent, and a vector b ∈ R^m.
The two outputs x1 and x2 are either two solutions of a consistent system or two
least-squares solutions of an inconsistent system, calculated in two different ways.
**Continue your function with a conditional statement that has to determine if b∈Col A or
not, that is, whether the system is consistent or not – use the MATLAB function rank() in
this part. If the system is consistent, output a message:
disp('the system is consistent - find the "exact" solution')
otherwise, output a message:
disp('the system is inconsistent - find the least-squares solution')
**Next, calculate the solution or the least-squares solution of the system Ax=b, vector x1, by
using the backslash operator, \ . Output and display it with a message as indicated below:
x1=A\b;
disp('the solution calculated by the backslash operator is')
display(x1)
Notice that the type of the solution has been stated in the previous message.
**Then, determine whether the matrix A has orthonormal columns – you will need to use here
the function closetozeroroundoff() with p = 7.
If matrix A has orthonormal columns, output a message:
disp('A has orthonormal columns')
and find the solution (or the least-squares solution) x2 using the Orthogonal Decomposition
Theorem (Lecture 29), which is a more general form of the Decomposition Theorem (Lecture
28) and the Statement in Lecture 31 (see Theory above).
Hint: You can either use a “for loop” to calculate the entries of the vector x2, one by one, or
you can employ the transpose of A to output x2 at once by a MATLAB operation (the second way
is preferable; remember, we work with a matrix whose columns form an orthonormal set).
Display the solution x2 with a message:
disp('solution calculated by the Orthogonal Decomposition Theorem is')
display(x2)
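Since the columns of A are orthonormal here (A'*A = I), the formulas of the Theorem collapse to a single matrix-vector product; a minimal sketch of the computation:
x2 = A'*b;                               % entry i of x2 is the dot product of b with column i of A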
**If matrix A does not have orthonormal columns, output a message:
disp('A does not have orthonormal columns')
and find the solution (or the least-squares solution) of Ax b = by, first, calculating the
projection of the vector b onto the Col A. Proceed as follows:
**Run the MATLAB function orth() to output and display a matrix U whose columns form
an orthonormal basis for the Col A supplied with a message:
disp('an orthonormal basis for Col A is')
U=orth(A)
**Next, using the created orthonormal basis U for Col A, calculate the projection b1 of the
vector b onto the Col A (see Theory above). Display vector b1 with a message:
disp('the projection of b onto Col A is')
display(b1)
**Then, output the solution (or the least-squares solution) x2 as the “exact” solution of the
equation Ax = b1 (use the backslash operator). Display x2 with the message:
disp('the solution calculated using the projection b1 is')
display(x2)
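Putting the steps of this branch together, a hedged sketch of the computations (display commands omitted) might be:
U = orth(A);                             % orthonormal basis for Col A
b1 = U*(U'*b);                           % projection of b onto Col A
x2 = A\b1;                               % "exact" solution of the consistent system Ax = b1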
**Verify that the solution x2 is calculated correctly: use a conditional statement and the
function closetozeroroundoff() with p = 12 that you will run on the vector x1-x2. If your
code confirms that the solutions calculated by two different methods match, code a message:
disp('solutions x1 and x2 are sufficiently close to each other')
Otherwise, your message could be
disp('Check the code!')
and the program terminates.
**If the code passes the check point above, your function will continue with a few more tasks:
Use the solution x1 to output the least-squares error n1 of approximation of b by the
elements of the Col A (see Theory above). Display n1 with a message:
disp('least-squares error of approximation of b by elements of Col A is')
display(n1)
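For reference, n1 could be computed as in the Theory above:
n1 = norm(b - A*x1);                     % equivalently, norm(b - b1)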
**Next, input a vector
x=rand(n,1);
and compute the error of approximation, n2, of the vector b by a vector A*x of the Col A (for
a random vector x). Output and display it as below:
n2=norm(b-A*x);
disp('an error of approximation of b by A*x of Col A for a random x is')
display(n2)
This is the end of your function solveall.
**Print the functions closetozeroroundoff, shrink, solveall in your Live Script.
Note: the function shrink was created in Exercise 2 of Project 3. We use it here to input some
matrices. If you do not have it in your Current Folder, please see the code in Exercise 3 of this Project.
**Run the function [x1,x2]=solveall(A,b); as indicated below:
%(a)
A=magic(4); b=A(:,4), A=orth(A)
[x1,x2]=solveall(A,b);
%(b)
A=magic(5), b=rand(5,1)
[x1,x2]=solveall(A,b);
%(c)
A=magic(4); A=shrink(A), b=ones(4,1)
[x1,x2]=solveall(A,b);
%(d)
A=magic(4); A=shrink(A), b=rand(4,1)
[x1,x2]=solveall(A,b);
%(e)
A=magic(6); A=orth(A), b=rand(6,1)
[x1,x2]=solveall(A,b);
% Analyze the outputs n1 and n2 for both consistent and inconsistent systems. Based on the
geometrical meaning of the least-squares error n1, explain the difference between the forms of
the outputs n1 and n2 for consistent systems vs. inconsistent systems.
% Compare n1 and n2 for each inconsistent system and write a comment on whether the
least-squares solution, x1 (or x2), may, indeed, minimize the distance between the vector b and the
vectors Ax of the Col A (which has to be true according to the general theory).
Part IV. Orthogonalization by Gram-Schmidt with Normalization
According to the theory, we can create an orthonormal basis for the column space of a matrix
by using the Gram-Schmidt process. However, a direct computer realization of the Gram-Schmidt
process involves round-off errors that build up when the vectors are calculated, one
by one. In this exercise, we will be programming a Modified Gram-Schmidt process, which
involves normalization, and we may need to run the algorithm several times until the required
accuracy is achieved.
Exercise 5 (5 points) Difficulty: Hard
**Create a function that begins with the following lines:
function U=orthonorm(A)
format compact
format long
The input is an m x n matrix A. The final output U will be a matrix whose columns form an
orthonormal basis for Col A satisfying the required accuracy.
We begin with generating a basis X for the Col A, which will be represented by the matrix X.
In general, the columns of A span Col A, but they may not be linearly independent, thus, it is
possible that not all columns of A will be in a basis X for Col A.
**If the set of columns of A is linearly independent, we assign to X matrix A itself and
display a basis X with a message:
disp('a basis X for Col A is formed by all columns of A')
X=A
**Otherwise, we “shrink” the set of columns of A into a basis for Col A and assign it to X. In
this case, we display both a basis X and the indexes of the pivot columns of A as below:
disp('a basis X for Col A is formed by the columns of A with indexes')
X=shrink_1(A)
where the function shrink_1 is a modification of the function shrink constructed in such a way
that, along with a basis for the Col A, it also displays the indexes of the pivot columns of A
(see Exercise 2 of Project 3). You will need to create and save the function shrink_1 in your
Current Folder in MATLAB.
**Next, assign (do not display):
U=X;
n=size(U,2);
and continue with a check if the columns of U (or X) already form an orthonormal basis for
Col A with the accuracy p=15, that is, if the function
closetozeroroundoff(U'*U-eye(n),15)
returns the zero matrix.
If it is the case, display U with a message as below:
disp('a required orthonormal basis U for Col A is X')
display(U)
and your code terminates here.
If the columns of U do not form the required orthonormal basis for Col A, we proceed with
orthogonalization by Gram-Schmidt along with normalization. It is possible that running this
process just one time will not create an orthonormal basis U with the required accuracy
(p=15). In this case, we will need to re-run the process on the created matrix U and repeat it as
many times as needed until the required accuracy is achieved. We proceed as follows.
**Initialize
count=1;
and set up an exterior "while" loop using the variable count that will keep track of how many
times we need to run the process of orthonormalization.
**Within the exterior loop, set up a “for” loop and use the formulas for Gram-Schmidt
orthogonalization (Lecture 30) to recalculate the columns of the matrix U, one by one, making
the following adjustments: after completing a consecutive column, you will need to normalize
it (use a MATLAB function norm()).
Hint: here we need to employ MATLAB operations such as matrix-vector multiplication
keeping in mind that we are working with orthonormal sets of vectors – this will significantly
simplify the general formulas for Gram-Schmidt process.
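One possible sketch of a single orthonormalization pass; the vectorized projection is an assumption that exploits the orthonormality of the columns already processed.
for j = 1:n
    v = U(:, j);
    if j > 1
        v = v - U(:, 1:j-1)*(U(:, 1:j-1)'*v);   % subtract the projections onto the earlier columns
    end
    U(:, j) = v/norm(v);                        % normalize the updated column
end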
**After you have generated the first iteration for U, check if U is constructed correctly. You
will need to verify if the following conditions, (1) and (2), hold.
Note: we check these conditions only once after the first iteration, that is, when count==1.
(1) The columns of U form a basis for Col A.
To check this condition, you can either use the MATLAB function rank() or the function
shrink() and the matrix X whose columns form a basis for Col A.
If your matrix U does not pass this check, program a message:
disp('U is not a basis for Col A? Check the code!')
and terminate the program.
(2) The columns of U form an orthonormal set with an accuracy at least p=0.
To check this condition, use the function
closetozeroroundoff(U'*U-eye(n),0)
and, if the output of this function is not the zero matrix, program a message
disp('No orthonormalization? Check the code!')
and terminate the program.
Next, we will determine what accuracy is achieved after a consecutive iteration.
Proceed as follows:
**Initialize
p=1;
and set up another interior nested loop (with the variable p) that will run while the two
conditions, composed in the way outlined below, hold:
if closetozeroroundoff(U'*U-eye(n),p)==0
check if p<15
When the second condition becomes false, the interior loop breaks with a message:
fprintf('accuracy after %i orthonormalisation(s) is at least %d\n',count,p)
Note: this condition is needed to set a reasonable bound on the accuracy that can be
considered.
If both conditions hold, assign
p=p+1;
and your code will go through the interior loop again.
After this interior nested loop breaks, it outputs p, which is the accuracy of the orthonormal
basis constructed after count consecutive iteration(s).
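Following the outline above, the interior loop might be sketched as below; the names match the assignment and this is an illustration only.
p = 1;
while ~any(any(closetozeroroundoff(U'*U - eye(n), p)))   % still orthonormal at tolerance 10^(-p)
    if p < 15
        p = p + 1;                       % test a tighter tolerance
    else
        fprintf('accuracy after %i orthonormalisation(s) is at least %d\n', count, p)
        break                            % reasonable bound on the accuracy that can be considered
    end
end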
If the required accuracy (p=15) has not been achieved, we will generate the next iteration of the
exterior loop, that is, we will re-run the orthonormalization process. Proceed as follows.
**If the output p satisfies the condition p<15
display a message
fprintf('accuracy after %i orthonormalization(s) is %d\n',count,p)
re-assign
count=count+1;
and run the exterior “while” loop again on the created matrix U.
If the condition p<15 does not hold, that is, p is at least 15, the exterior “while” loop breaks.
Note: if the orthonormalization process is coded correctly, it will take just a few iterations to
obtain an orthonormal basis U with the required accuracy.
After the exterior “while” loop breaks, your code needs to display:
disp('orthonormal basis for Col A that satisfies the required accuracy is')
display(U)
fprintf('U has been generated after %i orthonormalization(s)\n',count)
This is the end of your function orthonorm.
**Print the functions closetozeroroundoff, shrink_1, orthonorm in your Live Script.
**Run the function U=orthonorm(A); as indicated below:
%(a)
A=[1 2 -1 4 1;0 1 3 -2 2;0 0 0 2 -2];
U=orthonorm(A);
%(b)
A=magic(5);
U=orthonorm(A);
%(c)
A=magic(4);
U=orthonorm(A);
%(d)
A=randi(10,4,6);
U=orthonorm(A);
%(e)
A=hilb(7);
U=orthonorm(A);
%(f)
A=hilb(9);
U=orthonorm(A);
%(g)
X=orth(hilb(5));
U=orthonorm(X);
%(h)
A=[orth(hilb(5)), ones(5)];
U=orthonorm(A);
%(k)
A=orth(hilb(7));
U=orthonorm(A);
Part V. Applications
In this part of the project, you will write code that outputs the best least-squares fit
polynomial for the given data. It is defined by the parameter vector c, which is calculated by
using the design matrix X and the observation vector y, such that the 2-norm of the
residual vector e = y - Xc is the minimum.
Theory: Assume that we are given m data points (x_i, y_i), where i = 1:m. A choice of an
equation of the curve that gives the best least-squares fit to the data points is, in general,
arbitrary. In this exercise, we will be looking for the polynomial of degree n of the form
P_n = c_n x^n + c_(n-1) x^(n-1) + ... + c_1 x + c_0
for some consecutive values of n starting at n=1.
You will need to generate the design matrix X within your code. Its form depends on the
degree n of the polynomial. In general, row i of X is [x_i^n x_i^(n-1) ... x_i 1], where
x = (x_1, ..., x_m) is the vector of the x-coordinates of the data points. The parameter vector c
is the vector of the coefficients of the polynomial of the best fit, and it is calculated as the
least-squares solution of the system Xc = y, where y = (y_1, ..., y_m) is the observation vector
formed by the y-coordinates of the data points. To calculate the parameter vector
c = (c_n, c_(n-1), ..., c_0), we can use one of the four methods outlined below:
The MATLAB command c=lscov(X,y) calculates the least-squares solution of the equation
Xc = y in the same way as we would find it by solving the normal equations,
(X^T X)c = X^T y,
by using either the backslash operator, c=(X'*X)\(X'*y), or the inverse matrix,
c=inv(X'*X)*(X'*y). We can also calculate the vector c by using the backslash
operator, that is, c=X\y (see Exercise 3 of this Project).
Exercise 6 (3 points) Difficulty: Easy
**First, create the following function in a file in MATLAB (the code is below):
function []=polyplot(a,b,p)
x=(a:(b-a)/50:b)';
y=polyval(p,x);
plot(x,y);
end
This function plots on the interval [a, b] the polynomial with the row vector of the
coefficients, p, written in the descending order according to the degree. This function will be
used in the function that you will create next.
**Create your function that begins with:
function [c,X,N]=lstsqpoly(x,y,n)
format
m=length(x);
The inputs in your function are: a vector x of the x-coordinates of the data points, a vector y
(observation vector) of the y-coordinates of the data points, and the degree n of the
polynomial of the best least-squares fit. The outputs are: the parameter vector c, the design
matrix X, and the number N, which is the 2-norm of the residual vector e.
**Continue your function lstsqpoly with the following:
(1) Assign the first and the last entries of the vector x to the numbers a and b, respectively;
these are the endpoints of the interval, which are inputs of the function polyplot.
(2) Output the m × (n+1) design matrix X (see Theory above) and display it with the message:
disp('the design matrix is')
display(X)
Hint: you can use a “for” loop or a vectorized statement to calculate the columns of X.
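As one illustration, the design matrix could be built with a single vectorized statement; this assumes the descending column order implied by the parameter vector c and requires implicit expansion (R2016b or later).
X = x(:).^(n:-1:0);                      % column j holds x.^(n+1-j); the last column is all ones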
(3) Output the (column) vector c, which is the least-squares solution of the system Xc=y,
calculated by one of the 4 methods outlined in the Theory above, and display it with a
message:
disp('the parameter vector is')
display(c)
(4) Output the 2-norm of the residual vector e and display it with a message:
disp('the norm of the residual vector is')
N=norm(y-X*c)
(5) Include the following two lines in your code:
plot(x,y,'*'),hold on
polyplot(a,b,c')
which will plot the data points and the polynomial of the best least-squares fit, respectively.
(6) Output the polynomial of the best least-squares fit with a message as indicated below:
fprintf('the polynomial of degree %i of the best least-squares fit is\n',n)
P=poly2sym(c)
(7) Complete your function with the last command
hold off
**Print the functions polyplot and lstsqpoly in your Live Script.
**Input the vectors
x = [0;1;2;3;5;6], y = [1;2;4;3;4;5]
and run the function [c,X,N]=lstsqpoly(x,y,n); as indicated below:
n=1;
[c,X,N]=lstsqpoly(x,y,n);
n=2;
[c,X,N]=lstsqpoly(x,y,n);
n=3;
[c,X,N]=lstsqpoly(x,y,n);
n=4;
[c,X,N]=lstsqpoly(x,y,n);
n=5;
[c,X,N]=lstsqpoly(x,y,n);
BONUS! (1 point)
% Analyze the outputs for n=5. Explain why the best least-squares fit polynomial of
degree 5, in practice, interpolates the 6 data points.
Part VI. Applications to Differential Equations
In this part, you will work with an application of eigenvalues and eigenvectors to the
continuous analogues of difference equations, specifically, to the dynamical systems
involving differential equations.
Exercise 7 (5 points) Difficulty: Hard
Theory: Suppose x = (x_1, x_2, ..., x_n) is a vector of differentiable functions of a single variable t
with the vector of its derivatives x' = (x_1', x_2', ..., x_n'). Suppose also that A is an n × n matrix
and x0 is a (numerical) n × 1 vector.
The initial value problem for the system of differential equations defined by the matrix A and
the initial vector x0 can be written in the form:
x'(t) = Ax(t), x(0) = x0     (1)
From the general theory it follows that, if a matrix A is diagonalizable, that is, A has n linearly
independent eigenvectors, the general solution of the equation x'(t) = Ax(t) has the form
x(t) = c_1 v_1 e^(λ_1 t) + c_2 v_2 e^(λ_2 t) + ... + c_n v_n e^(λ_n t)     (2)
where λ_1, λ_2, ..., λ_n are real eigenvalues of A, the vectors v_1, v_2, ..., v_n are the corresponding
eigenvectors (which form a basis for R^n), and v_1 e^(λ_1 t), v_2 e^(λ_2 t), ..., v_n e^(λ_n t) are
the eigenfunctions for A, which form a basis for the solution set of x'(t) = Ax(t). Moreover, for
any initial condition x(0) = x0, there exists a unique set of weights C = (c_1, c_2, ..., c_n) that
defines the unique solution (2) of the initial value problem (1).
For the case when n=2 and a diagonalizable matrix A does not have a zero eigenvalue, we can
identify the origin either as an attractor, or a repeller, or a saddle point for the dynamical
system. From the form of the general solution for n=2,
x(t) = c_1 v_1 e^(λ_1 t) + c_2 v_2 e^(λ_2 t),
it follows that:
I. The origin is a repeller if both eigenvalues are positive; the direction of the greatest
repulsion is defined by an eigenvector corresponding to the largest eigenvalue.
II. The origin is an attractor if both eigenvalues are negative; the direction of the greatest
attraction is defined by an eigenvector corresponding to the eigenvalue with the largest
magnitude.
III. The origin is a saddle point if one eigenvalue is positive and the other eigenvalue is
negative; the direction of the greatest attraction is defined by an eigenvector corresponding to
the negative eigenvalue and the direction of the greatest repulsion is defined by an eigenvector
corresponding to the positive eigenvalue.
(For more details, please read Section 5.7 of the Textbook.)
**Create a function in MATLAB that begins with the following lines:
function []=invalprob(A,x0)
format
[~,n]=size(A);
It takes as inputs an n × n matrix A and an n × 1 initial vector x0.
You will begin writing the code for invalprob by modifying the code for the function eigen,
which was created in Exercise 2 of this project. The details of how the function eigen should be
modified are listed below; please remove the tasks and suppress the outputs of the function
eigen that are not on the list. You will also need to replace the command null( ) with
null( ,'r') to create “rational” bases for the eigenspaces.
Construct your function invalprob as follows:
**Include all commands of the function eigen needed to check if A is diagonalizable, but do
not display any outputs. If A is not diagonalizable, output a message:
disp('A is not diagonalizable')
and terminate the program.
**If A is diagonalizable, output a message:
disp('A is diagonalizable')
and your function invalprob will continue with the following tasks:
**Display with a message as indicated below the row vector L of all real eigenvalues of A, in
which the multiple eigenvalues are all equal to each other, each eigenvalue repeats as many
times as its multiplicity, and the eigenvalues are written in the ascending order:
fprintf('all eigenvalues of A are\n')
display(L)
**Display with a message a matrix of the corresponding eigenvectors P = [v_1 v_2 ... v_n],
whose columns form an eigenvector basis for R^n:
fprintf('an eigenvector basis for R^%i\n',n)
display(P)
This is the end of the part of your function invalprob that comes from the function eigen.
**Continue your function invalprob as follows. Type:
syms t
and output and display a symbolic matrix E = [v_1 e^(λ_1 t) v_2 e^(λ_2 t) ... v_n e^(λ_n t)], whose
columns form an eigenfunction basis for the solution set; supply it with the corresponding message.
Hint: to output E, you can create a symbolic diagonal matrix whose nonzero entries are the
exponentials of the entries of the vector L scaled by t, and then use the property that right
multiplication of the matrix P by a diagonal matrix scales the columns of P by the
corresponding entries on the main diagonal of the diagonal matrix.
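A minimal sketch of this hint; the conversion with sym is an assumption made to keep the exponentials symbolic, and t is the symbolic variable declared above.
E = P*diag(exp(sym(L)*t));               % column j of P is scaled by exp(L(j)*t)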
**Output and display the unique row vector of the weights C = (c_1, c_2, ..., c_n) by solving the
linear system x(0) = x0 (you will obtain this system from (2) by substituting 0 for t). Display
C with a message that its entries are the weights in the representation of the solution x through
the eigenfunction basis E.
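Since every exponential equals 1 at t = 0, equation (2) reduces to the linear system P*C' = x0; one possible sketch (display commands omitted):
C = transpose(P\x0);                     % row vector of the weights c_1, ..., c_n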
**When n = 2 and the matrix A does not have a zero eigenvalue (please use the command
rank to check the second condition), your code will continue with the following tasks:
Determine whether the origin is an attractor, a repeller, or a saddle point of the dynamical
system and display a corresponding message for each of the three cases. Also, output and
display the direction of the greatest attraction (for an attractor), the direction of the greatest
repulsion (for a repeller), and both the direction of the greatest attraction and the direction of
greatest repulsion (for a saddle point) - each output has to be supplied with a corresponding
message.
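A hedged sketch of the n = 2 classification; it assumes L is sorted in ascending order and the columns of P are ordered to match L.
if n == 2 && rank(A) == n                % 2 x 2 matrix with no zero eigenvalue
    if all(L > 0)
        disp('the origin is a repeller')
        disp('the direction of the greatest repulsion is')
        display(P(:, 2))                 % eigenvector for the largest eigenvalue
    elseif all(L < 0)
        disp('the origin is an attractor')
        disp('the direction of the greatest attraction is')
        display(P(:, 1))                 % eigenvector for the eigenvalue of largest magnitude
    else
        disp('the origin is a saddle point')
        disp('the direction of the greatest attraction is')
        display(P(:, 1))                 % eigenvector for the negative eigenvalue
        disp('the direction of the greatest repulsion is')
        display(P(:, 2))                 % eigenvector for the positive eigenvalue
    end
end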
Important Note: please read the Theory above on the definitions of an attractor, a repeller, and
a saddle point for continuous dynamical systems – they are different from the ones for discrete
dynamical systems presented in Lecture 26.
**Print the function invalprob in your Live Script.
**Run the function invalprob(A,x0) as below:
%(a)
A=ones(2), x0=[1;2]
invalprob(A,x0)
%(b)
A=[-2 -5;1 4], x0=[2;3]
invalprob(A,x0)
%(c)
A=[7 -1;3 3], x0=[-2;1]
invalprob(A,x0)
%(d)
A=[-1.5 .5; 1 -1], x0=[5;4]
invalprob(A,x0)
%(e)
A=[3 3;0 3], x0=[1;3]
invalprob(A,x0)
%(f)
A=diag([1,2,2,3,3,3]), x0=ones(6,1)
invalprob(A,x0)
YOU ARE DONE WITH THE PROJECTS!!!
