MAT2040 Linear Algebra (2019 Fall)
Project 2
Project Instructions:
• Read the following text and answer the questions given in and after the text.
• For questions that require Julia, report both the code and the results. In addition, the code
must be submitted in .jl or .ipynb form.
1 Linear Regression
The simplest relationship is that Y is linear in X, i.e. Y = wX + b.
Normally, we will collect m data pairs (x_i, y_i) and try to solve for w and b. By treating ε_i
as white Gaussian noise, so that y_i = wx_i + b + ε_i, we can formulate the problem as the following:

min_θ ‖Aθ − y‖_2^2, where the i-th row of A is (x_i, 1), θ = (w, b)^T, and y = (y_1, …, y_m)^T.  (1)
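As a hint on the mechanics (the numeric values below are made up purely for illustration and are not part of the assignment), problem (1) can be set up and solved in Julia roughly as follows:

```julia
# A minimal sketch of problem (1): build A with rows (x_i, 1) and solve the
# least-squares problem with the backslash operator, which uses a QR factorization.
xs = [1.0, 2.0, 3.0, 4.0]        # made-up x_i values
ys = [2.1, 3.9, 6.2, 8.1]        # made-up y_i values

A = hcat(xs, ones(length(xs)))   # m×2 design matrix, rows (x_i, 1)
θ = A \ ys                       # minimizes ‖Aθ − y‖_2^2
w, b = θ
println("w = $w, b = $b")
```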
Question 1. 1. Based on the lecture "Least Square", for (1), what is the solution of θ?
2. Given the following data pairs, what is the solution of θ for (1)?
2 Nonlinearity with Polynomials
For some Y which is not linear in X, we have to introduce some nonlinear terms. In this section,
we will add nonlinearity with k-th order polynomial functions. Suppose X ∈ R; when we assume
Y = w_1 X + w_2 X^2 + · · · + w_k X^k + b, Y is no longer linear in X. Similarly, the problem is formulated as

min_θ ‖Aθ − y‖_2^2, where the i-th row of A is (x_i, x_i^2, …, x_i^k, 1) and θ = (w_1, …, w_k, b)^T.  (2)
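A design matrix of this form can be assembled in Julia along the following lines; the helper name poly_design_matrix is only illustrative and not part of the assignment:

```julia
# Build the m×(k+1) design matrix whose i-th row is (x_i, x_i^2, …, x_i^k, 1),
# matching the parameter vector θ = (w_1, …, w_k, b) in problem (2).
function poly_design_matrix(xs::AbstractVector, k::Integer)
    return hcat([xs .^ j for j in 1:k]..., ones(length(xs)))
end
```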
Question 2. We provide a file named "2000points.csv", which contains 2000 pairs of (x_i, y_i), where y_i = x_i^3 − 3x_i^2 + x_i + 1 + ε_i and ε_i is white Gaussian noise.
1. Take k = 3. Solve problem (2).
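One possible Julia workflow for this question is sketched below; the assumption that "2000points.csv" holds two header-less numeric columns (x, then y) should be checked against the actual file:

```julia
using CSV, DataFrames

# Assumption: two columns, x then y, no header row; adjust if the file differs.
df = CSV.read("2000points.csv", DataFrame; header=false)
xs, ys = df[:, 1], df[:, 2]

# Design matrix as in (2): rows (x_i, x_i^2, x_i^3, 1) for k = 3.
k = 3
A = hcat([xs .^ j for j in 1:k]..., ones(length(xs)))

θ = A \ ys            # least-squares solution of problem (2)
println(θ)            # estimates of (w_1, w_2, w_3, b)
```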
Question 3. We provide a file named "20points.csv", which contains 20 pairs of (x_i, y_i), where
y_i = x_i^3 − 3x_i^2 + x_i + 1 + ε_i and ε_i is white Gaussian noise. Consider

min_θ ‖Aθ − y‖_2^2.  (3)

It has the same form as problems (1) and (2), but with a different A.
3 Regularization
In Section 2, when k is large enough, we can always make min_θ ‖Aθ − y‖_2^2 = 0. However,
we will then get a model which is far away from the original model, as we can see in Question 3.
We call this phenomenon overfitting. To overcome the overfitting problem, we introduce a
regularization term, and the problem becomes the following:

min_θ ‖Aθ − y‖_2^2 + λ‖θ‖_2^2,  for some λ > 0.  (4)

The solution of (4) is θ = (A^T A + λI)^{-1} A^T y.
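As a minimal Julia sketch of evaluating this closed form (using the backslash solver instead of an explicit matrix inverse, which yields the same θ but is numerically more stable):

```julia
using LinearAlgebra   # provides the uniform-scaling identity I

# Regularized least squares (problem (4)): θ = (AᵀA + λI)⁻¹ Aᵀy,
# computed by solving the linear system rather than forming the inverse.
ridge_solution(A, y, λ) = (A' * A + λ * I) \ (A' * y)
```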
Question 5. Show that if λ > 0, then for any matrix A, A^T A + λI is invertible.
Question 6. We provide a file named "20points.csv", which contains 20 pairs of (x_i, y_i), where
y_i = x_i^3 − 3x_i^2 + x_i + 1 + ε_i and ε_i is white Gaussian noise.
Take k = 10 and λ = 0.01. Solve problem (4).
Question 7. (bonus) In the two files "AirQualityUCI test.xlsx" and "AirQualityUCI train.xlsx"
provided on Blackboard, you will find 11 columns in each ".xlsx" file, which means that, besides
the date and time, there are 9 observed values for each time instant. Our goal is to use the
values in the last 8 columns to predict the "hourly average concentration of CO" in the 3rd column.
You should use "AirQualityUCI train.xlsx" to obtain θ, and then use θ and the last 8 columns
of "AirQualityUCI test.xlsx" to predict the corresponding "hourly average concentration of CO".
Compare the predictions to the true values by taking the difference between them; we define the
error as the sum of the absolute values of those differences. A rough Julia sketch of this
workflow is given after the numbered items below.
1. Use the linear regression of Section 1 and report the resulting error.
2. Add all second-order terms, as mentioned in Section 2, and calculate the error.
3. Add a regularization term with λ = 0.01 and calculate the error.
4. Try different settings (e.g. use different orders of polynomials, use different values for λ) and
try to obtain a smaller error.
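For item 1, a rough Julia sketch of the train/predict/error workflow is given below; the sheet name "Sheet1", the header handling, and the exact column positions are assumptions that must be verified against the actual .xlsx files:

```julia
using XLSX, DataFrames

# Assumption: the data sit on a sheet named "Sheet1" with a header row, column 3
# holds the hourly average CO concentration, and columns 4–11 hold the 8 predictors.
function load_xy(path)
    df = DataFrame(XLSX.readtable(path, "Sheet1"))
    X = Matrix{Float64}(df[:, 4:11])   # predictors (last 8 columns)
    y = Float64.(df[:, 3])             # target: hourly average CO concentration
    return X, y
end

X_train, y_train = load_xy("AirQualityUCI train.xlsx")
X_test,  y_test  = load_xy("AirQualityUCI test.xlsx")

# Plain linear regression as in Section 1: design matrix with a constant column.
A_train = hcat(X_train, ones(size(X_train, 1)))
θ = A_train \ y_train

# Error as defined above: sum of absolute differences on the test set.
A_test = hcat(X_test, ones(size(X_test, 1)))
err = sum(abs.(A_test * θ .- y_test))
println("error = $err")
```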