

Advanced Econometrics I

EMET4314/8014

Semester 1, 2025

Assignment 7

(due: Tuesday week 8, 11:00am)

Exercises

Provide transparent derivations. Justify steps that are not obvious. Use self-sufficient proofs. Make reasonable assumptions where necessary.

The linear model under endogeneity is

Y = Xβ + e

X = Zπ + v

where E(eiXi) ≠ 0 and E(eiZi) = 0. Notice dim X = N × K, dim β = K × 1, dim Z = N × L, dim π = L × K, and dim v = N × K.

The source of the endogeneity is correlation between the two error terms; write

e = vρ + w

where E(viwi) = 0. Notice dim ρ = K × 1, and dim w = N × 1.

Combining, we obtain

Y = Xβ + vρ + w                           (1)
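
To fix ideas before the exercises, here is a minimal simulation sketch of this data-generating process in Python/numpy. All dimensions, parameter values, and variable names below are illustrative assumptions, not part of the assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not given in the assignment)
N, K, L = 5_000, 2, 3

beta = np.array([1.0, -0.5])        # K x 1 structural coefficients (assumed values)
pi = rng.normal(size=(L, K))        # L x K first-stage coefficients
rho = np.array([0.8, 0.3])          # K x 1 link between v and e

Z = rng.normal(size=(N, L))         # instruments, independent of the errors
v = rng.normal(size=(N, K))         # first-stage errors
w = rng.normal(size=N)              # independent of v, so E(v_i w_i) = 0

X = Z @ pi + v                      # endogenous regressors: X = Z pi + v
e = v @ rho + w                     # e is correlated with X through v
Y = X @ beta + e                    # equivalently Y = X beta + v rho + w, eq. (1)
```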

(i) You have available a random sample (Xi, Yi, vi). You are running a regression of Y on X and v. Using linear algebra, define the OLS estimator of β in equation (1). Call it ˆβ.

(Hint: Use the partitioned regression result below.)
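
Continuing the simulated data above, a hedged numerical sketch of part (i) (the name beta_hat and the use of numpy's least-squares routine are my own choices): ˆβ is the X-block of the OLS coefficients from regressing Y on [X v], which the partitioned-regression result expresses as (X′MvX)−1X′MvY with Mv the residual maker for v.

```python
# OLS of Y on the augmented regressor matrix [X v]; beta_hat is the first K entries
coef = np.linalg.lstsq(np.column_stack([X, v]), Y, rcond=None)[0]
beta_hat = coef[:K]

# Same estimator via the partitioned-regression (FWL) form:
# partial v out of X and Y, then regress the residuals on each other
X_res = X - v @ np.linalg.solve(v.T @ v, v.T @ X)   # M_v X
Y_res = Y - v @ np.linalg.solve(v.T @ v, v.T @ Y)   # M_v Y
beta_hat_fwl = np.linalg.solve(X_res.T @ X_res, X_res.T @ Y_res)

print(np.allclose(beta_hat, beta_hat_fwl))          # the two routes agree numerically
```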

(ii) Prove that ˆβ = β + op(1).
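
Part (ii) asks for a proof, not a simulation; still, purely as an informal illustration of the op(1) claim, one can check that the sampling spread of ˆβ around β shrinks as the sample size grows. This sketch reuses beta, pi, rho, K, L, and rng from the snippet above (all assumed values).

```python
def beta_hat_once(n, rng):
    """One simulated sample; OLS of Y on [X v], return the K coefficients on X."""
    Z = rng.normal(size=(n, L))
    v = rng.normal(size=(n, K))
    w = rng.normal(size=n)
    X = Z @ pi + v
    Y = X @ beta + v @ rho + w
    return np.linalg.lstsq(np.column_stack([X, v]), Y, rcond=None)[0][:K]

for n in (200, 2_000, 20_000):
    draws = np.stack([beta_hat_once(n, rng) for _ in range(200)])
    # mean bias and spread should both shrink toward 0 as n grows
    print(n, np.abs(draws.mean(axis=0) - beta).max(), draws.std(axis=0).max())
```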

(iii) You do NOT have available a random sample (Xi, Yi, vi). Instead, you have available a random sample (Xi, Yi, Zi). You cannot run a regression of Y on X and v, but you can instead run a regression of Y on X and ˆv, where ˆv is the first-stage residual.

Using ˆv in place of v in equation (1), define the OLS estimator of β using linear algebra. Call it ˜β.

Prove or disprove: ˜β = (X′PZX)−1X′PZY.
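
For part (iii), a numerical comparison (not a proof) is easy to set up with the same simulated data: compute the first-stage residuals ˆv, regress Y on [X ˆv], and compare the X-block of the coefficients with (X′PZX)−1X′PZY. The variable names are again my own.

```python
# First stage: regress X on Z column by column, keep fitted values and residuals
pi_hat = np.linalg.lstsq(Z, X, rcond=None)[0]       # L x K
X_fit = Z @ pi_hat                                  # P_Z X (fitted values)
v_hat = X - X_fit                                   # first-stage residuals

# Control-function regression: Y on [X v_hat]; beta_tilde is the X-block
beta_tilde = np.linalg.lstsq(np.column_stack([X, v_hat]), Y, rcond=None)[0][:K]

# The candidate identity: (X' P_Z X)^{-1} X' P_Z Y, i.e. two-stage least squares
beta_2sls = np.linalg.solve(X_fit.T @ X_fit, X_fit.T @ Y)

print(np.allclose(beta_tilde, beta_2sls))           # equal up to floating-point error here
```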

(iv) Which estimator do you prefer: ˆβ or ˜β? No need to prove anything here; just give a quick intuitive statement.

Partitioned Regression and Frisch-Waugh-Lovell Theorem

Partition the linear regression model like so:

Y = Xβ + e

   = X1β1 + X2β2 + e

where X1 is of dimension N × K1 and X2 is of dimension N × K2 with K1 + K2 = K and X = [X1 X2]. Then how could you estimate β1? Write down the normal equations

X1′X1 ˆβ1 + X1′X2 ˆβ2 = X1′Y

X2′X1 ˆβ1 + X2′X2 ˆβ2 = X2′Y

Solving first for ˆβ1:

ˆβ1 = (X1′X1)−1X1′(Y − X2 ˆβ2)

Similarly

ˆβ2 = (X2′X2)−1X2′(Y − X1 ˆβ1)

This has an interesting interpretation:

The OLS estimator ˆβ2 results from regressing Y on X2 adjusted for X1. This adjustment is crucial; obviously it wouldn't be quite right to claim that ˆβ2 results from regressing Y on X2 only. That would only be true if X1′X2 = 0, which means that the sample covariance between the two sets of regressors is zero. Now, doing the math by plugging ˆβ2 into the expression for ˆβ1 and letting P2 = X2(X2′X2)−1X2′ and M2 = I − P2:

ˆβ1 = (X1′X1)−1X1′(Y − X2(X2′X2)−1X2′(Y − X1 ˆβ1)) = (X1′X1)−1X1′(M2Y + P2X1 ˆβ1)

Multiplying both sides by X1′X1 and moving terms

X1′X1 ˆβ1 − X1′P2X1 ˆβ1 = X1′M2Y

X1′M2X1 ˆβ1 = X1′M2Y

The end result (and also symmetrically for ˆβ2):

ˆβ1 = (X1′M2X1)−1X1′M2Y

ˆβ2 = (X2′M1X2)−1X2′M1Y

Remember that M1 and M2 are residual maker matrices:

M1 = I − X1(X1′X1)−1X1′ and M2 = I − X2(X2′X2)−1X2′,

so that, for example, M2Y collects the residuals from an OLS regression of Y on X2.

At the same time M1 and M2 are symmetric and idempotent

(that is, Mj′ = Mj and MjMj = Mj for j = 1, 2).

There's a lot of intuition included here. This harks back all the way to Gram-Schmidt orthogonalization. To obtain ˆβ1, you regress a version of Y on a version of X1. These versions are M2Y and M2X1. These are the versions of Y and X1 in which the influence of X2 has been removed, or partialled out or netted out. If X1 and X2 have zero sample covariance then M2X1 = X1, so X1′M2Y = X1′Y, and we only need to regress Y on X1 to obtain ˆβ1.
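
The partitioned-regression result is easy to verify numerically. Below is a self-contained hedged sketch (arbitrary simulated data, dimensions and names assumed) comparing the X1-block of the full OLS fit with (X1′M2X1)−1X1′M2Y.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K1, K2 = 500, 2, 3                       # illustrative dimensions
X1 = rng.normal(size=(N, K1))
X2 = rng.normal(size=(N, K2))
Y = X1 @ rng.normal(size=K1) + X2 @ rng.normal(size=K2) + rng.normal(size=N)

# Full regression of Y on [X1 X2]; take the first K1 coefficients
b1_full = np.linalg.lstsq(np.column_stack([X1, X2]), Y, rcond=None)[0][:K1]

# FWL route: build the residual maker M2 and use (X1' M2 X1)^{-1} X1' M2 Y
M2 = np.eye(N) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
b1_fwl = np.linalg.solve(X1.T @ M2 @ X1, X1.T @ M2 @ Y)

print(np.allclose(b1_full, b1_fwl))         # True: both routes give the same estimate of beta1
```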



