ECON 503 (MSPE)
Spring 2023
Department of Economics
FINAL EXAMINATION
1. [15 + 15] (Multiple Regression: Theory)
(i) Consider the regression model Y = Xβ + ε, ε ∼ (0, σ²In), where X is an n × k non-stochastic matrix.
Show that σ̂² = ε̂′ε̂/(n − k) is an unbiased estimator of σ², where ε̂ = Y − Xβ̂ and β̂ = (X′X)⁻¹X′Y. Why do we need to divide by (n − k) instead of n to get an unbiased estimator?
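This unbiasedness claim is easy to check numerically. The sketch below (with made-up parameter values, not part of the exam) holds the design matrix fixed, simulates the model repeatedly, and compares the average of ε̂′ε̂/(n − k) with the average of ε̂′ε̂/n:

```python
import numpy as np

# Monte Carlo check: with divisor n - k the average of sigma_hat^2 across
# replications should be close to the true sigma^2; dividing by n instead
# biases the estimator downward by the factor (n - k)/n.
rng = np.random.default_rng(0)
n, k, sigma2 = 50, 3, 4.0
X = rng.normal(size=(n, k))            # non-stochastic: kept fixed below
beta = np.array([1.0, -2.0, 0.5])

est_nk, est_n = [], []
for _ in range(5000):
    eps = rng.normal(scale=np.sqrt(sigma2), size=n)
    y = X @ beta + eps
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS coefficients
    e = y - X @ b                              # residuals
    est_nk.append(e @ e / (n - k))             # unbiased divisor
    est_n.append(e @ e / n)                    # biased divisor

print(np.mean(est_nk))   # close to sigma2 = 4.0
print(np.mean(est_n))    # systematically smaller
```

The intuition for the correction is that the k estimated coefficients absorb k degrees of freedom, so the residuals are "too small" on average unless the divisor is reduced to n − k.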
(ii) For the above regression model we can show that

plim(n→∞) R² = β′Aβ/(β′Aβ + σ²),  (1)

where A = lim(n→∞) X′EX/n with E = In − 11′/n and 1 = (1 1 ... 1)′. Using (1), show that for the simple regression model y = βx + ε,

plim(n→∞) R² = β²σx²/(β²σx² + σ²),

where σx² is the population variance of X.
Now investigate the impact of σ², σx² and β on plim(n→∞) R² and interpret your results.
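The simple-regression limit can be illustrated by simulation. In the sketch below (hypothetical parameter values), the sample R² from a large sample is compared with β²σx²/(β²σx² + σ²):

```python
import numpy as np

# With beta = 1.5, sigma_x^2 = 2 and sigma^2 = 3, the probability limit of
# R^2 is 1.5^2 * 2 / (1.5^2 * 2 + 3) = 4.5 / 7.5 = 0.6.
rng = np.random.default_rng(1)
n, beta, sx2, s2 = 200_000, 1.5, 2.0, 3.0
x = rng.normal(scale=np.sqrt(sx2), size=n)
y = beta * x + rng.normal(scale=np.sqrt(s2), size=n)

b = (x @ y) / (x @ x)                  # OLS slope (no intercept, as in the model)
e = y - b * x
r2 = 1 - (e @ e) / np.sum((y - y.mean()) ** 2)
plim_r2 = beta ** 2 * sx2 / (beta ** 2 * sx2 + s2)
print(r2, plim_r2)                     # both near 0.6
```

Note how the formula makes the interpretation mechanical: R² rises with signal (β²σx²) and falls with noise (σ²), so a low R² need not indicate a misspecified model, merely a noisy one.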
2. [10 + 15 + 5] (All the Vices: Non-normality, Heteroskedasticity and Autocorrelation)
(i) Consider the Jarque-Bera (JB) test for normality, i.e., for Ho : εi ∼ IIDN(0, σ²), i = 1, 2, ..., n,

JB = n[(√b1)²/6 + (b2 − 3)²/24].

What is the theoretical basis for JB? [Hint: Start with d log f(εi)/dεi under Ho.]
What do √b1 and b2 stand for? What can you say about the distribution of JB?
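As a computational aside (not part of the exam), the JB statistic is straightforward to compute from residuals: √b1 is the sample skewness, b2 the sample kurtosis, and under Ho the statistic is asymptotically chi-square with 2 degrees of freedom, so normal data should give small values:

```python
import numpy as np

# Jarque-Bera statistic from a sample of residuals.
def jarque_bera(e):
    n = e.size
    m = e - e.mean()
    s2 = np.mean(m ** 2)
    sqrt_b1 = np.mean(m ** 3) / s2 ** 1.5   # sample skewness, sqrt(b1)
    b2 = np.mean(m ** 4) / s2 ** 2          # sample kurtosis, b2
    return n * (sqrt_b1 ** 2 / 6 + (b2 - 3) ** 2 / 24)

rng = np.random.default_rng(2)
jb_norm = jarque_bera(rng.normal(size=100_000))       # should be small
jb_skew = jarque_bera(rng.exponential(size=100_000))  # should be huge
print(jb_norm, jb_skew)
```

Comparing jb_norm against the chi-square(2) critical value (5.99 at the 5% level) would fail to reject normality, while the skewed exponential sample is rejected overwhelmingly.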
(ii) Consider the simple regression model

yi = βxi + εi,

where εi ∼ (0, σ²xi²), E(εi εj) = 0 for i ≠ j, i, j = 1, 2, ..., n, and the xi's are non-stochastic.
Find the simplified expressions for β̂OLS = (X′X)⁻¹X′Y and β̂GLS = (X′Σ⁻¹X)⁻¹X′Σ⁻¹Y.
Then show that Var(β̂OLS) ≥ Var(β̂GLS).
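A numerical sketch of the inequality (with made-up values for x, β and σ²): under Var(εi) = σ²xi², the GLS estimator simplifies to (1/n)·Σ(yi/xi) with variance σ²/n, while OLS has variance σ²·Σxi⁴/(Σxi²)², which is at least σ²/n by the Cauchy-Schwarz inequality. Both analytic variances can be confirmed by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, s2 = 30, 2.0, 1.0
x = rng.uniform(0.5, 3.0, size=n)      # a fixed (non-stochastic) regressor

var_ols = s2 * np.sum(x ** 4) / np.sum(x ** 2) ** 2   # Var(beta_hat_OLS)
var_gls = s2 / n                                      # Var(beta_hat_GLS)

# Simulate both estimators under heteroskedastic errors sd = sqrt(s2)*x_i.
draws_ols, draws_gls = [], []
for _ in range(20000):
    eps = rng.normal(scale=np.sqrt(s2) * x)
    y = beta * x + eps
    draws_ols.append(x @ y / (x @ x))   # OLS: sum(x*y)/sum(x^2)
    draws_gls.append(np.mean(y / x))    # GLS: mean of y_i/x_i
print(var_ols, np.var(draws_ols))
print(var_gls, np.var(draws_gls))
```

The GLS form has a clean interpretation: dividing the model through by xi gives yi/xi = β + εi/xi with homoskedastic errors, so GLS is just the sample mean of the transformed observations.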
(iii) In the time series context, for the regression model

yt = xt′β + εt,

we introduced autocorrelation through εt = ρεt−1 + ut, |ρ| < 1 and ut ∼ IID(0, σ²). Consider testing Ho : ρ = 0 against Ha : ρ > 0. Explain how the DW statistic

DW = Σ(t=2..n)(et − et−1)² / Σ(t=1..n)et²,

where et = yt − xt′β̂OLS, provides a simple and good test for Ho. Then describe the implementation of the test. [Hint: First, show that DW ≈ 2(1 − ρ̂).]
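The approximation in the hint is easy to see numerically. In the sketch below (hypothetical ρ and sample size), residuals are generated directly from the AR(1) process and DW is compared with 2(1 − ρ̂), where ρ̂ is the first-order sample autocorrelation:

```python
import numpy as np

# DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); expanding the square shows
# DW = 2(1 - rho_hat) up to end-effect terms of order 1/T.
def durbin_watson(e):
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(4)
T, rho = 5000, 0.6
u = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = rho * e[t - 1] + u[t]          # AR(1) "residuals"

dw = durbin_watson(e)
rho_hat = (e[1:] @ e[:-1]) / (e @ e)
print(dw, 2 * (1 - rho_hat))              # nearly identical
```

With ρ = 0.6 the statistic falls well below 2, illustrating the decision rule: DW near 2 is consistent with Ho : ρ = 0, while DW well below 2 points to positive autocorrelation.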
3. [5 + 10 + 5 + 5 + 5 + 10] (Multiple Regression: Empirics from your Great Expectations)
A plot (Figure 1) of your midterm expected grade (X), supplied by you, and observed grade (Y) is given below. Here the grades are C−, C, C+, B−, B, B+, A−, A, and A+, corresponding to levels 1, 2, 3, 4, 5, 6, 7, 8, and 9, respectively. The dashed line is the fitted simple regression line described below. The solid line is the 45° line. In the plot "•" denotes the data point for each (X, Y) pair.
Figure 1: Expected vs. Actual Grade
(i) Write down clearly what observations you can draw from the above plot.
Ignoring the discrete nature of the data, we ran an OLS regression of Y on X and obtained the following results (standard errors in parentheses).
(ii) Test whether the expectations are rational (unbiased). Clearly state your null and alter- native hypotheses.
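For readers working through this question, the mechanics of the rationality test can be sketched as follows. The exam's actual estimates are not reproduced here, so the data below are entirely hypothetical; the point is the joint restriction Ho : α = 0, β = 1 in Y = α + βX + ε, tested with an F statistic:

```python
import numpy as np

# Hypothetical data-generating process in which expectations are NOT
# rational (intercept 0.5, slope 0.8), purely for illustration.
rng = np.random.default_rng(5)
n = 40
X = rng.integers(1, 10, size=n).astype(float)        # grade levels 1..9
Y = 0.5 + 0.8 * X + rng.normal(scale=1.0, size=n)

Z = np.column_stack([np.ones(n), X])
b = np.linalg.lstsq(Z, Y, rcond=None)[0]             # (alpha_hat, beta_hat)
e = Y - Z @ b
s2 = e @ e / (n - 2)
V = s2 * np.linalg.inv(Z.T @ Z)                      # Var(b_hat)

r = b - np.array([0.0, 1.0])                         # deviation from Ho
F = r @ np.linalg.solve(V, r) / 2                    # F(2, n-2) under Ho
print(b, F)
```

Rejecting Ho (F above the F(2, n−2) critical value) would indicate biased expectations; an intercept above 0 together with a slope below 1 is the classic pattern of weaker students over-predicting and stronger students under-predicting.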
(iii) Below are density plots (Figure 2) of expected (X) and actual grades (Y).
Interpret the plot.
Figure 2: Density plots of expected (X) and actual grades (Y)
Is it possible to anticipate the estimated regression line from the above plot? Explain.
(iv) Many of the explanatory variables were insignificant in the long regression. Thus we ran a parsimonious (short) regression with the following results (Table 1):
Table 1
Discuss the merits (or demerits) of the above regression and the earlier simple regression.
(v) The DW values in both cases are close to 2.0. That means there is no autocorrelation. Does that mean that you (or your scores) are uncorrelated?
(vi) Suppose you do want to take account of the dependence among yourselves. Then what kind of regression will you run? Provide details. [Hint: Use spatial regression.]
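One way to act on the hint: a spatial-lag model y = λWy + Xβ + ε, where W is a row-normalized weight matrix encoding who is "close" to whom (e.g. study partners or seat neighbors). Because Wy is endogenous, OLS is inconsistent; the sketch below (a hypothetical random W and a Kelejian-Prucha-style 2SLS using [X, WX] as instruments, not the only possible estimator) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
W = rng.random((n, n)) < 0.05                 # random adjacency, illustration only
np.fill_diagonal(W, False)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1)   # row-normalize

lam, b = 0.4, 2.0
X = rng.normal(size=(n, 1))
eps = rng.normal(size=n)
# Reduced form: y = (I - lam*W)^{-1} (X*b + eps)
y = np.linalg.solve(np.eye(n) - lam * W, X[:, 0] * b + eps)

R = np.column_stack([W @ y, X])               # regressors: spatial lag + X
Z = np.column_stack([X, W @ X])               # instruments for the spatial lag
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)         # projection onto instruments
est = np.linalg.solve(R.T @ P @ R, R.T @ P @ y)   # 2SLS estimates of (lam, b)
print(est)
```

Maximum-likelihood estimation of the same model is also common; the essential modeling choice in either case is the specification of W, which must come from outside the data (seating, friendship, or study-group information).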