IS5126 Hands-On with Applied Analytics (Jan-May 2024)
Homework 2
Handed out: 11 Mar 2024 Due: 31 Mar 2024 (11:59 PM)
You are to work on this homework individually. Please do NOT post your code for this and other course assignments publicly online (e.g., on GitHub or Google Drive) to avoid unwittingly (or wilfully) facilitating plagiarism (see Week 1's handout "wk1 admin.pdf" for the consequences of cheating).
For your submission, clearly indicate your name and ID, compress your code and answers into a single zipped file, and submit it on Canvas. Please ensure that your code actually runs; the TA cannot give you any points otherwise. If a piece of your code is not working, clearly state so in your submission.
Download MachineLearningCourse.zip from Piazza and unzip it into a folder (preferably a different folder from the one you used for your guided project so as to avoid overwriting your previous files). For ease of exposition, I shall assume that the zipped file is uncompressed into the folder mydir and MachineLearningCourse/ is the only item in it. I suggest you spend some time exploring the contents of mydir/MachineLearningCourse/ to familiarize yourself with the file structure of the project (it is largely the same as that of the guided project). You can peruse the Blink data (originally from a Kaggle competition) in the folder mydir/MachineLearningCourse/MLProjectSupport/Blink/dataset/. (Here, I assume you are using a Mac or Linux machine that uses the forward slash / as the file separator. If you have a Windows machine, please use the backslash \ as the file separator instead.)
You need to install the Python packages Pillow (for reading and creating images), matplotlib (for plotting graphs), and joblib. (You should have already installed these for the guided project.) Finally, please note that you are ultimately responsible for setting up your own Python environment.
Question 1. (6 points)
For this question, you have to apply the AdaBoost algorithm to the Blink dataset.
In Assignments/Module03/BlinkFeaturize.py, look through the BlinkFeaturize class, especially its CreateFeatureSet method. When that method's includeAssignmentFeatures parameter is set to True, the FeaturizeX method applies a Sobel gradient filter (described in Geoff's computer vision lecture slides) to a 24 × 24 image in the Blink dataset to create a 3 × 3 grid. It then calculates the maximum and average gradient values (2 features) in both the horizontal and vertical directions, giving 2 × 2 = 4 features per grid region. Across the 9 grid regions, the method returns a total of 9 × 4 = 36 features per image. These are the features used by the AdaBoost algorithm. Similarly, when the includeEdgeFeatures parameter is set to True, the FeaturizeX method calculates the average Sobel gradient in the horizontal and vertical directions for the entire image, returning 2 features per image. (At the bottom of BlinkFeaturize.py, please (un)comment the relevant code depending on whether you want to use the joblib library for parallelization.)
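As a concrete illustration, here is a hedged sketch of how such grid-based Sobel features could be computed. The actual FeaturizeX implementation may differ in details such as convolution boundaries and gradient normalization, and the function names below (convolve2d_valid, grid_sobel_features) are illustrative, not taken from the course code.

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d_valid(image, kernel):
    """Naive 'valid'-mode 2-D sliding-window filter (cross-correlation)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def grid_sobel_features(image, grid=3):
    """Max and average |gradient| per direction per grid cell:
    3 x 3 cells x 2 directions x 2 statistics = 36 features."""
    gx = np.abs(convolve2d_valid(image, SOBEL_X))
    gy = np.abs(convolve2d_valid(image, SOBEL_Y))
    h, w = gx.shape
    features = []
    for r in range(grid):
        for c in range(grid):
            rows = slice(r * h // grid, (r + 1) * h // grid)
            cols = slice(c * w // grid, (c + 1) * w // grid)
            for g in (gx, gy):
                cell = g[rows, cols]
                features.extend([cell.max(), cell.mean()])
    return features

# A 24x24 image yields 36 features.
print(len(grid_sobel_features(np.random.rand(24, 24))))  # 36
```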
(Why is the parameter includeAssignmentFeatures named as such? You may ask. Well, that is because we initially wanted you to write the code corresponding to that parameter for this assignment, but decided to give you the code instead to reduce your homework load.)
In MLUtilities/Learners, scrutinize the code for BoostedTrees.py, which implements the AdaBoost algorithm using decision trees as base learners (see DecisionTreeWeighted.py). Like the LogisticRegression model you implemented in the guided project, the two main methods of BoostedTrees are fit and predict. Note that you do NOT need to write the code for AdaBoost or for decision trees. The code has already been implemented for you; you just need to read it to understand how to use it correctly.
In Assignments/Module03/Framework-1-Blink.py, you have to add your code to complete this question. (We cannot hold your hand any more than this; otherwise, we might as well do this homework question for you.)
(a) (2 points) Set includeEdgeFeatures to True as below (already done in Framework-1-Blink.py).
featurizer = BlinkFeaturize.BlinkFeaturize()
featurizer.CreateFeatureSet(xTrainRaw, yTrain, includeEdgeFeatures=True)
Train BoostedTrees on the Blink dataset. Via the maxDepth parameter of the BoostedTrees.fit method, vary the maximum depth of the (weighted) decision trees used by BoostedTrees from 0 to 10. For each maximum depth, determine BoostedTrees’s accuracies on the training set and validation set. On the same graph, plot the training accuracy and validation accuracy (y-axis) versus the maximum depth (x-axis). Clearly label the two curves corresponding to the training accuracy and validation accuracy and submit the graph.
To help you plot the graph, you may consider using the PlotSeries method in MLUtilities/Visualizations/Charting.py (you need to specify where to output your plots via the parameter outputDirectory).
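To illustrate the sweep-and-record pattern in (a), here is a sketch using scikit-learn's AdaBoostClassifier as a stand-in for the course's BoostedTrees; with the course API you would instead pass each depth to BoostedTrees.fit via its maxDepth parameter. Note that DecisionTreeClassifier does not accept a depth of 0, so this stand-in starts at 1, whereas the homework asks for depths 0 to 10. The synthetic data is purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; in the homework you would use the featurized Blink
# training and validation sets instead.
X, y = make_classification(n_samples=400, n_features=36, random_state=0)
xTrain, xVal, yTrain, yVal = train_test_split(X, y, test_size=0.25, random_state=0)

trainAccs, valAccs = [], []
for maxDepth in range(1, 11):
    model = AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=maxDepth),  # weighted base learner
        n_estimators=10, random_state=0)
    model.fit(xTrain, yTrain)
    trainAccs.append(accuracy_score(yTrain, model.predict(xTrain)))
    valAccs.append(accuracy_score(yVal, model.predict(xVal)))
# The two accuracy series can then be plotted against maxDepth,
# e.g., via Charting.PlotSeries in the course code.
```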
(b) (2 points) Now set includeAssignmentFeatures to True as shown below and repeat the process in (a).
featurizer = BlinkFeaturize.BlinkFeaturize()
featurizer.CreateFeatureSet(xTrainRaw, yTrain, includeAssignmentFeatures=True)
Train BoostedTrees on the Blink dataset. For each maximum depth from 0 to 10, determine BoostedTrees's accuracies on the training set and validation set. Then plot on the same graph the training accuracy and validation accuracy (y-axis) versus the maximum depth (x-axis). Clearly label the two curves corresponding to the training accuracy and validation accuracy. Compare the graph with that in (a). What can you conclude from the comparison? Submit the graph and your conclusion from the comparison.
(c) (2 points) Pick the best boosted model in (a) and the best one in (b), and plot their ROC curves on the same graph. (You may wish to use the TabulateModelPerformanceForROC method introduced in the guided project to help you with the ROC curves. To help you plot the graph, you may consider using the PlotROCs method in MLUtilities/Visualizations/Charting.py.) Submit the graph.
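For reference, ROC points can be tabulated directly by sweeping the decision threshold over a model's scores; the sketch below illustrates the idea. The function name roc_points is illustrative and not from the course code, which provides TabulateModelPerformanceForROC for this purpose.

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs by sweeping the decision threshold over
    every observed score, from the most to the least confident."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # threshold above every score: predict all negative
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# A perfectly separating scorer traces the ideal ROC corner (0, 1):
print(roc_points([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
# [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
```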
Also submit your code for Framework-1-Blink.py (and other supporting code that you may have) that is used to answer (a)-(c) above. Clearly indicate the portions of your code that are responsible for each part.
Question 2. (14 points)
In this question, you will implement a neural network and apply it to the Blink dataset. Examine the code in Framework-2-NeuralNetwork.py and MLUtilities/Learners/NeuralNetworkFullyConnected.py. You have to add your code to these files.
The features that your neural network will work on are the pixel intensities of an image. To get these features, use the code below (already in Framework-2-NeuralNetwork.py).
featurizer = BlinkFeaturize.BlinkFeaturize()
sampleStride = 2
featurizer.CreateFeatureSet(xTrainRaw, yTrain, includeIntensities=True, intensitiesSampleStride=sampleStride)
The intensity value at each pixel is converted to a value between 0 and 1, and a sampleStride of 2 downsamples the image by picking every other pixel. Since each image in the Blink dataset has size 24 × 24, you end up with 12 × 12 = 144 intensity values as features.
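A minimal sketch of what this featurization amounts to; the real includeIntensities code in BlinkFeaturize.py may differ in details (e.g., how it reads pixels via Pillow), and the function name intensity_features is illustrative.

```python
def intensity_features(pixels, stride=2):
    """Normalize 0-255 grayscale values to [0, 1] and keep every
    `stride`-th pixel in each direction. A 24x24 image with stride 2
    yields 12 * 12 = 144 features."""
    return [pixels[r][c] / 255.0
            for r in range(0, len(pixels), stride)
            for c in range(0, len(pixels[r]), stride)]

image = [[128] * 24 for _ in range(24)]  # dummy 24x24 grayscale image
print(len(intensity_features(image)))    # 144
```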
You have to implement the training (i.e., the fitting to data) of a fully connected neural network with an input layer, N hidden layers of variable sizes, and an output layer with a single output. Your implementation has to use the backpropagation algorithm as described in the Mitchell textbook. In your implementation, please use stochastic gradient descent (as described in a Piazza handout in Week 2), in which you update the network weights based on the error from one example (rather than the cumulative errors of all examples). (Hint: In your code, have one structure for all the weights in the network, a parallel one for activations, and a parallel one for errors. This may make your life easier. You are of course free to implement it whichever way you choose.) Your code for training should roughly do the following.
For each epoch
    For each training example
        Pass the example through the network to get the activations
        Propagate the error from the output layer back through the network
        Update all the weights
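The loop above can be sketched in plain Python for a network with a single hidden layer. This is only an illustration of the forward pass, backward pass, and per-example weight updates under MSE loss with sigmoid activations; it is not the structure your NeuralNetworkFullyConnected.py must use, and the names (forward, train_epoch) are illustrative.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(weightsHidden, weightsOut, x):
    """Forward pass for one hidden layer plus a single sigmoid output."""
    xb = [1.0] + x  # prepend 1.0 so index 0 is the bias weight
    hidden = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in weightsHidden]
    hb = [1.0] + hidden
    out = sigmoid(sum(w * v for w, v in zip(weightsOut, hb)))
    return hidden, out

def train_epoch(weightsHidden, weightsOut, examples, stepSize):
    """One epoch of per-example (stochastic) gradient descent on MSE loss."""
    for x, y in examples:
        hidden, out = forward(weightsHidden, weightsOut, x)
        xb, hb = [1.0] + x, [1.0] + hidden
        # Output delta: sigmoid derivative times error (Mitchell's convention).
        deltaOut = out * (1 - out) * (y - out)
        # Hidden deltas: push the output delta back through the output weights.
        deltaHidden = [h * (1 - h) * deltaOut * weightsOut[j + 1]
                       for j, h in enumerate(hidden)]
        # Update immediately, one example at a time (stochastic GD).
        for j in range(len(weightsOut)):
            weightsOut[j] += stepSize * deltaOut * hb[j]
        for j, ws in enumerate(weightsHidden):
            for i in range(len(ws)):
                ws[i] += stepSize * deltaHidden[j] * xb[i]

# Toy check: learn OR with 2 hidden nodes.
random.seed(0)
wHidden = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
wOut = [random.uniform(-0.5, 0.5) for _ in range(3)]
data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
for _ in range(5000):
    train_epoch(wHidden, wOut, data, 0.5)
preds = [1 if forward(wHidden, wOut, x)[1] > 0.5 else 0 for x, _ in data]
print(preds)
```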
In addition, your implementation has to do the following.
• All the activation functions in the neural network are to be sigmoid functions.
• Support incremental stochastic gradient descent, i.e., the ability to run an epoch, pause, then run another epoch, and so on.
• Support the following hyperparameters:
  – stepSize: the size of the weight update to make with each step of stochastic gradient descent.
  – convergence: the minimum improvement per epoch in the training set loss before training is considered converged.
  – hiddenLayersNodeCounts: a list with one int per hidden layer indicating the number of nodes in that layer; e.g., a network with two hidden layers, with 10 nodes in the first layer and 5 nodes in the second, would have hiddenLayersNodeCounts=[10, 5].
• Use a single output with MSE loss as the loss function.
• For initialization, set all the network's parameters to small random initial values using the following.

  stdv = 1.0 / math.sqrt(inputsToThisLayer + 1)
  layer.append([random.uniform(-stdv, stdv) for inputID in range(inputsToThisLayer + 1)])
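Building on that initialization rule, a sketch of constructing the entire weight structure for a network with hiddenLayersNodeCounts hidden layers and one output node might look like this; the function name init_network is illustrative, and your NeuralNetworkFullyConnected.py may organize its layers differently.

```python
import math
import random

def init_network(inputCount, hiddenLayersNodeCounts):
    """Build one list of layers; each layer is a list of per-node weight
    vectors (bias weight included), initialized with the handout's
    uniform(-stdv, stdv) rule."""
    layers = []
    inputsToThisLayer = inputCount
    for nodeCount in hiddenLayersNodeCounts + [1]:  # trailing 1: output node
        stdv = 1.0 / math.sqrt(inputsToThisLayer + 1)
        layer = [[random.uniform(-stdv, stdv)
                  for _ in range(inputsToThisLayer + 1)]  # +1 for the bias
                 for _ in range(nodeCount)]
        layers.append(layer)
        inputsToThisLayer = nodeCount  # this layer feeds the next one
    return layers

# 144 intensity features, hidden layers of 10 and 5 nodes, 1 output node.
net = init_network(144, [10, 5])
print([len(layer) for layer in net])  # [10, 5, 1]
```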
Check that your neural network model works by training and testing on the Blink data set.
The deliverables for this question are as follows.
(a) (6 points) Your neural network implementation (Framework-2-NeuralNetwork.py and NeuralNetworkFullyConnected.py). Make it easy for the TA to find the following key components.
• Forward propagation.
• Backward propagation.
• Weight updates.
• Loss calculation.
In NeuralNetworkFullyConnected.py, we have indicated the places to add your code with the word Stub. If you have other supporting code, please submit it too.
(b) (2 points) Train a single-layer neural network with 10 nodes in its hidden layer on the Blink dataset. Use a stepSize of 0.1 and a convergence of 0.01.
• Create a graph with training and validation set losses on the y-axis versus epoch numbers on the x-axis.
• Create a graph with training and validation accuracies on the y-axis versus epoch numbers on the x-axis.
(c) (2 points) Train a double-layer neural network with 20 nodes in its first hidden layer and 4 nodes in its second hidden layer on the Blink dataset. Use a stepSize of 0.1 and a convergence of 0.0001.
• Create a graph with training and validation set losses on the y-axis versus epoch numbers on the x-axis.
• Create a graph with training and validation accuracies on the y-axis versus epoch numbers on the x-axis.
(d) (2 points) Tune the parameters of a single-layer neural network until its validation set accuracy is greater than 90%. (If you cannot hit that validation accuracy, then do the best you can. Clearly indicate the best accuracy you achieved.)
• Report the parameters that you used.
• Create a graph with training and validation set losses on the y-axis versus epoch numbers on the x-axis.
• Create a graph with training and validation accuracies on the y-axis versus epoch numbers on the x-axis.
• In 1-2 sentences, describe the approach you took to finding hyperparameters that work.
• Include a brief log of the hyperparameters you tried.
(e) (2 points) Tune the parameters of a two-layer network until its validation set accuracy is greater than 92%. (If you cannot hit that validation accuracy, then do the best you can. Clearly indicate the best accuracy you achieved.)
• Report the parameters you used.
• Create a graph with training and validation set losses on the y-axis versus epoch numbers on the x-axis.
• Create a graph with training and validation accuracies on the y-axis versus epoch numbers on the x-axis.
• In 1-2 sentences, describe the approach you took to finding hyperparameters that work.
• Include a brief log of the hyperparameters you tried.