In this exercise you will examine the differences between LDA and QDA.
Suppose that we take a data set, divide it into equally-sized training and test sets, and then try out two different classification procedures. First, we use logistic regression and get an error rate of 20% on the training data and 30% on the test data. Next, we use 1-nearest neighbors (i.e. K = 1) and get an average error rate (averaged over both test and training data set) of 18%. Based on these results, which method should we prefer to use for classification of new observations? Why?
This exercise should be answered using the `Weekly` data set, which is part of the `ISLR` package. If you don't have it installed already, you can install it with

```r
install.packages("ISLR")
```

To load the data set, run the following code:
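The code chunk appears to have been lost here; a minimal sketch of loading the data (assuming the `ISLR` package is installed) would be:

```r
# Load the ISLR package, which ships the Weekly data set
library(ISLR)

# Attach the data set to the session
data("Weekly")
```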
This data is similar in nature to the `Smarket` data from Chapter 4's lab; it contains 1089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
Produce some numerical and graphical summaries of the data. Do there appear to be any patterns?
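As a starting point (not a complete answer), hypothetical code for some basic summaries might look like:

```r
# Numerical summary of every variable
summary(Weekly)

# Pairwise correlations; Direction (column 9) is a factor, so drop it
cor(Weekly[, -9])

# A simple graphical summary: trading volume over time
plot(Weekly$Volume, type = "l", xlab = "Week index", ylab = "Volume")
```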
Use the whole data set to perform a logistic regression (with `logistic_reg()`) with `Direction` as the response and the five lag variables plus `Volume` as predictors. Use the `summary()` function (remember to call `summary(model_fit$fit)`) to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
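A sketch of the fit, assuming the tidymodels conventions used in the labs (the object names here are illustrative):

```r
library(tidymodels)

# Specify a logistic regression model with the glm engine
lr_spec <- logistic_reg() %>%
  set_engine("glm") %>%
  set_mode("classification")

# Fit on the full data set with the five lags plus Volume
lr_fit <- lr_spec %>%
  fit(Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 + Volume,
      data = Weekly)

# Inspect coefficient estimates and p-values
summary(lr_fit$fit)
```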
Use `conf_mat()` and `accuracy()` from the `yardstick` package to calculate the confusion matrix and the accuracy (overall fraction of correct predictions). Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
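Assuming a fitted parsnip model named `lr_fit` (a hypothetical name), the yardstick calls might look like:

```r
library(tidymodels)

# Attach class predictions to the data
preds <- augment(lr_fit, new_data = Weekly)

# Confusion matrix: predicted vs. observed Direction
preds %>%
  conf_mat(truth = Direction, estimate = .pred_class)

# Overall fraction of correct predictions
preds %>%
  accuracy(truth = Direction, estimate = .pred_class)
```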
Split the data into a training and testing data set using the following code
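The original code chunk appears to be missing here; a sketch of one plausible split (training on 1990–2008, testing on 2009–2010, matching the textbook's convention — an assumption, since the original chunk is lost) would be:

```r
library(dplyr)

# Hold out the last two years (2009-2010) as the test set
Weekly_train <- Weekly %>% filter(Year <= 2008)
Weekly_test  <- Weekly %>% filter(Year >  2008)
```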
Now fit the logistic regression model using the training data, with `Lag2` as the only predictor. Compute the confusion matrix and accuracy metric using the testing data set.
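A sketch, assuming training and testing sets named `Weekly_train` and `Weekly_test` (illustrative names):

```r
library(tidymodels)

# Logistic regression with Lag2 as the only predictor
lr_lag2_fit <- logistic_reg() %>%
  set_engine("glm") %>%
  fit(Direction ~ Lag2, data = Weekly_train)

# Evaluate on the held-out test set
test_preds <- augment(lr_lag2_fit, new_data = Weekly_test)

test_preds %>% conf_mat(truth = Direction, estimate = .pred_class)
test_preds %>% accuracy(truth = Direction, estimate = .pred_class)
```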
Repeat (e) using LDA.
Repeat (e) using QDA.
Repeat (e) using KNN with K = 1.
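Model specifications for these three methods, as a sketch (the LDA and QDA specs assume the `discrim` extension package for parsnip):

```r
library(tidymodels)
library(discrim)  # provides discrim_linear() and discrim_quad()

# Linear discriminant analysis via MASS::lda()
lda_spec <- discrim_linear() %>%
  set_engine("MASS") %>%
  set_mode("classification")

# Quadratic discriminant analysis via MASS::qda()
qda_spec <- discrim_quad() %>%
  set_engine("MASS") %>%
  set_mode("classification")

# 1-nearest-neighbor classifier via the kknn engine
knn_spec <- nearest_neighbor(neighbors = 1) %>%
  set_engine("kknn") %>%
  set_mode("classification")
```

Each spec can then be fit with the same `fit(Direction ~ Lag2, data = ...)` call and evaluated with the same yardstick metrics as in (e).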
Which of these methods appears to provide the best results on the data?
(Optional) Experiment with different combinations of predictors for each of the methods. Report the variables, method, and associated confusion matrix that appear to provide the best results on the held-out data. Note that you can also experiment with different values of K in KNN. (Running many models and evaluating them on the testing data set over and over like this is not good practice; in later weeks we will look at ways to properly explore multiple models.)