I need support with this Computer Science question so I can learn better.

Please attempt all the questions.

Identify all the questions that you attempted in this template.

Q1 Textbook Examples

Q2 Textbook Theory

2. We have seen that in p = 2 dimensions, a linear decision boundary takes the form β0 + β1X1 + β2X2 = 0. We now investigate a non-linear decision boundary.

(a) Sketch the curve (1 + X1)^2 + (2 − X2)^2 = 4.

(b) On your sketch, indicate the set of points for which (1 + X1)^2 + (2 − X2)^2 > 4, as well as the set of points for which (1 + X1)^2 + (2 − X2)^2 ≤ 4.

(c) Suppose that a classifier assigns an observation to the blue class if (1 + X1)^2 + (2 − X2)^2 > 4, and to the red class otherwise. To what class is the observation (0, 0) classified? (−1, 1)? (2, 2)? (3, 8)?

(d) Argue that while the decision boundary in (c) is not linear in terms of X1 and X2, it is linear in terms of X1, X1^2, X2, and X2^2.
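For part (c), the class assignments can be verified numerically by plugging each observation into the boundary function. A minimal sketch:

```python
# Evaluate f(X1, X2) = (1 + X1)^2 + (2 - X2)^2 and assign "blue" when f > 4,
# "red" otherwise, as the classifier in part (c) specifies.

def classify(x1, x2):
    f = (1 + x1) ** 2 + (2 - x2) ** 2
    return "blue" if f > 4 else "red"

for point in [(0, 0), (-1, 1), (2, 2), (3, 8)]:
    print(point, classify(*point))
# (0, 0)  -> blue  (f = 5)
# (-1, 1) -> red   (f = 1)
# (2, 2)  -> blue  (f = 9)
# (3, 8)  -> blue  (f = 52)
```

For part (d), expanding the squares gives X1^2 + 2X1 + X2^2 − 4X2 + 1 = 0, which is linear in the four features X1, X1^2, X2, X2^2.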

Q3 Textbook Applied

7. Use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.

(a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.

(b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results.

(c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma and degree and cost. Comment on your results.

(d) Make some plots to back up your assertions in (b) and (c).

Hint: see https://botlnec.github.io/islp/

https://github.com/a-martyn/ISL-python
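The workflow for parts (a) and (b) can be sketched with scikit-learn. This is a hypothetical outline only: synthetic data stands in for the Auto predictors here, since Auto.csv is not loaded; with the real data you would read the file with pandas and build the binary label from the median of the mpg column.

```python
# Sketch of Q3 (a)-(b): build a binary above/below-median mileage label, then
# fit a linear support vector classifier for several values of cost (C) and
# report the cross-validation error for each.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # stand-in for horsepower, weight, ...
mpg = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=200)
y = (mpg > np.median(mpg)).astype(int)     # 1 = above-median mileage, as in (a)

errors = {}
for cost in [0.01, 0.1, 1, 10, 100]:
    scores = cross_val_score(SVC(kernel="linear", C=cost), X, y, cv=5)
    errors[cost] = 1 - scores.mean()
    print(f"cost={cost:>6}: CV error = {errors[cost]:.3f}")
```

Part (c) follows the same loop with `SVC(kernel="rbf", gamma=...)` and `SVC(kernel="poly", degree=...)` in place of the linear kernel.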

Q4 Titanic Dataset

Apply SVM to the Titanic dataset and compare the results to a Random Forest classifier.

Hint: https://www.kaggle.com/l3r4nd/titanic-prediction-with-svm
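The comparison in Q4 can be sketched as fitting both models on the same train/test split and comparing accuracy. This is an assumed outline, not the Kaggle kernel's code: a synthetic classification problem stands in for the Titanic features, since with the real data you would first encode categorical columns (Sex, Pclass, Embarked) and impute missing Age values.

```python
# Sketch of Q4: SVM vs. Random Forest on one held-out test split.
# Synthetic data is a stand-in for the preprocessed Titanic features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVMs are scale-sensitive, so standardize inside a pipeline; Random Forests
# are scale-invariant and need no preprocessing.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

svm_acc = svm.fit(X_tr, y_tr).score(X_te, y_te)
rf_acc = rf.fit(X_tr, y_tr).score(X_te, y_te)
print(f"SVM accuracy: {svm_acc:.3f} | Random Forest accuracy: {rf_acc:.3f}")
```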

HW Support Vector Machine 2.docx
Auto.csv