Regularization
SVM: assume a budget C. The slack (margin violations), summed over all observations, must stay below C. This gives the classifier some budget for points to fall outside the margin.
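As a sketch of the budget idea, scikit-learn's `SVC` exposes the soft margin through its `C` parameter (note: in scikit-learn a *smaller* `C` means *more* slack is tolerated, i.e. a larger budget for margin violations). The dataset below is synthetic, chosen only for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic two-class problem (assumed setup, not from the lecture)
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

loose = SVC(kernel="linear", C=0.01).fit(X, y)   # big budget: many margin violations allowed
strict = SVC(kernel="linear", C=100.0).fit(X, y) # small budget: few violations allowed

# A looser budget typically leaves more points on or inside the margin,
# so the loose model usually has more support vectors.
print(len(loose.support_), len(strict.support_))
```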
Regression: the error function is a quadratic (a parabolic bowl), and we find the lowest point of the parabola to get the best beta. A beta fit this way may not carry over perfectly to the test data, so we add a penalty term to get a more regularized beta.
C = 1/lambda (a bigger budget C means a smaller penalty lambda). Error function: 1/2 sum{(f(x) - y)^2}. Error function with penalty (ridge): 1/2 sum{(f(x) - y)^2} + lambda sum{beta^2}
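The penalized error function above has a closed-form minimizer for ridge regression, beta = (X'X + lambda*I)^-1 X'y. A minimal numpy sketch (synthetic data, variable names are my own) shows the penalty shrinking the coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

lam = 1.0
# Ordinary least squares: minimize 1/2 sum{(f(x) - y)^2}
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
# Ridge: add lambda * sum{beta^2}; closed form (X'X + lambda*I)^-1 X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# The penalty pulls the solution toward zero, so its norm is smaller.
print(np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols))  # True for lambda > 0
```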
There's another kind of regularization: L1 regularization (often preferred over L2 when you want feature selection). Its constraint region is a diamond, and the loss contours typically first touch the diamond at a corner, which sets one of the betas exactly to 0.
Lasso (Least Absolute Shrinkage and Selection Operator). Shrinkage makes the betas smaller; lasso has the additional effect of setting some of them exactly to 0, effectively dropping those features.
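A quick sketch of that zeroing-out effect with scikit-learn's `Lasso` (the data is synthetic and only feature 0 actually drives `y`; the `alpha` value is an arbitrary choice, not from the lecture):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)  # only feature 0 matters

# alpha plays the role of lambda in scikit-learn's API
model = Lasso(alpha=0.5).fit(X, y)
print(model.coef_)  # the irrelevant features' coefficients are driven to exactly 0
```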
Elastic net is a mix of the lasso (L1) and ridge (L2) penalties.
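In scikit-learn the mix is controlled by `l1_ratio` (1.0 is pure lasso, 0.0 is pure ridge). A small sketch on synthetic data (the `alpha` and `l1_ratio` values are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# l1_ratio=0.5 splits the penalty half L1 (sparsity) and half L2 (shrinkage)
enet = ElasticNet(alpha=0.3, l1_ratio=0.5).fit(X, y)
print(enet.coef_)  # the two real signals survive; weak features are shrunk or zeroed
```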
2018-W-450-4/06-hyper-parameter-tuning/ipynb/01-Logistic-Regression-Tuning.ipynb