Linear Algebra for ML, Probability for ML, Calculus for ML, Chapter 1
…
- Probability for ML
  - Week 1
    - Bayes' theorem
      - e.g. (see the worked example after this branch)
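The "e.g." node is left unfilled in the source map; a standard worked example, with assumed figures (1% disease prevalence, a test with 95% sensitivity and a 10% false-positive rate), runs:

$$
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \lnot D)\,P(\lnot D)}
= \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.10 \times 0.99} \approx 0.088
$$

Even after a positive test, the probability of disease is only about 9%, because the prior $P(D)$ is so small.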
- Calculus for ML
  - Newton's Method vs. Gradient Descent (see the comparison sketch after this list)
    - Newton's method
      - converges faster (pro)
      - fewer parameters to tune: no learning rate (pro)
      - computationally expensive: the second derivative (the Hessian) is costly to compute for a large number of parameters (con)
    - Gradient descent
      - converges more slowly (con)
      - one extra parameter to tune, the learning_rate (con)
      - computationally cheap: needs only first derivatives (pro)
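A minimal sketch (not from the original map) of the trade-off above, contrasting the two update rules on a toy one-dimensional objective; the objective f(x) = x⁴ − 3x³ + 2, the starting point, the learning rate, and the step counts are all assumptions chosen for illustration:

```python
def f_prime(x):
    """First derivative of f(x) = x**4 - 3*x**3 + 2."""
    return 4 * x**3 - 9 * x**2

def f_double_prime(x):
    """Second derivative of the same objective."""
    return 12 * x**2 - 18 * x

def newton(x, steps=10):
    # Newton's method: x <- x - f'(x) / f''(x).
    # No learning rate to tune and fast convergence near the optimum,
    # but every step needs the second derivative -- in n dimensions,
    # an n-by-n Hessian to form and invert, which is the expensive
    # part for models with many parameters.
    for _ in range(steps):
        x -= f_prime(x) / f_double_prime(x)
    return x

def gradient_descent(x, lr=0.01, steps=1000):
    # Gradient descent: x <- x - lr * f'(x).
    # Cheap per step (first derivatives only), but converges more
    # slowly and adds a hyperparameter, the learning rate.
    for _ in range(steps):
        x -= lr * f_prime(x)
    return x

# The minimum of f is at x = 2.25 (where f'(x) = 0 and f''(x) > 0).
print(newton(6.0))            # reaches ~2.25 in about 10 steps
print(gradient_descent(6.0))  # also ~2.25, but after ~1000 steps at lr=0.01
```

Run from the same start, both land on the same minimum; the difference is exactly the pros and cons listed above: Newton takes far fewer steps but pays per step for the second derivative, while gradient descent is cheap per step but needs a tuned learning_rate and many more iterations.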