ML
Methodologies
Sampling
Optimisation theory
Error Surface
Taxonomy
Supervised Learning
Regression
Linear regression predicts the value of a continuous target variable Y from one or more input predictor variables X.
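As a minimal sketch of this idea (using NumPy and a small hypothetical data set, not any method prescribed by these notes), a straight line can be fitted to (X, Y) pairs by ordinary least squares:

```python
import numpy as np

# Hypothetical data: Y is roughly 2 * X.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Fit y_hat = w0 + w1 * x by ordinary least squares.
w1, w0 = np.polyfit(X, Y, deg=1)

def predict(x):
    """Predict the continuous target for a new input x."""
    return w0 + w1 * x
```

The fitted slope lands near 2, so `predict` extrapolates the underlying trend to unseen inputs.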
Evaluation
Root Mean Square Error
\( E_{\text{rms}} = \sqrt{\frac{1}{N}\sum_{i=1}^N (y_i - \hat{y}_i)^2} \)
where \( y_i \) is the true value and \( \hat{y}_i\) is the predicted value.
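The RMSE formula above translates directly into code; this is a small sketch assuming NumPy arrays of true and predicted values:

```python
import numpy as np

def rmse(y_true, y_pred):
    # E_rms = sqrt( (1/N) * sum_i (y_i - y_hat_i)^2 )
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```

A perfect prediction gives an RMSE of exactly zero; otherwise the value is in the same units as y.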
R-Squared
\( E_R = 1 - \frac{\sum e_i^2}{\sum(y_i - \bar{y})^2} \)
where \( \bar{y} = \frac{1}{N}\sum_{i=1}^N y_i \) is the mean of the observed values and \( e_i = y_i - \hat{y}_i \).
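Following the formula above, R-squared compares the residual sum of squares against the variation of y around its mean. A minimal sketch, assuming NumPy inputs:

```python
import numpy as np

def r_squared(y_true, y_pred):
    # E_R = 1 - sum(e_i^2) / sum((y_i - y_bar)^2)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # sum of e_i^2
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # sum of (y_i - y_bar)^2
    return 1.0 - ss_res / ss_tot
```

A perfect fit scores 1; a model that always predicts the mean \( \bar{y} \) scores 0.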
Sum of Squared Errors
\(E_{SSE} = \frac{e_1^2 + e_2^2 + \cdots + e_N^2}{N} = \frac{1}{N} \sum_{i=1}^N e_i^2\)
where \( e_i = y_i - \hat{y}_i \). (With the \( \frac{1}{N} \) factor this is the mean of the squared errors, so \( E_{\text{rms}} = \sqrt{E_{SSE}} \).)
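Keeping the \( \frac{1}{N} \) factor used in the definition above, a small sketch (assuming NumPy):

```python
import numpy as np

def sse(y_true, y_pred):
    # E_SSE = (1/N) * sum_i e_i^2, with e_i = y_i - y_hat_i
    # (the 1/N follows the definition in these notes, so E_rms = sqrt(E_SSE))
    e = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(e ** 2)
```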
Problem
Linear solution
Quadratic solution
Cubic solution
5th-degree solution
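The candidate solutions above can be compared by fitting polynomials of increasing degree to the same data; this sketch uses NumPy and synthetic data (a noisy sine, a hypothetical choice, not the problem from these notes):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

# Fit linear, quadratic, cubic, and degree-5 polynomials to the same data.
fits = {deg: np.polyfit(x, y, deg) for deg in (1, 2, 3, 5)}

def sse_of(deg):
    """Sum of squared training errors for the degree-`deg` fit."""
    resid = y - np.polyval(fits[deg], x)
    return np.sum(resid ** 2)
```

Because the model families are nested, the training error can only decrease as the degree grows; which degree generalises best is a separate question (see degrees of freedom below).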
Multiple Regression
Problem
Solution
\(\mathbf{w} = [w_0, w_1, \cdots, w_K]^T\)
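The weight vector \(\mathbf{w} = [w_0, w_1, \cdots, w_K]^T\) can be found by least squares over a design matrix whose first column of ones carries the bias \(w_0\). A sketch with hypothetical data generated from known weights (assuming NumPy):

```python
import numpy as np

# Hypothetical data: N = 5 samples, K = 2 predictors.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 5.0]])
# Targets generated from known weights w = [1, 2, 3]^T (no noise).
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1]

# Prepend a column of ones so w = [w0, w1, ..., wK]^T includes the bias.
Phi = np.column_stack([np.ones(len(X)), X])

# Least-squares solution of Phi @ w = y (equivalent to the normal equations).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With noise-free targets the fitted vector recovers the generating weights exactly (up to floating point).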
Degrees of freedom
A linear model \( y = w_0 + w_1x \) has two parameters and is inflexible, as it can only generate straight lines.
A cubic model \(y = w_0 + w_1x + w_2x^2 + w_3x^3 \) has 4 parameters and is more flexible than a linear one.
Under-fitting
Over-fitting
Just right
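The link between parameter count and flexibility is easy to demonstrate: with as many parameters as data points, a polynomial can memorise the training set (over-fitting), while a 2-parameter line cannot track a curved trend (under-fitting). A sketch on hypothetical noisy-sine data, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

def train_sse(deg):
    """Sum of squared training errors for a degree-`deg` polynomial fit."""
    coeffs = np.polyfit(x, y, deg)  # deg + 1 free parameters
    return np.sum((y - np.polyval(coeffs, x)) ** 2)
```

A degree-9 polynomial (10 parameters, one per data point) interpolates the training data almost exactly, while the linear model leaves a large residual; near-zero training error alone is a symptom of over-fitting, not of a good model.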