Supervised learning
Decision Tree
Expressiveness of DTs: can represent any binary function of the attributes
Dealing with overfitting
Cross-validation; stop early using a held-out validation set; pruning
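A minimal sketch of the k-fold split behind cross-validation (my own illustration, not from the notes): each fold is held out once while the rest is used for training.

```python
def k_fold(n, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation
    over n examples, using contiguous folds of size n // k."""
    fold = n // k
    for i in range(k):
        val = list(range(i * fold, (i + 1) * fold))
        train = [j for j in range(n) if j not in val]
        yield train, val

# Each of the 5 folds is held out exactly once
for train, val in k_fold(10, 5):
    print("validate on", val)
```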
Bias of ID3
Prefers: fewer branches, correct classification, shorter trees
Best attribute: highest information gain, Gain(S, A)
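Gain(S, A) = Entropy(S) minus the weighted entropy of the subsets produced by splitting on A. A small self-contained sketch (toy data of my own, not from the notes):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(examples, labels, attr):
    """Information gain Gain(S, A): entropy of S minus the weighted
    entropy of the partitions of S induced by attribute A."""
    n = len(labels)
    by_value = {}
    for x, y in zip(examples, labels):
        by_value.setdefault(x[attr], []).append(y)
    return entropy(labels) - sum(
        len(sub) / n * entropy(sub) for sub in by_value.values())

# 'wind' separates the classes perfectly, 'temp' not at all
S = [{"wind": "weak", "temp": "hot"}, {"wind": "strong", "temp": "hot"},
     {"wind": "weak", "temp": "cool"}, {"wind": "strong", "temp": "cool"}]
y = ["yes", "no", "yes", "no"]
print(gain(S, y, "wind"))  # 1.0
print(gain(S, y, "temp"))  # 0.0
```

ID3 greedily picks the attribute with the largest such gain at each node.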
Boosting
Bagging: instead of learning on one sample, draw bootstrap samples from the data, train a different model on each, and combine them by voting. Ensembles are good.
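A sketch of that recipe, bootstrap sampling plus majority vote, with a hypothetical threshold "stump" as the base learner (the stump and the toy data are my own illustration):

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Sample len(data) points with replacement (one bootstrap replicate)."""
    return [rng.choice(data) for _ in data]

def fit_stump(sample):
    """Hypothetical weak learner: threshold midway between the class means."""
    m0 = [x for x, y in sample if y == 0]
    m1 = [x for x, y in sample if y == 1]
    if not m0 or not m1:                 # degenerate replicate: constant model
        c = sample[0][1]
        return lambda x: c
    t = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    return lambda x: int(x >= t)

def bag_predict(models, x):
    """Combine the base models by majority vote."""
    return Counter(m(x) for m in models).most_common(1)[0][0]

rng = random.Random(0)
data = [(x, int(x > 5)) for x in range(11)]   # simple separable 1-D data
models = [fit_stump(bootstrap(data, rng)) for _ in range(25)]
print(bag_predict(models, 8), bag_predict(models, 2))  # 1 0
```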
SVM
Kernel trick: replace the inner product X^T Y with k(x, y), which encodes domain knowledge; implicitly projects into a higher-dimensional space
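Why this works, as a check in plain Python (my own illustration): the degree-2 polynomial kernel k(x, y) = (x^T y)^2 equals the inner product of explicit quadratic feature vectors, without ever building them.

```python
import itertools

def poly2_features(x):
    """Explicit degree-2 feature map phi(x): all products x_i * x_j."""
    return [xi * xj for xi, xj in itertools.product(x, x)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x, y = [1.0, 2.0], [3.0, 0.5]
lhs = dot(poly2_features(x), poly2_features(y))  # inner product in feature space
rhs = dot(x, y) ** 2                             # kernel: k(x, y) = (x^T y)^2
print(lhs, rhs)  # 16.0 16.0
```

An SVM only ever needs these inner products, so the kernel lets it operate in the higher-dimensional space at the cost of the original one.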
Neural-Nets
Perceptron rule: converges in finite time for linearly separable data sets
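The rule itself is just "add y * x to the weights on every mistake"; a minimal sketch on a toy separable (AND-like) data set of my own:

```python
def perceptron(data, epochs=100):
    """Perceptron learning rule: on each misclassified (x, y) with
    y in {-1, +1}, update w += y * x and b += y. Converges after
    finitely many updates when the data are linearly separable."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:              # a separating hyperplane was found
            break
    return w, b

data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]  # AND
w, b = perceptron(data)
ok = all(y * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0 for x, y in data)
print(ok)  # True
```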
Restriction and preference bias. Initialize weights to small random values; Occam's razor
Bayes
Bayesian learning
h_MAP: the MAP hypothesis, maximum a posteriori (i.e. maximum posterior probability)
h_ML: the maximum likelihood hypothesis; the MAP hypothesis you get when the prior is uniform
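The distinction in a toy coin example of my own: h_MAP maximizes P(D|h)P(h), h_ML maximizes P(D|h) alone, and a strong prior can make them disagree.

```python
# Two hypotheses about a coin's heads probability, with a strong
# prior belief that the coin is fair (all numbers are made up).
def likelihood(p, heads, tails):
    """P(D | h) for a sequence with the given heads/tails counts."""
    return p ** heads * (1 - p) ** tails

hypotheses = {"fair": 0.5, "biased": 0.9}
prior = {"fair": 0.95, "biased": 0.05}
heads, tails = 8, 2                        # observed data D

h_ml = max(hypotheses,
           key=lambda h: likelihood(hypotheses[h], heads, tails))
h_map = max(hypotheses,
            key=lambda h: prior[h] * likelihood(hypotheses[h], heads, tails))
print(h_ml, h_map)  # biased fair
```

With a uniform prior the two argmaxes coincide, which is exactly the note above.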
Voting hypotheses, weighted by their posteriors: the Bayes optimal classifier
Bayes inference
Bayes networks: representation of joint distributions as graphs, used to compute probabilities. Sampling as a way to do approximate inference; in general, exact inference is hard.
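Sampling-based approximate inference in miniature (a made-up two-node network, Rain -> WetGrass, with invented probabilities): draw ancestral samples, then estimate a conditional by counting.

```python
import random

def sample_net(rng):
    """Forward (ancestral) sampling: sample each node given its parents."""
    rain = rng.random() < 0.2                  # P(Rain) = 0.2
    p_wet = 0.9 if rain else 0.1               # P(Wet | Rain), P(Wet | not Rain)
    wet = rng.random() < p_wet
    return rain, wet

rng = random.Random(42)
samples = [sample_net(rng) for _ in range(100_000)]
wet = [s for s in samples if s[1]]
p_rain_given_wet = sum(s[0] for s in wet) / len(wet)
print(round(p_rain_given_wet, 3))  # exact value: 0.18 / 0.26 ~= 0.692
```

Counting only the samples consistent with the evidence (here, Wet = true) is rejection sampling; it gets slow when the evidence is rare, which is one reason exact inference and smarter samplers matter.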
Naive Bayes: link to classification; tractable; the gold standard baseline; inference in any direction, so missing attributes are handled naturally.
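A counting-based sketch of Naive Bayes on toy data of my own, including the missing-attribute point: an attribute absent from the query is simply left out of the product.

```python
from collections import Counter, defaultdict

def train_nb(examples, labels):
    """Naive Bayes training: count P(class) and P(attr = value | class)."""
    prior = Counter(labels)
    cond = defaultdict(Counter)            # (class, attr) -> value counts
    for x, y in zip(examples, labels):
        for attr, val in x.items():
            cond[(y, attr)][val] += 1
    return prior, cond

def predict_nb(prior, cond, x):
    """argmax_c P(c) * prod_i P(x_i | c); attributes missing from x are
    skipped, which is how NB copes with missing values."""
    n = sum(prior.values())
    best, best_p = None, -1.0
    for c, cnt in prior.items():
        p = cnt / n
        for attr, val in x.items():
            # Laplace smoothing; the +2 assumes binary-valued attributes
            p *= (cond[(c, attr)][val] + 1) / (cnt + 2)
        if p > best_p:
            best, best_p = c, p
    return best

X = [{"outlook": "sunny", "wind": "weak"},
     {"outlook": "sunny", "wind": "strong"},
     {"outlook": "rain", "wind": "weak"},
     {"outlook": "rain", "wind": "strong"}]
y = ["play", "play", "stay", "stay"]
prior, cond = train_nb(X, y)
print(predict_nb(prior, cond, {"outlook": "rain", "wind": "strong"}))  # stay
print(predict_nb(prior, cond, {"outlook": "sunny"}))  # play ('wind' missing)
```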