Ensemble Methods (Adaboost)
Book
Intro
Groups > Individual
Performance
Parallel simple learners
Simple Learners only slightly better than chance
Voting Multiple Classifiers
Use multiple different models
Unlikely to make same mistake
Similar bias = similar error
Reduces variance
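A minimal illustration of the voting step, assuming each classifier outputs \(\{-1,+1\}\) predictions stacked row-wise in a NumPy array (the name majority_vote is illustrative, not from the notes):

    import numpy as np

    def majority_vote(predictions):
        """Combine {-1, +1} predictions from several classifiers by voting.
        predictions: array of shape (num_models, num_examples)."""
        return np.sign(predictions.sum(axis=0))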
Uses
Bagging
(sampling with replacement)
Alternative to regularization
Adaboost
Adaptive Boosting Algorithm
Concept
Studying for a midterm analogy
Down-weight answers you get right
Up-weight answers you get wrong
Boosting Weak Learners
Framework
Lecture
Math
Component
Weak learner h(x)
weight/contribution B
Exponential Loss
\(e^{-y_n a(x_n)}\)
Classifier
a(x)
Prior + new weak learner and contribution
\(a_t(x) = \underbrace{a_{t-1}(x)}_{\text{previous classifier}} + \overbrace{B_t h_t(x)}^{\text{weak learner with weight}}\)
Equation
Minimizing exponential loss
Two cases
Right
Wrong
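Written out (assuming binary labels \(y_n \in \{-1,+1\}\) and round-\(t\) example weights \(w_n = e^{-y_n a_{t-1}(x_n)}\), which are not named explicitly above), the loss at round \(t\) splits into the two cases:

\[ L(B_t) = \sum_{n:\, h_t(x_n) = y_n} w_n\, e^{-B_t} \;+\; \sum_{n:\, h_t(x_n) \neq y_n} w_n\, e^{B_t} \]

The "right" term shrinks as \(B_t\) grows and the "wrong" term grows, so the best \(B_t\) balances the two.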
Algorithm
Best weak learner
Best contribution
Greedy
Adaboost
Weights \(\propto\) Error
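A minimal AdaBoost sketch in Python tying the pieces together (greedy rounds, contribution from the weighted error, re-weighting). It assumes scikit-learn decision stumps as the weak learners and labels in \(\{-1,+1\}\), neither of which is specified above:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_fit(X, y, T=50):
        """Greedy AdaBoost: each round picks the best weak learner for the
        current example weights, then its contribution B_t, then re-weights."""
        n = len(y)                       # labels y assumed in {-1, +1}
        w = np.full(n, 1.0 / n)          # start from uniform example weights
        learners, contributions = [], []
        for _ in range(T):
            # best weak learner = a depth-1 stump fit with the current weights
            h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = h.predict(X)
            err = w[pred != y].sum() / w.sum()   # weighted error
            if err <= 0 or err >= 0.5:           # weak learner must beat chance
                break
            # best contribution: closed-form minimizer of the exponential loss
            B = 0.5 * np.log((1 - err) / err)
            # up-weight the answers it got wrong, down-weight the right ones
            w = w * np.exp(-B * y * pred)
            w /= w.sum()
            learners.append(h)
            contributions.append(B)
        return learners, contributions

    def adaboost_predict(X, learners, contributions):
        scores = sum(B * h.predict(X) for h, B in zip(learners, contributions))
        return np.sign(scores)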
Bagging
Terms
Our entire dataset with N examples = D
We draw B bootstrap data sets
\(D_1,D_2,...,D_B\)
Each \(D_b\) has N examples
randomly drawn from D
With
replacement
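A minimal sketch of drawing the \(D_b\) sets with NumPy; the function name bootstrap_datasets is illustrative:

    import numpy as np

    def bootstrap_datasets(X, y, B=10, seed=0):
        """Draw B bootstrap data sets D_1..D_B, each with N examples
        sampled from the original D with replacement."""
        rng = np.random.default_rng(seed)
        N = len(y)
        datasets = []
        for _ in range(B):
            idx = rng.integers(0, N, size=N)   # indices drawn with replacement
            datasets.append((X[idx], y[idx]))
        return datasets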
Minimizing Algorithm
exponential loss
Sum of two components
Weak Learners wrong
Weak Learners correct
Parameters
Contribution B
Weak learner h(x)
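Setting the derivative of that sum to zero gives the standard closed form for the contribution (a sketch under the same notation; \(\varepsilon_t\) denotes the weighted error of \(h_t\)):

\[ \frac{\partial L}{\partial B_t} = 0 \;\Rightarrow\; B_t = \frac{1}{2}\ln\frac{1-\varepsilon_t}{\varepsilon_t}, \qquad \varepsilon_t = \frac{\sum_{n:\, h_t(x_n) \neq y_n} w_n}{\sum_n w_n} \]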
Computer Algorithm
Find weak learner
Complexity
Try all
Sort first
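"Try all" re-scans the \(N\) examples for every candidate threshold (\(O(N^2)\) per feature); "sort first" sorts once (\(O(N \log N)\)) and sweeps the threshold with a running weighted error. A sketch of the sorted sweep for one feature, assuming \(\{-1,+1\}\) labels and only the predict-+1-above polarity to keep it short (best_stump_1d is an illustrative name):

    import numpy as np

    def best_stump_1d(x, y, w):
        """Best threshold for a weighted 1-D stump predicting +1 above the
        threshold and -1 below (only this polarity, to keep the sketch short).
        Sort once, then sweep the threshold with a running error update."""
        order = np.argsort(x)                  # O(N log N)
        x, y, w = x[order], y[order], w[order]
        # threshold below every point: all examples predicted +1
        err = w[y == -1].sum()
        best_err, best_thr = err, -np.inf
        for i in range(len(x)):
            # moving the threshold past x[i] flips its prediction to -1
            err += w[i] if y[i] == +1 else -w[i]
            # only a valid cut once all ties at this x value have been passed
            if (i == len(x) - 1 or x[i] != x[i + 1]) and err < best_err:
                best_err, best_thr = err, x[i]
        return best_thr, best_err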