machine learning
  types of output neuron (sketch after this list)
    perceptron (step/binary fn)
    sigmoid/logistic
    softmax (a vector of values that sum to 1)
    rectified linear units (ReLU)
    maxout
    tanh fn
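
A minimal NumPy sketch of these output nonlinearities (function names are illustrative; maxout is shown over a pair of pre-activations):

    import numpy as np

    def step(z):             # perceptron: binary threshold
        return (z > 0).astype(float)

    def sigmoid(z):          # squashes each value into (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def softmax(z):          # vector of values that sum to 1
        e = np.exp(z - np.max(z))   # shift by max for numerical stability
        return e / e.sum()

    def relu(z):             # rectified linear unit: max(0, z)
        return np.maximum(0.0, z)

    def maxout(z1, z2):      # max over two linear pieces
        return np.maximum(z1, z2)

    z = np.array([2.0, -1.0, 0.5])
    print(step(z), sigmoid(z), softmax(z), relu(z), np.tanh(z), maxout(z, -z))
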
  types of cost functions (sketch after this list)
    quadratic cost fn (mean squared error)
    cross-entropy cost fn (used w sigmoid)
    log likelihood cost fn (used w softmax)
    hinge loss (used w SVM)
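
A sketch of the four cost functions, assuming y and a are target and predicted values and k is the true class index (names are illustrative):

    import numpy as np

    def mse(y, a):                      # quadratic cost
        return 0.5 * np.mean((a - y) ** 2)

    def cross_entropy(y, a):            # pairs naturally with sigmoid outputs
        return -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))

    def log_likelihood(p, k):           # pairs with softmax: -log p[true class]
        return -np.log(p[k])

    def hinge(scores, k, margin=1.0):   # SVM loss over raw class scores
        m = np.maximum(0.0, scores - scores[k] + margin)
        m[k] = 0.0                      # the true class contributes no loss
        return m.sum()

    y, a = np.array([1.0, 0.0]), np.array([0.9, 0.2])
    p = np.array([0.7, 0.2, 0.1])       # a softmax output
    print(mse(y, a), cross_entropy(y, a), log_likelihood(p, 0),
          hinge(np.array([3.0, 1.2, 2.8]), 0))
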
  learning algorithms (update rules sketched after this list)
    gradient descent
    stochastic gradient descent
    Hessian techniques
    momentum-based gradient descent
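
A toy sketch of the gradient-descent family on a linear least-squares model (data and hyperparameters are illustrative; a Hessian technique is sketched under optimization below):

    import numpy as np

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 3)), rng.normal(size=100)

    def grad(w, Xb, yb):                   # gradient of MSE for a linear model
        return Xb.T @ (Xb @ w - yb) / len(yb)

    w, v = np.zeros(3), np.zeros(3)
    lr, mu = 0.1, 0.9
    for _ in range(200):
        i = rng.integers(0, 100, size=10)  # stochastic: sample a mini-batch
        g = grad(w, X[i], y[i])
        v = mu * v - lr * g                # momentum: accumulate a velocity
        w += v                             # plain SGD would be: w -= lr * g
    print(w)
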
  prevent overfitting (sketch after this list)
    L1 regularization
    L2 regularization
    dropout
    artificially expanding training data
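
A minimal sketch of these four techniques (the penalty weight, drop probability, and flip augmentation are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    w, lam = rng.normal(size=5), 1e-2
    l1_penalty = lam * np.sum(np.abs(w))   # L1: drives weights to exactly zero
    l2_penalty = lam * np.sum(w ** 2)      # L2: shrinks all weights toward zero

    def dropout(h, p=0.5, train=True):     # inverted dropout on activations
        if not train:
            return h                       # no-op at test time
        mask = (rng.random(h.shape) > p) / (1 - p)  # rescale to keep expectation
        return h * mask

    img = rng.random((28, 28))
    flipped = img[:, ::-1]                 # expand data: horizontal flip
    print(l1_penalty, l2_penalty, dropout(np.ones(4)).sum())
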
  catastrophic forgetting in sequential learning
    single learning agent
      the preserve+grow strategy (w/o access to past data)
      the principle of rehearsal (sketch below)
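
One reading of the rehearsal idea, as a sketch: keep a small memory of old-task examples and interleave them into each new-task batch (buffer size and mixing fraction are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    old_X = rng.normal(size=(500, 8))      # examples from the earlier task
    new_X = rng.normal(size=(500, 8))      # examples from the current task
    memory = old_X[rng.choice(500, size=50, replace=False)]  # rehearsal buffer

    def rehearsal_batch(batch_size=32, old_frac=0.25):
        n_old = int(batch_size * old_frac)
        old = memory[rng.integers(0, len(memory), size=n_old)]
        new = new_X[rng.integers(0, len(new_X), size=batch_size - n_old)]
        return np.concatenate([old, new])  # old knowledge is revisited each step

    print(rehearsal_batch().shape)         # (32, 8)
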
    multiple learning agents
      the distillation method (sketch below)
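
A sketch of one common distillation formulation (Hinton-style soft targets with a temperature T; the logits here are made up):

    import numpy as np

    def softmax(z, T=1.0):                 # temperature-scaled softmax
        e = np.exp((z - z.max()) / T)
        return e / e.sum()

    teacher_logits = np.array([4.0, 1.0, 0.5])
    student_logits = np.array([3.0, 1.5, 0.2])
    T = 2.0                                # higher T softens the targets
    p_t = softmax(teacher_logits, T)       # teacher's soft targets
    p_s = softmax(student_logits, T)
    distill_loss = -np.sum(p_t * np.log(p_s))  # cross-entropy vs. soft targets
    print(distill_loss)
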
    online RL (DQN)
      experience replay (sketch below)
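
A minimal replay buffer, as used by DQN: store transitions and sample them uniformly so training batches are decorrelated from the current trajectory (capacity is illustrative):

    import random
    from collections import deque

    class ReplayBuffer:
        """Fixed-size memory of (state, action, reward, next_state, done) tuples."""
        def __init__(self, capacity=10000):
            self.buf = deque(maxlen=capacity)  # oldest transitions fall off

        def push(self, transition):
            self.buf.append(transition)

        def sample(self, batch_size):          # uniform sample breaks correlation
            return random.sample(self.buf, batch_size)

    buf = ReplayBuffer()
    for t in range(100):
        buf.push((t, 0, 1.0, t + 1, False))
    print(len(buf.buf), buf.sample(4))
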
  optimization (parameter updates)
    gradient descent w learning rate (first-order method)
      batch gradient descent
      stochastic gradient descent
      mini-batch gradient descent
      SGD + momentum
      adaptive learning rate methods (Adagrad, RMSprop, Adam; sketch below)
      curriculum learning
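
A sketch of one adaptive method, Adam, minimizing f(w) = w·w (hyperparameters are the commonly cited defaults):

    import numpy as np

    def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * g            # first moment: mean of gradients
        v = b2 * v + (1 - b2) * g ** 2       # second moment: mean of squares
        m_hat = m / (1 - b1 ** t)            # bias correction for zero init
        v_hat = v / (1 - b2 ** t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    w, m, v = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
    for t in range(1, 101):
        g = 2 * w                            # gradient of f(w) = w·w
        w, m, v = adam_step(w, g, m, v, t)
    print(w)                                 # drifts toward the minimum at 0
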
    backpropagation (compute the gradient with the chain rule; sketch below)
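
Backprop on a tiny two-layer net, written out scalar-by-scalar so the chain rule is visible (weights and data are arbitrary):

    import numpy as np

    x, y = 0.5, 1.0
    w1, w2 = 0.3, -0.8

    h = np.tanh(w1 * x)                    # forward: hidden activation
    yhat = w2 * h                          # forward: output
    loss = 0.5 * (yhat - y) ** 2           # quadratic cost

    dL_dyhat = yhat - y
    grad_w2 = dL_dyhat * h                 # chain rule, output layer
    dh_dw1 = (1 - h ** 2) * x              # tanh' = 1 - tanh^2, times dz/dw1
    grad_w1 = dL_dyhat * w2 * dh_dw1       # chain rule through the hidden unit
    print(grad_w1, grad_w2)
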
    Newton's method w Hessian matrix (second-order method; sketch below)
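
Newton's method rescales the step by the inverse Hessian; on a quadratic f(w) = w·Aw/2 - b·w the Hessian is A and a single step lands on the minimum (A and b are made up):

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # Hessian of the quadratic
    b = np.array([1.0, -1.0])

    w = np.zeros(2)
    grad = A @ w - b
    w = w - np.linalg.solve(A, grad)          # second-order update: w -= H^-1 g
    print(w, A @ w - b)                       # the gradient is now ~0
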
  representation
    score function f = Wx (sketch below)
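
The linear score function in two lines (shapes are illustrative: 3 classes, 4 input features):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))      # one row of weights per class
    x = rng.normal(size=4)           # one input example (e.g. flattened image)
    scores = W @ x                   # f = Wx: one score per class
    print(scores, scores.argmax())   # predicted class = highest score
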
    types of output neuron (same list as above)
  evaluation
    loss functions
      data loss
        types of cost functions (same list as above)
      regularization loss (full objective sketched after this list)
        L1 regularization
        L2 regularization
        dropout
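
Putting evaluation together, a sketch of the full objective: data loss (hinge, over a few examples) plus L2 regularization loss (all values illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4)) * 0.01
    X = rng.normal(size=(5, 4))            # 5 examples, 4 features
    y = np.array([0, 2, 1, 0, 2])          # true class per example
    lam = 0.1

    scores = X @ W.T                       # f = Wx for every example
    correct = scores[np.arange(5), y][:, None]
    margins = np.maximum(0.0, scores - correct + 1.0)
    margins[np.arange(5), y] = 0.0
    data_loss = margins.sum() / 5          # hinge (SVM) data loss, averaged
    reg_loss = lam * np.sum(W ** 2)        # L2 regularization loss
    print(data_loss + reg_loss)            # the quantity optimization minimizes
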