NEURAL NETWORKS
Optimization functions (update-rule sketches below)
  Gradient Descent
  SGD (Stochastic Gradient Descent)
  Momentum
  NAG (Nesterov Accelerated Gradient)
  Adagrad
  RMSProp
  Adadelta
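As an illustration, a minimal NumPy sketch of three of these update rules (plain gradient descent, momentum, and RMSProp); all names and hyperparameter values are illustrative, not prescriptive.

```python
import numpy as np

# `w` is a parameter vector, `g` its gradient, `lr` the learning rate.

def gradient_descent_step(w, g, lr=0.01):
    return w - lr * g

def momentum_step(w, g, v, lr=0.01, beta=0.9):
    # Accumulate an exponentially decaying sum of past gradients.
    v = beta * v + lr * g
    return w - v, v

def rmsprop_step(w, g, s, lr=0.001, beta=0.9, eps=1e-8):
    # Scale the step by a running average of squared gradients.
    s = beta * s + (1 - beta) * g**2
    return w - lr * g / (np.sqrt(s) + eps), s
```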
Activation functions (implementations sketched below)
  ReLU
  Tanh
  Sigmoid
  Softmax
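A minimal NumPy sketch of the four activations; nothing here is framework-specific.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def tanh(x):
    return np.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Subtract the max for numerical stability; output sums to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()
```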
Training optimization
  Using a held-out test set to evaluate the model
  Regularization techniques (see the sketch after this list)
    Dropout
    L1
    L2
    Data augmentation (in CNNs)
    Early Stopping
  Learning rate decay
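A minimal sketch of three of these techniques (an L2 penalty, inverted dropout, and inverse-time learning-rate decay), assuming NumPy; `lam`, `p`, and `decay` are illustrative hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam=1e-4):
    # L2 regularization: add lam * sum of squared weights to the loss.
    return lam * sum(np.sum(w**2) for w in weights)

def dropout(activations, p=0.5, training=True):
    # Inverted dropout: zero each unit with probability p during training,
    # rescaling so the expected activation is unchanged at test time.
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

def decayed_lr(lr0, step, decay=1e-3):
    # One common learning-rate decay schedule (inverse time decay).
    return lr0 / (1.0 + decay * step)
```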
Problems
  Local Minima (solutions)
    Random restarts
    Momentum
  Vanishing Gradient (solutions; toy demonstration below)
    Change the activation function (e.g. swap sigmoid for ReLU)
    Stochastic Gradient Descent (SGD)
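A toy demonstration of why saturating activations cause vanishing gradients: backpropagating through 20 sigmoid layers multiplies 20 derivatives, each at most 0.25, so the gradient collapses toward zero; ReLU's derivative of 1 (for positive inputs) avoids this.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Chain rule through 20 sigmoid layers: the gradient is a product of
# derivatives, each at most 0.25, so it shrinks toward zero.
x = 0.0
grad = 1.0
for _ in range(20):
    s = sigmoid(x)
    grad *= s * (1 - s)   # sigmoid'(x) <= 0.25
print(grad)               # ~9e-13: vanishingly small

# ReLU's derivative is 1 for positive inputs, so the same product stays 1.
```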
Architectures (minimal module definitions below)
  CNN (Convolutional Neural Network)
  RNN (Recurrent Neural Network)
  MLP (Multilayer Perceptron)
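Minimal, illustrative definitions of the three architectures, assuming PyTorch is available; the layer sizes are arbitrary (e.g. 28x28 single-channel inputs for the CNN).

```python
import torch.nn as nn

# MLP: fully connected layers with a nonlinearity in between.
mlp = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# CNN: convolution + pooling before the classifier head.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
)

# RNN: processes a sequence one step at a time, carrying a hidden state.
rnn = nn.RNN(input_size=28, hidden_size=64, batch_first=True)
```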
Error/Loss functions (implementations below)
  Log loss
  Cross Entropy
    Categorical (multi-class)
    Binary (= log loss)
  Maximum Likelihood
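A minimal NumPy sketch of the two cross-entropy variants; minimizing either is equivalent to maximizing the likelihood of the observed labels.

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    # Log loss for a label y in {0, 1} and a predicted probability p.
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def categorical_cross_entropy(y_onehot, probs, eps=1e-12):
    # Multi-class cross entropy for a one-hot label and a probability vector.
    probs = np.clip(probs, eps, 1 - eps)
    return -np.sum(y_onehot * np.log(probs))
```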
Layers (forward-pass sketch below)
  Input layer
  Hidden layer
  Output layer
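A sketch of data flowing through the three layer types in a toy network (4 inputs, 8 hidden units, 3 outputs; all sizes arbitrary), assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(4)           # input layer: the raw feature vector
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

h = np.tanh(W1 @ x + b1)             # hidden layer: affine map + nonlinearity
logits = W2 @ h + b2                 # output layer: one score per class
```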
Training algorithms (worked example below)
  Forward propagation
  Backpropagation
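A worked example, assuming NumPy: one forward pass, one backpropagation pass via the chain rule, and one gradient-descent update for the same toy one-hidden-layer network (all sizes and names illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(4)
y = 2                                     # true class index
W1, b1 = 0.1 * rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = 0.1 * rng.standard_normal((3, 8)), np.zeros(3)

# Forward propagation with softmax cross-entropy loss.
h = np.tanh(W1 @ x + b1)
logits = W2 @ h + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()
loss = -np.log(probs[y])

# Backpropagation: apply the chain rule from the loss back to each weight.
dlogits = probs.copy()
dlogits[y] -= 1.0                         # d loss / d logits = probs - onehot
dW2 = np.outer(dlogits, h); db2 = dlogits
dh = W2.T @ dlogits
dpre = dh * (1.0 - h**2)                  # tanh'(z) = 1 - tanh(z)^2
dW1 = np.outer(dpre, x); db1 = dpre

# Gradient descent update.
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```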