Neural Network
  Activation Functions (see the sketch below)
    Tanh
      Better than sigmoid (its output is zero-centered)
    Softmax
      Useful for the output layer
    ReLU
      No vanishing gradient for positive inputs
      Only for hidden layers
      Leaky ReLU for the dying-neuron problem
    Sigmoid
      Slow convergence (saturating gradients)
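A minimal sketch of these activations, assuming NumPy is available; the function names and the Leaky ReLU slope alpha=0.01 are illustrative choices, not fixed conventions.

```python
import numpy as np

def sigmoid(x):
    # Saturates for large |x|, which is why convergence can be slow.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered output in (-1, 1); usually works better than sigmoid in hidden layers.
    return np.tanh(x)

def relu(x):
    # No saturation for positive inputs, so the gradient does not vanish there.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small slope for negative inputs keeps neurons from "dying" at a constant zero output.
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Turns a score vector into probabilities; typically used on the output layer.
    e = np.exp(x - np.max(x))  # shift by the max for numerical stability
    return e / e.sum()
```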
  Structure
  Loss Functions (sketched below)
    Classification: per-class loss (e.g. cross-entropy over class probabilities)
    Regression: the output is a continuous value (e.g. mean squared error)
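A sketch of one common loss from each family, again assuming NumPy; cross-entropy as the per-class classification loss and mean squared error for continuous regression targets. The function names are illustrative.

```python
import numpy as np

def cross_entropy(class_probs, target_class, eps=1e-12):
    # Classification: one probability per class (e.g. softmax output);
    # penalize the log-probability assigned to the correct class.
    return -np.log(class_probs[target_class] + eps)

def mse(prediction, target):
    # Regression: the output is a continuous value, so compare it directly.
    return np.mean((prediction - target) ** 2)

print(cross_entropy(np.array([0.1, 0.7, 0.2]), target_class=1))  # ~0.357
print(mse(np.array([2.5]), np.array([3.0])))                     # 0.25
```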
  Convolutional NN (CNN)
    Convolution (see the sketch below)
    Pooling (downsampling)
      Max pooling: take the max of each square window
    ImageNet might be a good source of training data
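A small NumPy sketch of the two CNN building blocks above: a "valid" 2D convolution (implemented as cross-correlation, as most frameworks do) and 2x2 max pooling that keeps the maximum of each square window. The input shapes and the 2x2 window size are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Convolution: slide the kernel over the image and sum the elementwise products.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool_2x2(feature_map):
    # Pooling (downsampling): keep only the max of each non-overlapping 2x2 square.
    h, w = feature_map.shape
    pooled = np.zeros((h // 2, w // 2))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            pooled[i // 2, j // 2] = feature_map[i:i + 2, j:j + 2].max()
    return pooled

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 1., 3., 2.],
              [2., 2., 0., 1.]])
print(max_pool_2x2(x))
# [[4. 5.]
#  [2. 3.]]
```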