12b: Deep Neural Nets
What I learned
Take an image and run a convolution; then, for each part of the result, do pooling, which means taking each group of points and keeping the largest value. This is repeated many times until reaching the final layer of neurons.
Neural Nets
Backpropagation is linear in the depth of the neural net, not exponential: the path from one neuron to the next is shared, so each intermediate result is computed once and reused rather than recomputed for every path through the net.
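A minimal sketch of why the work is linear in depth, assuming a plain fully connected sigmoid net (the sizes and data are illustrative): each layer's error signal is computed once from the layer after it and reused, so no path is ever walked twice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 4)) for _ in range(3)]  # 3 layers, all 4x4
x = rng.normal(size=4)
target = np.zeros(4)

# Forward pass: store each layer's activation once.
activations = [x]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass: one delta per layer, each derived from the layer after it.
# Work grows linearly with depth because shared path segments are reused.
delta = (activations[-1] - target) * activations[-1] * (1 - activations[-1])
grads = []
for W, a in zip(reversed(weights), reversed(activations[:-1])):
    grads.append(np.outer(delta, a))      # gradient for this layer's weights
    delta = (W.T @ delta) * a * (1 - a)   # pass the reusable delta back
grads.reverse()
```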
Autocoding
A small number of neurons (~2), the "hidden layer", forms a bottleneck between two layers of multiple neurons (~10); the net is trained so that the output values z[n] match the input values x[n].
Such a result implies that a form of generalization is accomplished by the hidden layer, or rather a form of encoded generalization, since the actual parameters of the bottleneck neurons are not obvious to interpret.
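A minimal sketch of the autocoding setup, assuming a 10-2-10 sigmoid net trained by plain gradient descent to reproduce its input; the data, learning rate, and iteration count are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 10))   # 10 inputs -> 2-neuron bottleneck
W2 = rng.normal(scale=0.5, size=(10, 2))   # bottleneck -> 10 outputs

X = rng.random((20, 10))                   # toy training examples
lr = 0.5

for _ in range(2000):
    for x in X:
        h = sigmoid(W1 @ x)                # encode into the bottleneck
        z = sigmoid(W2 @ h)                # decode back to input size
        dz = (z - x) * z * (1 - z)         # squared-error gradient at output
        dh = (W2.T @ dz) * h * (1 - h)
        W2 -= lr * np.outer(dz, h)
        W1 -= lr * np.outer(dh, x)

# After training, z[n] approximates x[n]; the two hidden values are the
# learned, hard-to-interpret encoding of each 10-value input.
```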
convolution
a neuron looks for a pattern in a small portion (10×10 px) of an image (256×256 px); the process is repeated by moving this small window across the image little by little.
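A minimal sketch of the sliding window, assuming one 10×10 pattern (kernel) moved one pixel at a time over a toy 256×256 grayscale image; numpy only, no deep-learning library.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))    # toy 256x256 grayscale image
kernel = rng.random((10, 10))     # the 10x10 pattern one neuron looks for

h = image.shape[0] - kernel.shape[0] + 1   # valid window positions per axis
w = image.shape[1] - kernel.shape[1] + 1
response = np.empty((h, w))

# Slide the small window little by little over the whole image;
# each position yields one point of the convolution result.
for i in range(h):
    for j in range(w):
        response[i, j] = np.sum(image[i:i + 10, j:j + 10] * kernel)
```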
pooling
The result of the convolution is computed as a point for each portion analyzed. By a similar step-by-step process, each small set of points is reduced to a single value by taking the maximum ("max pooling").
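A minimal sketch of max pooling, assuming non-overlapping 2×2 groups of convolution points; the group size is illustrative.

```python
import numpy as np

def max_pool(points, size=2):
    """Reduce each non-overlapping size x size group to its maximum value."""
    h, w = points.shape[0] // size, points.shape[1] // size
    pooled = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            block = points[i*size:(i+1)*size, j*size:(j+1)*size]
            pooled[i, j] = block.max()
    return pooled

# e.g. a 246x246 convolution result shrinks to 123x123
pooled = max_pool(np.random.default_rng(0).random((246, 246)))
```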
Final layer of neurons
As the neural net is trained with parameters and thresholds, the shape (and corresponding equation) of the sigmoid function adapts to separate positive from negative results, maximizing the probability of classifying the examples properly.
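A minimal sketch of adapting a sigmoid with a trainable threshold T (the scores and labels here are made up): gradient ascent on the log-likelihood shifts T so that positive examples land on the high side of the sigmoid and negative ones on the low side.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy scores with labels: 1 = positive example, 0 = negative example.
scores = np.array([2.5, 3.0, 1.8, -1.0, -2.2, 0.2])
labels = np.array([1, 1, 1, 0, 0, 0])

T, lr = 0.0, 0.1
for _ in range(500):
    p = sigmoid(scores - T)        # probability each example is positive
    T += lr * np.mean(p - labels)  # gradient ascent on the log-likelihood

print(T, sigmoid(scores - T).round(2))
```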
Softmax
Instead of outputting only the maximum value and its corresponding category, the final output is an array of the most probable categories (~5 categories).
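A minimal sketch of softmax over final-layer scores, then reporting the ~5 most probable categories instead of a single winner; the category names and scores are made up.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())   # subtract max for numeric stability
    return e / e.sum()

categories = ["dog", "cat", "car", "tree", "boat", "house", "bird"]
scores = np.array([4.1, 3.7, 0.2, 1.5, -0.3, 0.8, 2.9])

probs = softmax(scores)
top5 = np.argsort(probs)[::-1][:5]      # indices of the 5 most probable
for i in top5:
    print(f"{categories[i]}: {probs[i]:.2f}")
```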
Dropout
A problem with neural nets is that they can get stuck in local maxima. To prevent this, at each computation one neuron is deactivated to check whether its behavior is skewing the net; at each new computation another neuron is shut down, or dropped out, so that all neurons are checked.
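A minimal sketch of dropout on one hidden layer. Note that the standard formulation, used here, shuts down each neuron independently at random on every computation rather than strictly one at a time as described above; the shapes, drop probability, and rescaling are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_layer(x, W, drop_prob=0.5, training=True):
    """Hidden layer where neurons may be shut down (dropped out) at random."""
    h = 1.0 / (1.0 + np.exp(-(W @ x)))            # sigmoid activations
    if training:
        mask = rng.random(h.shape) >= drop_prob   # which neurons stay active
        h = h * mask / (1.0 - drop_prob)          # rescale the survivors
    return h

x = rng.random(10)
W = rng.normal(size=(8, 10))
print(hidden_layer(x, W))
```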
Thanks to wider neural nets, the network can also avoid being jammed in a local maximum, since the additional parameters give it more directions through which to work around the local maximum.
By repeating the convolution-and-pooling process many times (~100x) and feeding the result to a neural net, the net computes how likely it is that the initial image belongs to a known category.
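A minimal sketch tying the steps together (all shapes, weights, and the repeat count are illustrative, so the resulting probabilities are meaningless): repeated convolution + max pooling, then a stand-in final layer fed through softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

def convolve(img, k):
    h, w = img.shape[0] - k.shape[0] + 1, img.shape[1] - k.shape[1] + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def max_pool(pts, s=2):
    h, w = pts.shape[0] // s, pts.shape[1] // s
    return pts[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

image = rng.random((64, 64))             # toy input image
for _ in range(3):                       # repeat convolution + pooling
    image = max_pool(convolve(image, rng.random((3, 3))))

features = image.flatten()               # what remains after the stages
W = rng.normal(size=(5, features.size))  # stand-in final-layer weights
print(softmax(W @ features))             # probability per known category
```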