Back Propagation Neural Networks by David Rumelhart, 1986
Proposed by
David Rumelhart, an American psychologist who made many contributions to the formal analysis of human cognition, working primarily within the frameworks of mathematical psychology, symbolic artificial intelligence, and parallel distributed processing.
Backward propagation of errors, or backpropagation, is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The algorithm repeats a two-phase cycle: propagation and weight update.
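To make the two-phase cycle concrete, here is a minimal sketch in Python/NumPy of training a single-layer network (the simplest case; a deeper network would propagate the gradient backward through each layer in turn). Every name in it (sigmoid, W, b, lr) is an illustrative assumption for this example, not part of Rumelhart's original formulation or any particular library.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))             # 4 training examples, 3 features
y = np.array([[0.], [1.], [1.], [0.]])  # expected outputs

W = rng.normal(size=(3, 1))             # weights (assumed initialization)
b = np.zeros((1, 1))                    # bias
lr = 0.5                                # learning rate for gradient descent

for epoch in range(1000):
    # Phase 1: propagation -- push the inputs forward through the network
    out = sigmoid(X @ W + b)

    # Error signal: squared-error gradient, propagated backward
    # through the sigmoid (out * (1 - out) is the sigmoid's derivative)
    error = out - y
    grad = error * out * (1 - out)

    # Phase 2: weight update -- one gradient-descent step
    W -= lr * (X.T @ grad)
    b -= lr * grad.sum(axis=0, keepdims=True)

Repeating the loop drives the outputs toward the expected values; the two comments mark exactly the two phases the cycle alternates between.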
Loss function
Sometimes referred to as the cost function or error function (not to be confused with the Gauss error function), the loss function maps values of one or more variables onto a real number intuitively representing some "cost" associated with the event. For backpropagation, the loss function measures the difference between the network's actual output and the expected output, after a training example has been propagated through the network.
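As a hedged illustration, one common choice of loss function is mean squared error; the function name mse_loss below is assumed for this sketch, not a standard API.

import numpy as np

def mse_loss(predicted, expected):
    # Map the prediction error onto a single real-valued "cost":
    # the mean of the squared differences between actual and expected output.
    predicted = np.asarray(predicted, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return np.mean((predicted - expected) ** 2)

# A perfect prediction costs 0; larger errors cost more.
print(mse_loss([0.9, 0.1], [1.0, 0.0]))  # 0.01

Because the cost is a single real number, gradient descent can compare candidate weight settings and move toward those with lower cost, which is exactly the role the loss function plays in the weight-update phase above.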