Network topologies
Recurrent network
Recurrence is defined as the process of a neuron influencing itself by any means or by any connection. Recurrent networks do not always have explicitly defined input or output neurons.
Direct recurrence. Feedforward network is expanded by connecting a neuron j to itself, with the weights of these connections being referred to as \( w_{jj} \).
As a result, neurons can inhibit or strengthen themselves, depending on the sign of \( w_{jj} \), in order to avoid or to reach their activation limits.
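A minimal sketch of direct recurrence (all parameter values are illustrative): the neuron's own previous output is fed back into its net input through the self-weight \( w_{jj} \), so a positive weight strengthens the neuron over successive steps.

```python
import numpy as np

def step(prev_output, external_input, w_jj):
    """One update of a single neuron with a direct recurrence w_jj."""
    net = external_input + w_jj * prev_output  # self-connection enters the net input
    return np.tanh(net)                        # activation function

# With a positive self-weight the output is driven higher each step
# than the purely feedforward response tanh(0.5) would be.
o = 0.0
for _ in range(10):
    o = step(o, external_input=0.5, w_jj=0.8)
```

A negative `w_jj` would instead pull the output back towards zero, i.e. the neuron inhibits itself.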
Indirect recurrence. Network is based on a feedforward network, with additional connections between neurons and their preceding layer being allowed.
A laterally recurrent network permits connections within one layer.
Each neuron often inhibits the other neurons of the layer and strengthens itself. As a result only the strongest neuron becomes active (winner-takes-all scheme).
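The winner-takes-all behaviour can be sketched numerically (the self-excitation and inhibition strengths below are assumptions chosen for illustration): each neuron strengthens itself and is inhibited by the summed activity of the others, and iterating this leaves only the strongest neuron active.

```python
import numpy as np

def winner_takes_all(activations, self_excite=1.2, inhibit=0.4, steps=50):
    """Lateral recurrence: self-excitation plus inhibition from all other neurons."""
    a = np.array(activations, dtype=float)
    for _ in range(steps):
        total = a.sum()
        # each neuron excites itself and is inhibited by every other neuron
        a = a * self_excite - inhibit * (total - a)
        a = np.clip(a, 0.0, 1.0)  # keep activations in [0, 1]
    return a

result = winner_takes_all([0.3, 0.5, 0.2])  # only the strongest neuron survives
```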
Feedforward network
- One input layer, one output layer and one or more processing layers which are invisible from the outside (also called hidden layers)
- Connections are only permitted to neurons of the following layer
Completely linked layers are layers in which every neuron is connected to all neurons of the previous layer
Completely linked network
- Permits connections between all neurons, except for direct recurrences
- Connections must be symmetric
Every neuron is allowed to be connected to every other neuron; as a result, every neuron can become an input neuron, and clearly defined layers no longer exist.
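The two constraints of a completely linked network can be expressed directly on the weight matrix (a hypothetical 4-neuron example): symmetry, \( w_{ij} = w_{ji} \), and no direct recurrences, \( w_{jj} = 0 \).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # illustrative network size

A = rng.normal(size=(n, n))
W = (A + A.T) / 2       # enforce symmetric connections: w_ij == w_ji
np.fill_diagonal(W, 0)  # forbid direct recurrences: w_jj == 0
```

Because every off-diagonal entry may be nonzero, any neuron can receive input from any other, which is exactly why no layer structure remains.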
In many network paradigms neurons have a threshold value that indicates when a neuron becomes active. Thus, the threshold value is an activation function parameter of a neuron. From the biological point of view this sounds most plausible, but it is complicated to access the activation function at runtime in order to train the threshold value.
Threshold values \( \theta_{j_1}, \ldots, \theta_{j_n} \) for neurons \( j_1, \ldots, j_n \) can also be realized as connection weights of a continuously firing neuron.
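Written out, the equivalence is a simple rearrangement: a bias neuron with constant output 1 connected to neuron \( j \) with weight \( -\theta_j \) moves the threshold out of the activation function and into the net input,

\[
f_{\text{act}}\big(\text{net}_j - \theta_j\big)
\quad\text{with}\quad
\text{net}_j = \sum_i w_{i,j}\, o_i
\]

becomes

\[
f_{\text{act}}\Big(\sum_i w_{i,j}\, o_i + w_{\text{BIAS},j} \cdot 1\Big)
\quad\text{with}\quad
w_{\text{BIAS},j} = -\theta_j .
\]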
A bias neuron is a neuron whose output value is always 1. It is used to represent neuron biases as connection weights, which enables any weight-training algorithm to train the biases at the same time.
Instead of including the threshold value in the activation function, it is now included in the propagation function.
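A small numeric check of this equivalence (all weights and inputs are illustrative): a neuron whose threshold sits inside the activation function produces exactly the same output as a threshold-free neuron that receives an extra input of 1 through the weight \( -\theta \).

```python
import numpy as np

theta = 0.7                          # threshold value of the neuron
w = np.array([0.5, -0.3, 0.9])       # input weights
x = np.array([1.0, 0.2, 0.4])        # outputs of the preceding neurons

# threshold handled inside the activation function
with_threshold = np.tanh(w @ x - theta)

# threshold handled as a bias-neuron weight inside the propagation function
w_bias = np.append(w, -theta)        # extra connection weight -theta
x_bias = np.append(x, 1.0)           # the bias neuron always outputs 1
with_bias_neuron = np.tanh(w_bias @ x_bias)
```

Since the bias is now an ordinary connection weight, any weight-training algorithm adjusts it for free.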
Figure: two equivalent neural networks, one without a bias neuron on the left, one with a bias neuron on the right.