Task 2 Man and the Machine (BCi and connectionism)
Neural nets / Connectionism
Unit
--> basically same as neuron
--> transference of information / input
Receive excitatory / inhibitory input
if threshold reached fire and excite / inhibit other units / neurons Via AXON / LINK!!
--> strength of activation / output depends on strength of input (also weight of link!!) (see the sketch below, after the Link node)
Strength of connection / Weight
--> of link determines influence of one Neuron/Unit on another
Bias Unit:
--> Receives no input in the network
--> its activity is always on / +1
--> can have a positive or negative weight on other units
---> weight changes just like any other weight during learning
--> represents base firing rate of neuron/unit in absence of stimulation (if excitatory connection!!)
--> represents higher threshold of activation of neuron/unit in absence of stimulation (if inhibitory connection!!)
Link
--> basically same as axon
Excitatory or Inhibitory
One way = From Neuron A to B only
--> asymmetric !!
Both ways = Neuron A to B AND Neuron B to A
---> Symmetric !!!
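A minimal sketch (my own Python illustration, not from the lecture) of one unit as described in the Unit / Bias Unit / Link nodes above: it sums its weighted excitatory/inhibitory inputs plus an always-active bias, and fires only if the threshold is reached. All names and numbers here are made up:

```python
# Sketch of one unit: sum the weighted inputs plus the always-active bias,
# fire (output 1) only if the summed activation reaches the threshold.
def unit_activation(inputs, weights, bias_weight, threshold=0.0):
    # positive weights = excitatory links, negative weights = inhibitory links
    net = sum(a * w for a, w in zip(inputs, weights))
    net += 1.0 * bias_weight              # bias unit is always active (+1)
    return 1 if net >= threshold else 0

# two excitatory links, one inhibitory link, small negative bias weight
print(unit_activation([1, 1, 1], [0.6, 0.5, -0.4], bias_weight=-0.2))  # fires: 1
print(unit_activation([0, 1, 1], [0.6, 0.5, -0.4], bias_weight=-0.2))  # does not fire: 0
```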
Learning
Happens by changing the strength / weight of the connection / links between the neurons / UNITS !!
IF neuron or unit did not fire even though it should have (kinda like conditioning in Purkinje cells hihi :D!), it can be corrected by for example increasing weight / input from excitatory neurons/units and decreasing weight / input from inhibitory neurons/units !
OMG exactly like conditioning in cerebellum with Purkinje cells and error detection in inferior olives and climbing fibers !!
change in weights will go on for as long as there is error in the prediction!!
--> as the error of unit activation or non-activation reduces, so will the weight change, and it will cease once there is no error anymore !!
This is called
Delta Rule
!!
--> error = Activity desired - Activity obtained
--> = Ai (desired) - Ai (obtained)
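A tiny sketch of the delta rule in Python (my own illustration, hypothetical numbers): the weight change is driven by the error, desired minus obtained activity, and it stops once the error is gone:

```python
# Delta rule sketch: the weight change is proportional to the error
# (desired activity - obtained activity); it stops once the error is zero.
def delta_rule_step(weights, inputs, desired, learning_rate=0.1):
    obtained = sum(a * w for a, w in zip(inputs, weights))
    error = desired - obtained                  # Ai(desired) - Ai(obtained)
    return [w + learning_rate * error * a for w, a in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):                             # weights keep changing while error remains
    weights = delta_rule_step(weights, inputs=[1, 1], desired=1.0)
print(weights)                                  # both weights approach 0.5, error approaches 0
```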
Representations
Localised
Grandmother cells (Hubel & Wiesel)
--> there's a neuron that just fires for the visual representation of Sarah Jessica Parker lol xD
Limitations:
Distributed
Synchrony
--> each independent unit does not know whether or not it's a cat or dog
BUT --> the overall pattern of activation of distributed units/neurons in the brain at the same time represents that it's a cat or a dog / triggers recognition or representation of a cat or dog !!
Advantages:
--> Immune to damage
--> less noise
--> retrieval by content
--> that's pattern association :p !! (--> explain prototypes (learning and memory))
--> all things that localised models of cognitive networks / AI are bad at !
Limitations:
Whole knowledge representation of a cat =
made up of parts
--> these parts = distributed over many units that are linked together
--> processing of the stimulus cat happens in parallel (at the same time) in each of these units
--> parallel processing
The decision that what you're looking at is a cat = consensus / agreement of all the calculations of all the units constituting the network "cat"
--> in other words if enough units in network cat = active at the same time --> it's a cat !!
Advantages in depth:
--> immune to damage: cause the average activation of the network is what gives the decision, so damage to a neuron or two doesn't change anything (very slightly less accurate maybe but that's all)
--> called graceful degradation!!
cause with damage it doesn't fail, just accuracy might decrease a bit
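A toy sketch (my own made-up numbers) of graceful degradation: the "cat" decision rests on the average activation of many units, so lesioning one unit barely moves the result:

```python
# Graceful degradation sketch: the decision rests on the average activation of
# many units, so lesioning a single unit barely changes the outcome.
cat_units = [0.9, 0.8, 0.85, 0.9, 0.7, 0.95, 0.8, 0.9]   # distributed "cat" pattern

def is_cat(activations, criterion=0.5):
    return sum(activations) / len(activations) >= criterion

print(is_cat(cat_units))        # True
damaged = list(cat_units)
damaged[3] = 0.0                # damage one unit
print(is_cat(damaged))          # still True, the network does not fail outright
```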
feedforward
= information moves upward (towards output) through the network :3 !!
Pattern association
---> supervised !!!
A pattern associator is presented with pairs of patterns. If learning is successful then the network will subsequently recall one of the patterns at output when the other is presented at input
After training, a pattern associator can also respond to novel inputs, generalising from its experience with similar patterns.
Pattern associators are tolerant of noisy input and resistant to internal damage.
They are capable of extracting the central tendency or prototype from a set of similar examples.
recall patterns don't exactly match the learning pattern because multiple associations can be stored on the same weight matrix
--> for a simple matrix like we are using there's a limit as to how many associations can be stored on one :o !!
Limitations:
--> retroactive interference
weights in the matrix get so strengthened that input will always lead to output !! same issue as with Hebbian learning :3!!
--> it will JUST fire for everything :D!!
HIDDEN LAYER INFO FROM THE PRESENTATION ABOUT CONNECTIONISM !!!
can generalise during recall!!
--> That is, if a recall cue is similar to a pattern that has been learnt already, a pattern associator will produce a similar response to the new pattern as it would to the old
---> OMG SO IMPORTANT! page 61 McLeod !!!!!
parallel processing
--> pattern associator performs parallel computation in two senses.
---> AWESOME CONNECTION WITH ACT-R rule based :D !!
One is that for a single neuron, the separate contributions of the activity of each axon multiplied by the synaptic weight are computed in parallel and added simultaneously.
The second is that this is performed in parallel for all neurons in the network.
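A minimal pattern associator sketch in Python (my own illustration along the lines of the description above, not McLeod's exact network): Hebbian learning stores an input-output pair in a weight matrix, and at recall the stored output comes back, also for a similar noisy cue:

```python
# Pattern associator sketch: learn an input->output pair by Hebbian updates
# (weight += input * output), recall by summing input * weight for each output unit.
def train(weights, input_pat, output_pat, lr=1.0):
    for i, a_in in enumerate(input_pat):
        for j, a_out in enumerate(output_pat):
            weights[i][j] += lr * a_in * a_out
    return weights

def recall(weights, input_pat, threshold=1.0):
    n_out = len(weights[0])
    sums = [sum(input_pat[i] * weights[i][j] for i in range(len(input_pat)))
            for j in range(n_out)]
    return [1 if s >= threshold else 0 for s in sums]

W = [[0.0, 0.0] for _ in range(4)]               # 4 input units, 2 output units
W = train(W, [1, 0, 1, 0], [1, 0])               # store one pattern pair
print(recall(W, [1, 0, 1, 0]))                   # -> [1, 0], the stored output
print(recall(W, [1, 0, 1, 1]))                   # noisy cue, still recalls [1, 0]
```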
Competitive networks
---> unsupervised !!!
----> cause no feedback about correctness is necessary, cause it's purely based on what fires together wires together :D !!
--> better than Hebbian learning cause weights can't infinitely increase, because as some weights are strengthened others are weakened :D !!
When an input pattern is presented to a competitive network, the output units compete with each other to determine which has the largest response.
--> unit with largest response = winner unit for that pattern
connections to the winning output unit from the input units which were active in that pattern are strengthened and those from input units which were inactive are weakened.
When this learning algorithm has been applied to the different winning units following a range of input patterns, the network will come to categorise input patterns into groups, with one output unit firing in response to each.
Competitive learning is unsupervised. There is no external teacher signal which knows in advance what categorisation is appropriate.
The network finds a categorisation for itself based on the similarity between input patterns and the number of output units available.
A competitive network presented with patterns of letters can come to categorise them on the basis of features, individual letter values, letter position or letter combinations as the number of output units is changed.
Weight adjustment happens based on this formula:
ΔW = E * (A - W)
--> E = learning rate (which is given!)
--> A = activity of node
--> W = weight of connection between nodes BEFORE this activation
Example (McLeod p130)
input activity pattern = "0 1 1"
first unit activity of = 0
--> so connection weakens
--> connection to output unit was 0.2
--> based on formula and that E is 0.5
--> 0.5 *(0 - 0.2) = - 0.1
--> connection gets reduced by 0.1 --> new connection weight = 0.1
second unit activity of = 1
--> previous weight 0.3
--> formula with E=0.5
---> E * (A - W)
---> 0.5 * (1 - 0.3) = 0.35
--> thus weight gets increased by 0.35 = new weight 0.65
(--> 1 for A cause the activity pattern was "0 1 1"; if it was "0 2 2" then A would have been 2, logically !!)
third unit activity of = 1
--> previous weight 0.5
--> formula with E=0.5
---> E * (A - W)
---> 0.5 * (1 - 0.5) = 0.25
--> thus weight gets increased by 0.25 = new weight 0.75
(--> 1 for A cause the activity pattern was "0 1 1"; if it was "0 2 2" then A would have been 2, logically !!)
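The same worked example in a few lines of Python (my own sketch, just reproducing the numbers above): the winner's incoming weights are updated element-wise with ΔW = E * (A - W):

```python
# Competitive learning update for the winning unit, dW = E * (A - W),
# reproducing the worked example above (input pattern "0 1 1", E = 0.5).
def update_winner_weights(weights, activities, E=0.5):
    return [w + E * (a - w) for w, a in zip(weights, activities)]

old_weights = [0.2, 0.3, 0.5]        # connections from the three input units
pattern     = [0,   1,   1]          # input activity "0 1 1"
new_weights = update_winner_weights(old_weights, pattern)
print([round(w, 2) for w in new_weights])   # -> [0.1, 0.65, 0.75]
```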
Category formation in vector 4 (implicit generalisation)
Orthogonalisation of vectors 3 and 4 (their differences increased)
--> see lecture slides connectionism :D !!!
competitive learning VS pattern associator :D !!
---> competitive network = unsupervised (doesn't require an external teacher / target pattern to learn)
pattern associator = supervised because it needs external teaching, e.g. a provided target pattern, to learn the association the first time
Main advantages (page 133 McLeod)
--> 2 of them (below) !!! love em live em lol haha
They can remove redundancy from a set of inputs by allocating a single output neuron to represent a set of inputs which co-occur
They can produce outputs for different input patterns which are less correlated with each other than the inputs were. In the limit they can turn a set of partly correlated patterns at input into a set of orthogonal (i.e. uncorrelated) patterns at output
LATERAL INHIBITION OF THE OUTPUT UNITS !!
--> only the most active unit fires !!
--> leads to orthogonalisation
--> so makes the output units more different from each other !!
--> weights only change for the winning unit, making it less likely another cell will activate if the same activity pattern is presented again
--> leads to categorisation
--> so makes all winning output cells in a network more similar to each other, by adjusting their weights to become more active in the future if the same activation pattern is present !!
--> this puts them in kind of the same category :D!!
-----> all this reduces noise and thus leads to autoassociators being able to store more data / memories / patterns etc., as for example in the hippocampus model :D !!
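A small toy sketch (my own example, not from the slides) of winner-take-all via lateral inhibition: only the most active output unit fires, and only its weights move toward the input pattern (same ΔW = E * (A - W) update as above), so different output units come to categorise different input patterns:

```python
# Winner-take-all sketch: lateral inhibition means only the most active output
# unit fires, and only the winner's weights move toward the input pattern,
# so different output units come to categorise different input patterns.
def winner(weights, pattern):
    responses = [sum(p * w for p, w in zip(pattern, unit)) for unit in weights]
    return responses.index(max(responses))

def train_step(weights, pattern, E=0.5):
    win = winner(weights, pattern)
    weights[win] = [w + E * (p - w) for w, p in zip(weights[win], pattern)]

weights = [[0.6, 0.4, 0.1], [0.1, 0.4, 0.6]]     # two output units, three input units
for _ in range(5):
    train_step(weights, [1, 1, 0])               # one kind of input pattern
    train_step(weights, [0, 1, 1])               # another kind of input pattern
print(winner(weights, [1, 1, 0]))                # -> 0 (unit 0 has specialised on this pattern)
print(winner(weights, [0, 1, 1]))                # -> 1 (unit 1 has specialised on this one)
```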
Hebbian learning: read e-reader about neo-Hebbian and differential Hebbian learning (also in lecture slides connectionism) + DRIVE REINFORCEMENT THEORY !! :D !!