Task 9 Response lecture: Man and machine
Task 1
documentary
CRUM
LOOK IT UP AGAIN IN LECTURE RECORDING
Paul Thagard & CRUM
computational power
psychological, neurological, cognitive etc. plausibility
Marr & tri-level hypothesis
computational level
--> what problem is solved?
algorithmic / procedural level
--> how are they solved?
--> what's the programming language?
Implementational level
--> focus of neuropsychologists
Natural computation
Biological data compression
look up (efficiency?)
--> e.g. visual processing in retina (LOOK UP AGAIN)
enables larger behavioral repertoire
faster decisions
reliable transmission
still susceptible to noise so redundancy needed!!
look up (Redundancy?)
--> e.g. language (words with masked letter could mean many things)
redundancy is IMPORTANT because it REDUCES NOISE !!! (minimizes error?? LOOK UP AGAIN)
redundancy coding
--> e.g. colored lines in subway stations marking where the train comes, or bumps in the sidewalk at roads (and train stations) for blind people
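The redundancy-reduces-noise idea can be sketched with the classic repeat-and-vote trick (a toy example of my own, not from the lecture: channel, flip probability, and message are all assumed numbers):

```python
import random

# Toy sketch of redundancy coding (my own numbers): send every bit three
# times over a noisy channel and take a majority vote -- the redundant
# copies let most single-bit flips be corrected, so noise is reduced.

random.seed(0)
p_flip = 0.1                       # chance the channel flips a bit

def noisy(bit):
    return bit ^ (random.random() < p_flip)

message = [1, 0, 1, 1, 0] * 200    # 1000 bits to transmit

# no redundancy: every flipped bit is an error
plain_errors = sum(noisy(b) != b for b in message)

# triple redundancy + majority vote
coded_errors = 0
for b in message:
    votes = [noisy(b) for _ in range(3)]
    coded_errors += (sum(votes) >= 2) != b

print(plain_errors, coded_errors)  # coded errors are far fewer
```

With these numbers roughly 10% of plain bits arrive flipped, while the voted version only fails when two of the three copies flip.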
IBM's Watson is a good example
Task 2 Brain-Computer Interfaces
ratbots
classical architectures
rule-based expert system (like ACT-R)
serial processing
sensitive to damage because of local representation (Turing machine)
inflexible digital processing
mostly specifically programmed for 1 specific task
Connectionism
parallel processing
graceful degradation
flexible
discovery of general learning rules
biologically plausible
network architecture
auto associator
set of interconnected nodes
(so it is a recurrent network itself, and recurrence is needed for auto-associators :3!!)
--> Hebbian learning or delta rule
pattern associator
Two layer
one input layer one output layer
Three layer
--> input, hidden, output layer
backpropagation (version of delta rule )
--> computes error of the hidden nodes
--> once know delta rule can be applied on lower nodes
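A minimal numeric sketch of that idea (network sizes and numbers are my own toy choices, not from the lecture): backpropagation computes a delta for the hidden nodes by sending the output deltas back through the weights, and once that error is known the delta rule can be applied to the lower weights too.

```python
import numpy as np

# Toy 3-layer feedforward net (my own assumed sizes/numbers): the delta
# rule needs an error per node, and backpropagation supplies that error
# for the hidden nodes by passing the output delta back through W2.

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.0])             # input pattern
target = np.array([1.0])             # desired output
W1 = rng.normal(0.0, 0.5, (2, 2))    # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (2, 1))    # hidden -> output weights
lr = 0.5

for _ in range(2000):
    h = sigmoid(x @ W1)                              # hidden activations
    y = sigmoid(h @ W2)                              # output activation
    delta_out = (target - y) * y * (1 - y)           # delta rule at output
    delta_hidden = (delta_out @ W2.T) * h * (1 - h)  # backpropagated error
    W2 += lr * np.outer(h, delta_out)                # adjust upper weights
    W1 += lr * np.outer(x, delta_hidden)             # adjust lower weights

print(round(float(y[0]), 2))   # output has moved close to the target
```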
multi-layer feedforward
recurrent networks
feeds info from the output nodes back to the input nodes, which then adjusts the connections between the input nodes and the output nodes (backpropagation / delta rule <-- which?)
Competition, lateral inhibition (plus one more)
look up parallel constraint satisfaction again !!
--> activity is sent back and forth in the network, adapting the network's state, until at some point there is a "winner activation" of the network, which then becomes the outcome (was an example with names in the book :p)
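That settling process can be sketched with a tiny winner-take-all loop (my own assumed numbers, not the names example from the book): units excite themselves and laterally inhibit each other until only one activation survives.

```python
# Toy winner-take-all settling sketch (my own numbers): each unit keeps
# part of its own activity (self-excitation) and is inhibited by the
# total activity of the others; passing activity around repeatedly
# drives the network to a single "winner activation".

acts = [0.6, 0.5, 0.4]        # initial support for three candidates
for _ in range(50):           # let the network settle
    total = sum(acts)
    # 1.1 * a = self-excitation, 0.4 * (total - a) = lateral inhibition
    acts = [min(1.0, max(0.0, 1.1 * a - 0.4 * (total - a))) for a in acts]

print(acts)   # [1.0, 0.0, 0.0] -- the initially strongest unit wins
```

The outcome depends only on which unit had the most initial support; the inhibition constants are assumptions picked to make the dynamics visible.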
learning
vs training
learning
unsupervised
--> neo-Hebbian
Supervised
--> delta rule, and something else
training
Supervised
unsupervised
activation
linear
Hopfield network
--> links connectionism with dynamic systems
--> local minima and global minimum (driven to those through parallel constraints :3 )
---> task 4 creativity :D ??
Energy
local minima
global minima
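The energy-minimum idea can be sketched with a tiny Hopfield net (a toy example of my own, pattern and size assumed): Hebbian weights store a pattern, and asynchronous updates drive the energy downhill until the state settles in a minimum.

```python
import numpy as np

# Toy Hopfield sketch (my own 6-unit example): store one pattern with
# Hebbian weights, then let asynchronous updates drive the energy
# E = -1/2 * s.W.s downhill until the net settles in a minimum --
# here the stored pattern, acting as an attractor.

pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                     # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

state = np.array([1, -1, 1, 1, -1, -1])    # noisy version of the pattern
for _ in range(10):                        # asynchronous updates
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state.tolist(), float(energy(state)))   # recovers the stored pattern
```

Each single-unit flip can only lower (or keep) the energy, which is exactly the "driven to local/global minima through parallel constraints" story.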
Weight space vs activation space
weight space
Hebbian learning
--> put all the problems with all the types
neo-Hebbian learning
differential Hebbian learning
advantage = only local info needed, no error calculation needed --> thus unsupervised learning :3 !!
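The locality point can be shown in two lines (toy numbers of my own): the Hebb rule `delta_w = lr * pre * post` only uses the two activations at the connection, so no error signal or teacher is involved.

```python
import numpy as np

# Plain Hebbian sketch (toy numbers of my own): the weight change
# delta_w = lr * pre * post uses only locally available information --
# the pre- and postsynaptic activations -- so no error calculation and
# no teacher is needed, i.e. unsupervised learning.

lr = 0.1
pre  = np.array([1.0, 0.0, 1.0])    # presynaptic (input) activations
post = np.array([0.0, 1.0])         # postsynaptic (output) activations

W = np.zeros((3, 2))
for _ in range(5):                  # repeated co-activation...
    W += lr * np.outer(pre, post)   # ...keeps strengthening those weights

print(W)   # only the co-active pre/post pairs grew
```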
delta rule
error needed (thus backpropagation / recurrent network needed)
---> thus SUPERVISED learning BECAUSE a signal that there was an error is NEEDED, and based on the error the weights are adjusted :3 !!
2 layer network, gradient descent
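For contrast with the Hebb rule, a minimal delta-rule sketch (two-layer, input straight to output; toy numbers of my own): the weight step follows the error gradient, so a teacher signal (the target) is required, which is what makes it supervised.

```python
# Minimal delta-rule sketch (my own toy numbers): a single linear output
# unit; each step moves the weights down the error gradient, and that
# step cannot be computed without the teacher's target -- supervised.

lr = 0.25
w = [0.0, 0.0]                 # weights of the output unit
x = [1.0, 1.0]                 # input pattern
target = 1.0                   # teacher signal

for _ in range(20):
    y = sum(wi * xi for wi, xi in zip(w, x))            # net output
    error = target - y                                  # needs the teacher
    w = [wi + lr * error * xi for wi, xi in zip(w, x)]  # gradient-descent step

print(round(y, 3))   # output has converged onto the target
```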
activation space
DIFFERENCE BETWEEN RULE-BASED AND CONNECTIONIST MODELS !!
---> rule-based !!! always a distinction in modules e.g. working memory, declarative and procedural memory
----> in connectionist models ALL THAT DOESN'T EXIST !! EVERYTHING IS DETERMINED BY THE WEIGHTS BETWEEN THE UNITS :D!!!!
Task 3 :D
catastrophic interference + one trial learning
--> connections change so that one could be forgotten :P?? wtf
One trial learning
delta rule takes many trials to learn
but humans can learn just after one trial
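Catastrophic interference is easy to demo (patterns and numbers are my own toy choices): a delta-rule net learns association A over many trials, then trains only on an overlapping association B, and the shared weights get overwritten.

```python
import numpy as np

# Toy catastrophic-interference demo (my own patterns): the same weights
# serve both associations, so training only on B drags them away from
# the values that encoded A -- A is partly forgotten.

lr = 0.5
W = np.zeros((2, 1))
A_in, A_out = np.array([1.0, 0.0]), 1.0
B_in, B_out = np.array([1.0, 1.0]), 0.0   # overlaps with A on unit 0

def train(x, target, trials):
    global W
    for _ in range(trials):
        y = (x @ W).item()
        W += lr * (target - y) * x[:, None]   # delta rule, many trials

train(A_in, A_out, 50)
recall_A_before = (A_in @ W).item()
train(B_in, B_out, 50)                        # learning B ...
recall_A_after = (A_in @ W).item()            # ... damages recall of A

print(round(recall_A_before, 2), round(recall_A_after, 2))   # 1.0 0.5
```

The same loop also shows the one-trial-learning complaint: the delta rule needed many trials to get A to 1.0 in the first place.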
Hippocampus model
--> explains hippocampus learning :3 !!
EVERY PART THERE USES HEBBIAN LEARNING !!! EVEN THE AUTO-ASSOCIATOR :D!!
what's "sparse symbolical encoding" or something
task 4 Creativity
defocused attention
simulated annealing
getting out of local minima
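A sketch of how annealing gets out of a local minimum (the double-well function and all numbers are my own assumptions, not from the task): uphill moves are accepted with probability exp(-delta/T), so while the temperature is high the search can hop over the barrier.

```python
import math, random

# Toy simulated-annealing sketch (energy function of my own): two
# minima of unequal depth -- a local one near x = 0.95 and a global
# one near x = -1.05. Accepting uphill moves with prob exp(-delta/T)
# lets the search escape the local minimum while T is still high.

random.seed(1)

def f(x):
    return x**4 - 2 * x**2 + 0.4 * x   # double-well energy

x = 0.95                 # start INSIDE the local minimum
T = 2.0                  # initial temperature
while T > 1e-3:
    x_new = x + random.uniform(-0.5, 0.5)
    delta = f(x_new) - f(x)
    # downhill always accepted; uphill sometimes, while T is high
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = x_new
    T *= 0.99            # slowly cool down

print(round(x, 2), round(f(x), 2))
```

Plain hill climbing started at 0.95 would stay stuck; with this cooling schedule the search usually ends near the deeper well.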
Bodens model
Example: chess values
You need an evaluation function
Iterative search
--> looking for a move, based on the current position, that increases the value (so moving upwards on the evaluation function)
---> same as "hill climbing" in ACT
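The iterative search above can be sketched as plain hill climbing (the one-dimensional evaluation function is my own stand-in, not the chess one): from the current state, take the neighboring "move" that most increases the value, and stop when nothing improves.

```python
# Toy hill-climbing sketch (my own evaluation function): repeatedly
# pick the neighboring move that increases the evaluation the most,
# i.e. keep moving upwards on the evaluation function.

def evaluate(x):
    return -(x - 7) ** 2           # single peak at x = 7

state = 0
while True:
    neighbors = [state - 1, state + 1]      # the possible "moves"
    best = max(neighbors, key=evaluate)
    if evaluate(best) <= evaluate(state):   # no uphill move left
        break
    state = best                            # climb

print(state)   # settles at 7, the peak of the evaluation function
```

With a single-peaked function this always finds the top; with multiple peaks it gets stuck, which is exactly why the creativity task brings in simulated annealing.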
Klondike Analogy
oasis problem
Task 5 ACT-R (rational)
--> rule based
--> production rules
--> through rational analysis = adapted to environment
rule-based (if x then y)
production rules
--> only one can fire at a given time (or something)
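A toy production-cycle sketch (my own mini example, NOT real ACT-R syntax; conflict resolution is simplified to "first match" where real ACT-R uses utilities): rules are condition/action pairs, all conditions are matched each cycle, but only one production fires.

```python
# Toy production-system sketch (my own example, not real ACT-R code):
# productions are (name, condition, action) triples over a working
# memory; matching can happen in parallel, but per cycle exactly one
# production -- here simply the first match -- is allowed to fire.

memory = {"goal": "count", "number": 3}

productions = [
    ("count-up", lambda m: m["goal"] == "count" and m["number"] < 5,
                 lambda m: m.update(number=m["number"] + 1)),
    ("stop",     lambda m: m["goal"] == "count" and m["number"] >= 5,
                 lambda m: m.update(goal="done")),
]

while memory["goal"] != "done":
    # match ALL conditions against working memory ...
    matching = [p for p in productions if p[1](memory)]
    # ... but only one production fires per cycle
    name, _, action = matching[0]
    action(memory)

print(memory)   # {'goal': 'done', 'number': 5}
```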
production compilation
--> SOAR model during chunking
History
Xstar (or something)
DIFFERENCE TO ACT R??
<-- KNOW IT !!
--> through rational analysis = adapted to environment
--> also can process information in parallel :D !! (BUT as with all rule-based systems, only one of all the rules (the one that best fits) can be executed at a given time)
based on human cognition
--> declarative learning
--> procedural learning
learning = acquiring new production rules
Chunks
goal buffer
--> through rational analysis = adapted to environment
--> also can process information in parallel :D !!
ACT-R Assumption space
---> SUPER IMPORTANT!!!!!
Performance
declarative vs procedural
symbolic vs subsymbolic
learning
declarative vs procedural
symbolic vs subsymbolic
"do counting" vs "do addition"
Task 6
Dynamic system (thagard)
Task 7 Emotions
emotion
uncanny valley
paro the robot
autistic robot
kismet robot
super well explained in the response lecture recording :3 !!
Discrete model (fear conditioning / learning)
dimensional model (kismet robot)
EMA model (the one i printed out :3!!)
--> emotions can be caused by System 1 (automatic) or system 2 (deliberate)
based on Appraisal theory
Task 8 Human factors (neuroergonomics etc etc)
--> basically about human-machine interaction :3 !!
function allocation (dough maker presentation ereader)
Swiss cheese model
--> catastrophe happens when all the holes in a Swiss cheese align haha xD
airplane crash in Canada example
Task 9 Collective intelligence
next-gen AI
embodied
collective
Adaptive
tamed
kill switches
firefighting robots
robots can have secrets (Google AI creating its own encryption)
robots developing their own language