VISUAL SEARCH
P+C2 LECTURE 2
Classic Search Experiments
Treisman & Gelade (1980)
Procedure
Participants have to find a target amongst distractors
E.g. search for a red dot amongst green dots.
Next level - find a red horizontal line amongst distractors (red vertical lines, green vertical lines, and green horizontal lines).
Factors Varied
Number of distractors (size of the array)
Presence or absence of targets (positive or negative trials)
Targets
Search for a target distinct from the distractors by one feature, e.g. colour (find the red circle) -
feature searches
OR
The target had two conjoined features (find the horizontal red bar), the combination of which makes it distinct from the distractors -
conjunction searches
For 50% of the trials there will be no target.
So what you are searching for just isn't there.
Two types of search:
Basic search - looking for a target that is distinct from the distractors on one dimension only (e.g. colour)
Not really searching here; the target
pops out
Complex search - find two conjoined features (e.g. colour and orientation)
Results
Little increase in reaction time as the number of items goes up for a single-feature (target present) search - the target pops out.
Conjunction searching:
RT goes up when the target is absent and as the number of items increases.
Strong correlation between the number of items in the array and the reaction time.
Increase in RT for negative single-feature trials.
RT increases for positive (target present) and negative (target absent) conjunctive-feature trials, and increases further as display size grows.
Display size has a much bigger effect for conjunctive targets than for a single feature target.
How many milliseconds each item adds to the reaction time in the different conditions...
Conjunctive search, negative trials = 67.1 ms/item
Conjunctive search, positive trials = 28.7 ms/item
Single feature, positive trials = 3.1 ms/item
Single feature, negative trials = 25.1 ms/item
10 milliseconds per item is the cutoff for what we class as efficient searches, or what we call ‘pop out’ searches.
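A minimal sketch of that cutoff in Python, applied to the slope values quoted above (the condition labels and the loop are purely illustrative):

```python
# Slopes (ms added to RT per item) from the conditions above.
slopes_ms_per_item = {
    "conjunction, negative trials": 67.1,
    "conjunction, positive trials": 28.7,
    "single feature, positive trials": 3.1,
    "single feature, negative trials": 25.1,
}

POP_OUT_CUTOFF = 10.0  # ms/item: below this, the search counts as efficient

for condition, slope in slopes_ms_per_item.items():
    verdict = "efficient ('pop out')" if slope < POP_OUT_CUTOFF else "inefficient"
    print(f"{condition}: {slope} ms/item -> {verdict}")
```

Only the single-feature positive condition falls below the cutoff, matching the idea that only that search ‘pops out’.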
y = mx + c:
c = intercept/constant
Where the line crosses the Y-axis
Time it takes to do the initial processing and then to produce a response.
The time it takes to do everything, all the cognitive processes and physical processes that have nothing to do with the actual search.
The things that lead up to the search and the things that follow on from the search, but not including the actual search itself.
That’s a constant - the time it takes you to notice the stimulus and react to it is constant across all trials.
What is changing is how long it takes to search.
m = the slope of the line (how steep it is)
m reflects the time taken per item to perform the search.
The number of milliseconds that each item adds to the search.
The steeper the slope the longer the search by item.
If M is low, the search is faster and more efficient
x = the number of items (the variable)
So: y (reaction time) = m (milliseconds per item) × x (number of items) + c (duration of non-search processes)
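A hedged worked example of the equation in Python - the 67.1 ms/item slope is the negative conjunction value quoted above, but the 450 ms intercept is an assumed illustrative figure, not one from the lecture:

```python
def predicted_rt(n_items: int, slope_ms_per_item: float, intercept_ms: float) -> float:
    """y = m*x + c: reaction time as a linear function of display size."""
    return slope_ms_per_item * n_items + intercept_ms

# Negative conjunction search over a 20-item display.
# m = 67.1 ms/item comes from the results above;
# c = 450 ms is an ASSUMED value for the non-search processes.
rt = predicted_rt(n_items=20, slope_ms_per_item=67.1, intercept_ms=450.0)
print(f"Predicted RT: {rt:.0f} ms")  # 67.1 * 20 + 450 = 1792 ms
```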
Feature Integration Theory
Two processing stages:
Initial stage feature detection:
Is parallel, fast, efficient (under the 10 ms/item cutoff)
“Pre-attentive”
Single features “pop out” (e.g. colour, orientation, etc.)
Search is fast, efficient
Feature integration
Gluing the features together
Attentional focus (spotlight) is the glue
Serial processing needed
Search is slow, inefficient
Array size makes a difference.
Is FIT how visual search works?
Looking for a black X in an array of black Os and white Xs
But that is a conjunctive stimulus - colour and shape
Yet the black X pops out
Not readily explainable by Treisman’s theory.
There are some phenomena that the FIT model cannot explain.
Quad-modal distribution:
For the slope values, FIT predicts: mean(feature present) < mean(feature absent) < mean(conjunction present) < mean(conjunction absent)
But the observed slope data look different:
No bimodal distribution
No difference between feature and conjunction targets.
Wolfe's Model of Guided Search
First, information is extracted from the stimuli through input channels (bottom-up).
That information is then used top-down to control and organise visual search.
Two processes.
Treisman’s model, by contrast, is a very bottom-up approach - its serial search stage is unguided.
Top-down guidance makes the search faster - e.g. if the target is red, the brain can exclude the half of the elements in the scene that are green.
Stimulus --> Input channels --> Feature maps --> Activation map
The stimulus is filtered through broadly-tuned "categorical" channels.
The output produces feature maps, with activation based on local differences (bottom-up) and task demands (top-down).
A weighted sum of these activations forms the activation map.
In visual search, attention deploys limited-capacity resources in order of decreasing activation.
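A minimal sketch of that weighted-sum idea in Python - the feature maps, weights, and six-item display are toy values invented for illustration, not parameters from Wolfe's model:

```python
import numpy as np

# Toy display of 6 items. Each feature map holds bottom-up activation
# driven by local differences (values invented for illustration).
colour_map      = np.array([0.9, 0.1, 0.1, 0.8, 0.1, 0.2])
orientation_map = np.array([0.2, 0.7, 0.1, 0.6, 0.1, 0.1])

# Top-down task demands weight each feature map (illustrative weights).
w_colour, w_orientation = 1.0, 0.8

# The activation map is a weighted sum of the feature-map activations.
activation_map = w_colour * colour_map + w_orientation * orientation_map

# Attention deploys its limited-capacity resources to items
# in order of decreasing activation.
visit_order = np.argsort(-activation_map)
print("Attend to items in order:", visit_order.tolist())
```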
Guiding attributes (Wolfe, 2004)
Definitely:
Colour, Motion, Orientation, Size
Probably:
Depth, Luminance, Closure, Curvature
Maybe:
Lighting (shading), Glossiness/Lustre, Number, Aspect Ratio
Doubtful:
Novelty, Letter Identity
Non-feature:
Conjunctions, Category, Identity, 3D shape, Threat
Possible guiding attributes
Shading and Lustre - "Instagram face" = shiny, heavily contoured/shaded = pops out in a set of other faces
Top-down information:
Better at searching for items in naturalistic settings - can immediately exclude some options for where an item might be.
What drives search?
Bottom-up (elements of scene)
Salience
Differences amongst the targets and distractors.
Attributes
Elements that capture the deployment of attention.
Top-down
Scene properties
Values
What else matters?
Attentional Engagement Theory
Efficiency of the search (slope) is based on aspects of the task (Duncan and Humphreys, 1989)
Target/non-target similarity: how easy it is to notice the difference between the target and the non-targets.
Non-target/non-target similarity: how similar the distractors are to each other.
The surrounding non-target items affect search.
If they are heterogeneous (all different), the search becomes less efficient (longer/slower)
Even target stimuli defined on one dimension only - which should ‘pop out’ under Feature Integration Theory (FIT) - are affected by distractor heterogeneity.
The factor influences conjunction searches as well.
This factor is not considered in the FIT model.
Wolfe's Model: Two Pathways
Selective Pathway
Leads to recognition/identification of the stimulus's elements
Has the traditional bottleneck
Non-selective Pathway
Extracts basic semantic information from scene
“Gist” of the scene
Provides guidance to the selective pathway.
What else guides search?
Values
Anderson et al. (2011)
Training:
Participants had to say what the orientation of a line was inside a red or green circle.
They were rewarded more highly for red circles, less so for green (or vice versa)
So one colour was associated with high reward and the other was not.
They completed 1008 trials to build up the association between the colours and the chance of reward.
Test phase:
Participants looked for a unique shape amongst 6 objects.
Half of the time one of the distractors was red or green.
High value distractor
Low value distractor
Neither
Are participants more distracted by the presence of the high value vs. low value distractor?
Is attention captured by a high value distractor?
Results:
The high-value distractor increased RT relative to the distractor-absent condition.
Individual differences mattered…
Low working memory scores correlated with higher distraction.
Higher impulsivity scores correlated with higher distraction.