ARE FACES SPECIAL?
B+B2 LECTURE 3
The special role of face processing
Faces are extremely important for our social lives, starting from the earliest days of our lives.
We have extensive experience with looking at and interpreting faces.
Even very young babies can imitate facial expressions produced by their caregivers.
The pivotal role of face information in daily life, together with our extensive experience with faces, has led researchers to propose that faces are a special stimulus class, processed differently from non-face objects.
Faces are perceived in a more holistic manner, with an emphasis on feature configurations rather than individual features.
Face processing is specifically affected by particular cortical lesions, suggesting specialised processing mechanisms in the brain.
Face Inversion Effects
Our face processing system seems to be strongly tuned to faces appearing in their typical configurations.
One indicator for this tuning is the face inversion effect:
Face recognition is much worse when faces are turned upside-down - much more so than for other objects.
Error rates are largest for inverted faces.
The Thatcher Effect
The specificity of face processing to typical face orientations is well illustrated by the Thatcher Illusion
(Thompson, 1980)
When a face is turned upside-down, processing deteriorates to the point that we fail to notice even gross distortions of individual features (e.g. inverted eyes and mouth)…
Originally demonstrated with Margaret Thatcher, but works with any face.
Parts and wholes for faces and objects
Faces seem to be processed holistically.
Less reliance on individual parts; more reliance on the configuration of the individual face parts.
Part-whole experiment (Tanaka & Farah, 1993)
Train people on two different types of stimuli.
Either give them a face that is normally arranged or give them a face that is disarranged (features in the wrong positions)
Learn these faces → Test items
Give them two full faces or two scrambled faces and change one of the features (e.g. the size of the nose)
Ask: “which of these two noses belongs to ‘Larry’?”
A useful task: it allows us to see how good people are at face matching and feature matching.
Results
If people learned the normally arranged face during training, they can be tested in two different conditions: either the isolated part or the full face.
People much better when they see the whole face again - e.g. “was Larry face A or B?”
Much worse on individual features.
Face processing relies on the whole face - less access to the individual features.
BUT… if trained on the scrambled face, the opposite pattern emerges.
They can't rely on holistic configuration in this case.
When scrambled faces are shown, features are better accessible, but discrimination for the full face is worse.
Processing based on parts versus the whole differs across object categories (Farah, 1991)
Face processing seems to rely on the spatial configuration of the whole face, while word processing relies on the combination of individual elements; objects are somewhere in the middle.
If this were true, deficits in face processing and word processing should rarely occur together.
Indeed, face agnosia (prosopagnosia) and word agnosia (alexia) each co-occur fairly often with object agnosia, but rarely with each other while sparing object recognition.
Evidence for two different processing modes: holistic and part-based.
Prosopagnosia
Face blindness
Results from lesions in the cortical face processing network.
The lesions often affect the fusiform face area (FFA), but they do not need to.
In addition to acquired prosopagnosia, there’s also a congenital form of prosopagnosia (tutorial)
Prosopagnosia patients have specific face recognition deficits, while other object categories are often largely unimpaired.
Patient WJ cannot identify faces, but is essentially perfect for other object categories.
Interestingly, WJ, who is now a sheep farmer, is extremely good at discriminating sheep, both his own and unfamiliar ones.
This shows that his deficit isn't simply an inability to process fine detail.
Holistic processing deficits in Prosopagnosia
Prosopagnosia patients show no (or even inverted) inversion effects for faces, as if there was no holistic processing based on orientation.
Part-based processing?
Does that mean that holistic processing is face-specific?
Object Agnosia
Part-based deficits
Moscovitch et al. (1997)
Patient C.K. has object agnosia and dyslexia, but intact face processing.
C.K. can perceive faces, but not their individual components.
Can only use holistic configuration.
Neural mechanisms of face processing
Fusiform face area and the occipital face area
Regions involved in the perceptual analysis of faces - key for recognising who we have in front of us.
Anterior temporal lobe → Grandmother neurons
Fusiform gyrus → Mooney faces
Occipital face area → symmetry
Face analysis along the processing hierarchy
MEG results show a progression from more simple to complex face attributes, with identity emerging last
(Dobs et al., 2019)
When visual similarity is controlled for, face identity representations are only seen later during processing
(Ambrus et al., 2019)
Suggests that truly invariant representations of familiar face identity emerge only later during cortical processing.
Holistic processing in the brain
FFA responds more strongly to upright faces than to inverted faces, mirroring behavioural face inversion effects
(Yovel & Kanwisher, 2005)
FFA inversion effects may be related to perceptual face inversion effects.
Participants who have higher face inversion effects in the FFA also produce higher face inversion effects in behaviour.
There seems to be a mapping between the strength of the holistic processing mode in the FFA and how strongly these effects play out in behaviour.
When measured with EEG, face inversion impacts the face-selective N170 processing stage →
N170 key stage for perceptual face analysis
After stimulus onset, you get a negative deflection, and this deflection occurs across several different stimulus categories (e.g. faces, cars, furniture).
Consistently bigger for faces than for other objects.
Showing upright and inverted faces reveals a stronger negative N170 deflection for inverted faces than for upright faces.
As early as ~170 ms after seeing a face, there is a difference depending on whether it is in a typical upright configuration or inverted.
The N170 is a specific marker of face processing in the brain.
Further research
FFA engages in holistic processing.
By contrast, object-selective regions in the posterior fusiform gyrus respond more strongly to scrambled faces, highlighting the more part-based nature of object processing.
Together, these results suggest that face processing in the FFA is tuned to the holistic analysis of facial composition.
Supported by lesions in the FFA - deficit in holistic processing.
A cortical hierarchy for face processing
Bruce and Young (1986)
Proposed a cognitive model for face processing that consists of distinct processing stages.
First, structural encoding feeds into face recognition units, which in turn feed person identity nodes linked to memory representations.
Haxby et al. (2001)
Translated the model to neural processing.
There is a hierarchical mapping between brain regions along the
ventral visual stream
with postulated steps of perceptual face analysis and face recognition.
Structural encoding for faces roughly corresponds to encoding in the
occipital face area
→ early perceptual analysis of facial features.
Expression-independent analysis → recognition of face identity → activity in the
fusiform gyrus
(where more invariant aspects of faces are represented, potentially including features diagnostic of specific identities)
Anterior temporal cortex
→ neurons containing information about a person's identity / name → links visual representation and memory.
Perception of changeable features has been linked to the
superior temporal sulcus
(e.g. gaze, emotional expression)
Linked to two regions of the limbic system (
amygdala and insula
):
Amygdala
→ where facial expression is evaluated from an emotional standpoint.
Superior Temporal Sulcus (STS)
Supports expression recognition
To probe the role of the STS in face processing, TMS was applied while participants performed two types of face discrimination tasks.
In separate trials, participants were asked to either indicate whether two faces showed the same identity or whether two faces displayed the same emotion (discrimination task)
At the target site, TMS is applied to the STS, or to the vertex (top of the head) as a control.
STS should be responsible for the changeable expressions.
So stimulating the STS (thereby disrupting its activity) should not affect the identity task, but should affect the expression discrimination task.
Results
TMS to the superior temporal sulcus impairs expression discrimination, but not identity discrimination (compared to stimulating a vertex control region)
Lower accuracy for expression when right STS is stimulated with TMS.
Supports the dissociation between analysis of static and dynamic face attributes.
Face recognition without an Amygdala
Patient D.R. had the amygdala removed (Young et al., 1995)
D.R. can still recognise familiar faces learned before the operation, but is impaired in new face learning.
Role of the amygdala in face learning.
D.R. is particularly impaired in gaze and expression recognition, which are diagnostic of facial emotions.
Changeable face attributes → role of amygdala in this type of perception.
Amygdala is involved in acquiring face representations and analysing dynamic changes in faces.
Summary
Cortical network of regions that support different stages and aspects of face processing, with a broad division into
static
and
dynamic
face attributes.
Many of these mechanisms are not fully understood and continue to be investigated to understand how we perceive faces.
Face specificity versus expertise
One challenge for the idea of face specificity in cortical processing comes from studies of visual expertise.
Bird watchers - high levels of stimulus similarity (birds are difficult to distinguish between unless you are an expert)
Such studies find that face-selective cortical regions (primarily the FFA) are activated in visual experts (more so than in novices).
Not face-specific then?
Bird experts activate the FFA when seeing birds, and car experts activate the FFA when seeing cars.
This suggests that the FFA subserves fine-grained discrimination of expert categories - and faces seem to just be one of these expert categories.
The FFA activation of experts can be related to a different, more holistic mode of visual processing.
Indeed, in visual short-term memory tasks, car experts show inversion effects for cars akin to those observed for faces.
Both car novices and car experts can hold more upright than inverted face images in STM.
But for car images: car experts hold more upright than inverted car images in STM, whereas car novices seem to retain more inverted images than correctly oriented ones.
Due to holistic mode of processing?
Alternative Explanation
Causality...
Perhaps car experts like cars because their FFA likes them
Tested by training people to become experts in one visual category.
Researchers trained participants on novel stimuli (“
greebles
”) that can be discriminated through fine-grained feature differences (e.g. the curvature of their features)
Classified at the family and individual level.
Greebles activate the FFA after participants were trained in extensive discrimination tasks.
This suggests that FFA activation can result from experience with a particular stimulus class that requires fine-grained discrimination at the centre of the visual field.
FFA reliably activated when experts look at the Greebles.
Suggests that experience drives responses in fusiform face area and not specifically faces
Similar effects were reported in people who have extensive experience discriminating Pokemon characters at the centre of their visual field.
These participants recruit similar regions of the ventral temporal cortex as the ones used for face processing.
But expertise with Pokemon also recruits different regions…
Strong activations, overlap with face processing to a certain extent, but more lateral regions on the ventral temporal surface are also activated.
Not identical activation to face processing (defence for face specificity)
Face processing without face experience
In animals...
Monkeys raised without ever seeing faces, reared alone.
Researchers wore masks, so the monkeys had never seen a face until the experiment.
Ethics - social deprivation, would be unethical to test this way in humans.
These monkeys showed reduced looking preferences towards faces
(Arcaro et al., 2017)
Face-deprived monkeys looked at faces as though they were any other object in the scene.
These monkeys also lack face-specific activations in their visual cortex (they have no “face patches” - regions considered the homologues of the FFA, OFA and STS in humans)
These findings suggest that experience is needed for face processing to develop.
In humans...
We can study humans (children aged 9–17) whose vision is restored after early-onset cataracts, a defect of the eye lens that leads to near blindness.
No face experience.
After vision is restored, face detection is poor, but gets better over time.
Seems that some visual experience with the face category is needed - not innate.
But can be learned very quickly (in a matter of months)
This suggests that face discrimination needs to be learned, but can be learned at any age, and rapidly.
No sensitive period for learning.