Chapter 8: Perception for Action

introduction

optic ataxia:
a visuomotor deficit; a deficit in the ability to produce appropriate movements in response to visual information

shows that appropriate action requires more than just accurate perception and proficient motor control; it also requires the ability to link the two, rapidly coordinating perceptual information with motor commands to produce actions

some argue that conscious awareness of objects and scenes is nothing but a side effect of the primary need to have the perceptual system guide behavior

usually think of perception as something that happens before action

recently, developed idea that perception is something that we actively do and that perception and other forms of action happen together

each adjustment in action produces new visual information to be taken into account

animals judge distance by making small actions that generate changing perceptions (like the changing perception due to motion parallax), which provide critical information about the layout of a scene
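to make the geometry concrete, here is a minimal sketch (not from the chapter) of how motion parallax could yield distance, assuming a simple pinhole-style geometry in which a small sideways movement shifts a feature's visual direction:

```python
import math

def depth_from_parallax(baseline_m: float, angular_shift_deg: float) -> float:
    """Estimate distance to a stationary point from motion parallax.

    A sideways head movement of `baseline_m` meters shifts the point's
    visual direction by `angular_shift_deg`; for a point straight ahead,
    Z = baseline / tan(shift). (Toy geometry; fixation effects ignored.)
    """
    return baseline_m / math.tan(math.radians(angular_shift_deg))

# A 5 cm head sway that shifts a feature by ~1.4 degrees implies it is ~2 m away.
print(depth_from_parallax(0.05, 1.4))  # ~2.0
```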

vision affects action

visual feedback:
visual information used to control an ongoing movement

involves complex and precise coordination of incoming visual information with outflowing muscle commands

time to process visual feedback

optic flow

prism adaptation

participants drew a horizontal line on a piece of paper wrapped around a rotating drum called a kymograph; paper visible through a tiny slit

errors made by eyes-open participants got worse as metronome got faster and participants’ speed (movements per minute) increased

speed-accuracy tradeoff:
as the speed of a movement increases, its accuracy decreases

errors in eyes-shut participants much larger at slower speeds, but advantage of eyes-open condition decreased as the speed of movement increased

ability to use visual feedback decreases as speed of movement increases

using visual feedback to guide movement involves processes that take some time, like perceiving the relative positions of the limb and the target of the movement, and adjusting the muscle commands to the hand to correct for any inaccuracy detected

estimates put visual feedback processing time between 190 and 260 msec, but since feedback from the beginning of a movement is probably not very useful for correcting errors near its end, the time needed to use visual feedback is probably somewhat less than that
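a toy simulation of why that delay matters; every number besides the 190-260 msec range is an illustrative assumption. A reach that finishes faster than the feedback delay is effectively open-loop, while slower reaches leave time for corrections:

```python
FEEDBACK_DELAY_MS = 200          # illustrative; the chapter cites 190-260 msec
STEP_MS = 10

def simulate_reach(duration_ms: int, target: float = 10.0) -> float:
    """Final error of a reach whose preplanned speed is 20% too slow;
    visually based corrections only kick in after the feedback delay."""
    steps = duration_ms // STEP_MS
    pos = 0.0
    velocity = 0.8 * target / steps          # imperfect motor plan
    for t in range(0, duration_ms, STEP_MS):
        if t >= FEEDBACK_DELAY_MS:           # seen error is now usable
            remaining = (duration_ms - t) // STEP_MS
            velocity += (target - pos) / (remaining + 1) * 0.5
        pos += velocity
    return abs(target - pos)

for dur in (150, 400, 800):                  # fast vs. slow movements
    print(f"{dur} ms reach -> final error {simulate_reach(dur):.2f}")
```

the fast reach ends before any feedback arrives, so its planning error goes uncorrected; the slower reaches shrink their errors, mirroring the eyes-open advantage at slow speeds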

use visual feedback to guide hand movements and to guide body as move through environment

optic flow comes into play whenever moving through a scene

also provides information to guide actions like navigation, including information about heading (direction in which you’re moving) and balance and body orientation

the point in the retinal image from which the flow streams outward is called the focus of expansion
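a minimal numpy sketch of that flow pattern (assuming pure forward translation, a pinhole camera, and a wall at fixed depth; all values illustrative): the flow vanishes at the focus of expansion, the image point corresponding to your heading:

```python
import numpy as np

# Optic flow for pure translation with a pinhole camera (no eye rotation):
# flow at image point (x, y) is ((x*Tz - f*Tx)/Z, (y*Tz - f*Ty)/Z),
# which is zero at the focus of expansion (f*Tx/Tz, f*Ty/Tz).
f = 1.0                                  # focal length (arbitrary units)
Tx, Ty, Tz = 0.2, 0.0, 1.0               # heading slightly rightward of center
Z = 5.0                                  # depth of scene points (flat wall)

xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
u = (xs * Tz - f * Tx) / Z
v = (ys * Tz - f * Ty) / Z

print("focus of expansion at", (f * Tx / Tz, f * Ty / Tz))   # (0.2, 0.0)
# Flow magnitude grows with distance from the FOE and is zero at it:
print(np.round(np.hypot(u, v), 3))
```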

if the eyes move while walking forward, the optic flow pattern becomes more complex, yet people can still judge their heading accurately

knowing your heading lets you determine whether it matches your goal, i.e., whether you're navigating correctly by moving in the direction you intend

when the optic flow pattern in a display is altered so that the focus of expansion is displaced from where it would naturally fall, participants change their path in a way that depends both on the location of the goal in the display and on the location of the focus of expansion within the pattern of optic flow

also helps maintain stable upright body orientation

been demonstrated experimentally using a "room" whose four walls and ceiling are suspended from above, so they can move while the floor stays still

if the room moves abruptly toward the person, the person experiences optic flow carrying the (incorrect) information that they are swaying toward the wall, and they compensate by staggering backward

visual feedback can help in adjusting ongoing movement or navigating through environment

visual information about the success or failure of a completed movement can help make subsequent movements more accurate, as illustrated by situations where people use prism goggles

at first, people make large errors in reaching for or pointing at objects and in executing more complex actions, but they adjust quickly and are soon almost as accurate as without the goggles

prism adaptation:
adaptation to the inaccurate visual feedback obtained when looking through a wedge prism; reveals a mechanism that helps us compensate for inaccurate movements

shows how quickly we can use visual information to help us recognize errors in our movements and correct future movements

shows that we can compensate for changes in the correspondence between the apparent visual location of an object and what must be done with the limbs

involves at least two components

a change in visually perceived directions

a change in the felt position of the arm

operates any time we make a slightly inaccurate hand movement for which we must compensate; serves to maintain properly calibrated eye-hand coordination, helping ensure we can successfully complete movements
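a minimal sketch of prism adaptation as trial-by-trial error correction; the prism shift and learning rate are illustrative assumptions, not values from the chapter:

```python
PRISM_SHIFT_DEG = 10.0     # apparent displacement through the wedge prism
LEARN_RATE = 0.4           # fraction of each error corrected on the next reach

bias = 0.0                 # internal recalibration of eye-hand coordination
for trial in range(1, 9): # prisms on: errors shrink trial by trial
    error = PRISM_SHIFT_DEG - bias          # reach lands where things *look*
    bias += LEARN_RATE * error              # visual feedback drives correction
    print(f"prisms on, trial {trial}: error {error:+.1f} deg")

# Prisms off: the leftover bias now causes an error in the opposite
# direction, the negative aftereffect that reveals genuine adaptation.
print(f"prisms off: error {-bias:+.1f} deg")
```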

action affects vision

action plans

action capabilities

visual processing in perihand space

action-specific perception

right before making a movement, you decide that you want to complete an action and form an action plan that includes a goal

need to take into account information about the object you'll be interacting with

planning an action such as picking up a cup increases a person’s sensitivity to the visual features important for completing the action

in an experiment involving conjunction search, participants were cued to two visual features (color and orientation) of a target object and then asked to either point at the object or grasp it

the fixation point indicated the color of the target block while a tone indicated its orientation

when the task was grasping, participants looked first at the object with the correct orientation significantly more often than when the task was pointing

planning to grasp an object sharpens people’s perception of object orientation because information about object orientation is important for accurate grasping but not for accurate pointing

may conclude that we actually see differently depending on the actions we plan to perform

in another experiment, participants viewed two nearly identical displays in alternation, trying to spot one difference in either a large object or a small object

when the large object changed, participants responded faster when they had been asked to squeeze a large handle than when they had been asked to pinch a small button

planning an action that involves grasping a small object primes the visual system to see small objects, and planning an action that involves grasping a large object primes the visual system to see large objects

ability to produce specific actions changes from moment to moment

visual system is biased to acquire the information needed for successful completion of a planned action, but behaves differently when no specific action is planned

findings show that changes in the ability to produce an action, even one you don't actually plan to perform, can change what is seen

when your hands are near an object, you could pick it up, making information about the nearby object potentially relevant

processing of visual information about nearby objects is different when the hands are near them than when the hands are far away, a difference that is thought to facilitate potential actions, the actions you could perform if you wanted to

perihand space:
the space near the hands; visual processing of objects there is special

experiment with WM, who was unable to detect stimuli presented in the left visual field because of damage to the right visual cortex

with hands down at the sides, WM detected information on the left side of the display very infrequently

with hands at the sides of the monitor, WM detected information on the left side of the display slightly more often because the left side was in perihand space

spatial frequency:
a measure of the amount of fine detail in visual images; images with high spatial frequency contain a lot of fine detail; images with low spatial frequency lack fine detail but capture the overall shapes of objects
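a minimal numpy sketch of grating stimuli like those in the experiment below (size, frequencies, and tilts are illustrative): spatial frequency is just the number of light-dark cycles across the patch:

```python
import numpy as np

def grating(size_px: int, cycles: float, tilt_deg: float) -> np.ndarray:
    """Sinusoidal luminance grating: `cycles` stripes across the patch,
    tilted `tilt_deg` from vertical. More cycles = higher spatial frequency."""
    coords = np.linspace(-0.5, 0.5, size_px)
    x, y = np.meshgrid(coords, coords)
    theta = np.radians(tilt_deg)
    ramp = x * np.cos(theta) + y * np.sin(theta)          # position across stripes
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles * ramp)  # luminance in [0, 1]

low_sf  = grating(256, cycles=2,  tilt_deg=+10)   # coarse stripes: overall shape
high_sf = grating(256, cycles=32, tilt_deg=-10)   # fine stripes: fine detail
```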

in an experiment, participants responded to striped patches called gratings that contained either high or low spatial frequency information

had to push a button indicating whether the stripes were tilted to the right or left

when stimuli were in perihand space, participants were better at seeing the tilt of the low-spatial-frequency stimuli

when stimuli were not in perihand space, participants were better at seeing the tilt of the high-spatial-frequency stimuli

since low-spatial-frequency information is typically more useful than high-spatial-frequency information when acting on something near the hands, these results confirm that visual perception of objects in perihand space differs in ways that can help you act on those objects

action-specific perception:
the theory that our perceptions are shaped by our ability to perform actions

rejects the idea that perception provides objective representations of the environment

take action capabilities into account

some support provided by research on perception of sizes and distances

when asked after a game what size the softball was, players who had batted better thought the ball was bigger than did those who hadn't batted well

participants, some wearing a heavy backpack and some not, were asked to estimate distances in the space in front of them

both groups tended to underestimate the actual distance, but those wearing the backpack judged distances to be greater

could be evolutionarily advantageous when deciding whether or not to undertake an action which could seriously deplete metabolic resources

possible that these experimental results were actually due to demand characteristics

demand characteristics:
characteristics of an experiment that might lead participants to respond differently than they otherwise would have

participants may have responded differently than they would have outside an experiment

possible that backpack participants judged distances as greater because they assumed that was what the experimenters expected; a group that wore the backpack but was told it contained monitoring equipment did not judge distances as greater, presumably because the plausible explanation for the backpack removed any hint about what the experimenters expected

neural basis of perception for action

the role of the parietal lobe in eye movements, reaching, and grasping

bimodal neurons and hand-centered receptive fields

handheld tool use

mirror neurons

dorsal pathway (where/how) travels through area MT and into parietal lobe

monkeys with lesions in parietal cortex were unable to carry out tasks related to knowing the location of objects

DF, who had damage to the temporal lobe, was unable to carry out tasks such as identifying the orientation of a slot but could perform actions that required coordinating visual information to guide movement (such as inserting a card into the slot)

the position of the parietal lobes makes them well suited for guiding action because they lie very near the motor regions of the frontal lobe

parietal lobe divided into two major parts, separated by the postcentral sulcus

anterior parietal lobe: contains a somatosensory representation of the body's surface; the brain's map for touch

posterior parietal lobe: contains a variety of functionally specialized regions that support perceptually guided action and other functions

major anatomical landmark is the intraparietal sulcus (IPS), which runs through the middle of the posterior parietal lobe and divides it into the superior parietal lobule and the inferior parietal lobule

lateral intraparietal area (LIP):
a region of the posterior parietal lobe in monkeys that is involved in the control of eye movements, including intended eye movements; an analogous region exists in the human brain

lies at about the midpoint of the IPS and contains neurons that respond to visual stimuli in their receptive fields

activity is sustained during delayed-saccade tasks, when the monkey holds in mind the location of a target spot to which it will be required to make an eye movement

when the target spot first appears in the receptive field of an LIP neuron, it evokes a strong response because LIP neurons are driven by visual stimuli in their receptive fields

when the target spot disappears, the response drops but stays well above baseline, reflecting the neuron's role in maintaining a representation of the goal of the intended eye movement

after the monkey makes the saccade to the remembered location, firing returns to baseline because the monkey is no longer maintaining the location of the neuron's receptive field as the goal of an intended eye movement
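a toy summary of that time course in code (the firing rates are made-up placeholders; only their ordering reflects the description above):

```python
def lip_rate(phase: str) -> float:
    """Illustrative LIP firing rates (spikes/s) across a delayed-saccade trial."""
    return {
        "baseline": 5.0,         # before the target appears
        "target_on": 60.0,       # target in the receptive field: strong response
        "delay": 25.0,           # target gone, goal held in mind: sustained
        "after_saccade": 5.0,    # eye movement made: back to baseline
    }[phase]

for phase in ("baseline", "target_on", "delay", "after_saccade"):
    print(f"{phase:>13}: {lip_rate(phase):.0f} spikes/s")
```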

human brain also has region analogous to LIP in monkeys that appears to be specific for representing the direction of an upcoming eye movement

region contains neurons whose activity reflects an intended eye movement

neurons respond both to spatially specific visual input and to spatially specific planned eye movements

medial intraparietal area (MIP):
a region of the posterior parietal lobe involved in planning reach movements

sometimes called parietal reach region; contains neurons that are active when planning a reach to a specific location

crucially involved in determining the direction of visually guided reaching movements

another motion-selective brain region called medial superior temporal region (MST)

in an experiment, monkeys moved a joystick to bring a cursor to a specific spot; sometimes the cursor moved in the same direction as the joystick, sometimes in the opposite direction

MST neurons responded selectively according to the direction of motion of the cursor on the screen, regardless of the direction of the joystick movement

MIP neurons responded according to the direction of the hand movement controlling the joystick

supports the idea that MIP neurons are selectively tuned for producing reaching movements in specific directions
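a minimal sketch of the dissociation logic with illustrative cosine tuning (nothing here is the recorded data): on "opposite" trials the cursor and the hand move in different directions, so a cursor-tuned (MST-like) unit and a hand-tuned (MIP-like) unit respond differently:

```python
import math

def cosine_tuning(direction_deg: float, preferred_deg: float) -> float:
    """Simple cosine tuning: maximal when movement matches the preference."""
    return max(0.0, math.cos(math.radians(direction_deg - preferred_deg)))

preferred = 0.0                           # both toy neurons prefer rightward
for mapping in ("same", "opposite"):
    hand = 0.0                            # joystick pushed rightward
    cursor = hand if mapping == "same" else hand + 180.0
    mst_like = cosine_tuning(cursor, preferred)   # tracks what is *seen*
    mip_like = cosine_tuning(hand, preferred)     # tracks what the hand *does*
    print(f"{mapping:>8}: MST-like {mst_like:.1f}, MIP-like {mip_like:.1f}")
```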

anterior intraparietal area (AIP):
a region of the posterior parietal lobe thought to be involved in grasping movements

AIP neurons appear to control the hand configuration needed to pick up different objects

different AIP neurons tuned for producing different kinds of visually guided grasping movements and for different kinds of object shapes that would require different types of grasps

visually guided hand movements such as grasping objects or pushing them out of the way are only possible with high-quality information about the locations and features of objects in perihand space

bimodal neurons:
neurons that respond to two modalities of stimulation; for example, neurons in monkeys' premotor cortex that respond to tactile stimulation of the hand and to visual stimuli near the hand

premotor cortex known to be involved in control of action

uniquely suited to support the execution of actions on nearby objects

experiment where a monkey looked straight ahead and held its hand either to the left or the right of the direction of gaze

movement directly toward the monkey's hand produced the maximum response in the neuron, regardless of whether the hand was to the left or right of center, as long as the stimulus moved toward where the hand was held

neurons also responded to tactile stimulation applied directly to the hand

interesting feature of neuron’s visual receptive field

with straight-ahead fixation, objects moving along different trajectories stimulated different retinal locations

the neuron's response was based on the object's location relative to the hand, not on the retinal location stimulated by the object

also said to have hand-centered receptive field

hand-centered receptive field:
the visual receptive field of a neuron that responds to visual stimuli near the hand, based on the location of the stimulus relative to the hand, not on the retinal location stimulated
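a minimal sketch of a hand-centered receptive field (the Gaussian profile and positions are illustrative assumptions): the response depends on the stimulus location relative to the hand, not on which retinal location the stimulus happens to occupy:

```python
import math

def bimodal_response(stimulus: float, hand: float, width: float = 1.0) -> float:
    """Visual response of a toy bimodal neuron with a hand-centered
    receptive field: peaks when the stimulus is at the hand, regardless
    of where the hand (and hence the retinal image) happens to be."""
    return math.exp(-((stimulus - hand) ** 2) / (2 * width ** 2))

# Same stimulus-to-hand offset -> same response, even though the hand
# (and the stimulated retinal location) differs across the two cases:
print(bimodal_response(stimulus=-2.0, hand=-2.0))  # hand left,  stim at hand: 1.0
print(bimodal_response(stimulus=+2.0, hand=+2.0))  # hand right, stim at hand: 1.0
print(bimodal_response(stimulus=+2.0, hand=-2.0))  # stim far from hand: ~0.0
```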

question of whether bimodal neurons with hand-centered receptive fields might have evolved the capacity to expand their receptive fields in support of tool use

do receptive fields expand to include locations near a handheld tool in order to support control of visually guided actions with tools?

a neuron whose receptive field included the area around the monkey's hand extended its field to include the entire length of a tool once the monkey had been taught to use the tool to retrieve food that was out of reach

the enhanced control that these bimodal neurons provide for visually guided actions by the hand extends naturally to control of actions with a handheld tool

the improvement in patient WM's ability to see stimuli on the left side of the display happened again when he was given a tool

when unable to reach the display, WM could not see stimuli on its left side, but when given a tool that could almost reach the display, his ability to see stimuli on the left side improved

space that perceptual system treated as perihand space had been extended to include the space around the tool

bimodal neurons provide perceptual information that other neurons use to initiate the hand movements needed to execute visually guided actions, but don’t themselves directly make the hands move

expect to see neurons that participate in both the perceptual system and the action system

evidence supporting this conclusion has been found, but it is very controversial, and firm conclusions can't yet be drawn

in the original study, researchers recorded from individual neurons in area F5 in the ventral premotor cortex of monkeys, a region known to be involved in the control of movement

neurons responded when monkeys reached for food

neurons also responded when the researcher reached for the food, suggesting they are involved not only in producing an action but also in perceiving the same action performed by others

mirror neurons:
neurons that fire when an action is produced and when the same action is observed being produced by others

a mirror neuron's sensitivity depends on how similar the observed action is to the action it fires for when produced

some think mirror neurons provide way to better perceive and understand actions of others

if significant areas of the brain contain mirror neurons, then the brain mechanisms involved in perception and action may not be as separate from each other as once thought

questioned on the grounds that inferring action understanding is very tricky

mirror neurons may not be able to distinguish among the different intentions behind a single action