Psychology of Language

Speech Perception

Speech Production

Language in the Brain

Semantic Processing

Lexical Processing

Language Acquisition

Behaviorist Approach (Nurture)

Nativist Approach (Nature)

Innate capabilities, genes determine language ability

Chicken-or-egg question: did language ability drive the physical changes, or did the physical changes enable language?

Adaptations in the human body since Homo sapiens emerged: a lowered larynx allows a greater variety of sounds, and fine motor control of the speech apparatus gives extreme control over the throat, mouth, airflow, and lips

Evidence for Nativist Approach:

Pidgins = proto-language

early forms of language, simpler than a full-blown language, with a restricted vocabulary

Creoles are full languages derived from pidgins (often created by children)

Nicaraguan Sign Language

Children created their own sign language in the first school for the deaf in Nicaragua.

Specific Language Impairment

Found in people with normal IQ but deficits in speech and grammar (e.g. past-tense marking and plural marking)

linked to FOXP2 gene (KE family), but not all people with SLI have it

Critical period of development

When people miss this period of language development, they tend to lack grammatical structure when they learn language later.

language skills are better in children with responsive mothers

we measure infant language development through infant sucking rate: babies increase their sucking rate and/or pressure with increased attention

from womb familiarity, newborns prefer to listen to their mother's voice and their native language due to prosody (pattern of stress and intonation in language)

babies prefer familiar stories

infants are universal listeners (they can distinguish phonemes from any language) until 10 months old. After that, they tune into only the phonemes present in their native language.

speech segmentation problem: there are no reliable silences between words in the speech stream, so pauses don't cue where the word boundaries are

infants use prosody (the pattern of stress and intonation in language) to identify syllables and words (i.e. two lips v. tulips), stress patterns can provide info about word beginnings and endings

Infant-Directed Speech: we speak to babies slower and more exaggeratedly. Babies are more interested in IDS and it increases attention and interaction.

Statistical learning: language can be viewed as a sequence of probabilities, and babies are constantly taking statistics of their native language to learn it.

transitional probability: between-word transitions are lower probability than within-word transitions
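The transitional-probability idea can be sketched as a toy computation over an artificial syllable stream. The three "words" (bidaku, padoti, golatu) are invented here in the style of statistical-learning experiments, not taken from the notes:

```python
# Toy statistical learning: compute transitional probabilities P(next | current)
# over a syllable stream built from three made-up words.
from collections import Counter

stream = "bi da ku pa do ti bi da ku go la tu pa do ti bi da ku".split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(b | a): how often syllable a is followed by syllable b."""
    return pair_counts[(a, b)] / first_counts[a]

# Within-word transitions are high probability...
print(transitional_probability("bi", "da"))  # 1.0
# ...between-word transitions are lower, cueing a word boundary.
print(transitional_probability("ku", "pa"))  # 0.5
```

A learner tracking these statistics can posit word boundaries wherever the transitional probability dips.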

Development of Language Production

The "point and say" method is not sufficient: the input underdetermines the meaning, because it is never completely clear what exactly is being pointed to (poverty of the stimulus)

ex: point to a dog. are you pointing to the dog, an animal, its color, its ears, etc.?

Basic-Level Bias: kids learn basic (dog) first before superordinate (animal) or subordinate (golden retriever)

Mutual Exclusivity: children assume no two words have the same meaning, so they can map a novel word onto a novel object by deduction

common errors:

underextension: applying a word to only a specific case

overextension: applying a word to too many different cases

overregularization: using a regular morpheme for an irregular word

3 steps of Speech production:

  1. conceptualization (think about what you want to say)
  2. formulation (how to say it) -- lots of steps
  3. articulation

WEAVER++

a model for everything between conceptualization and articulation

treats speech production as a sequence of mental processes, one step at a time

steps

1. conceptual preparation (output: lexical concepts)

2. lexical selection (output: lemmas)

3. morphological encoding (output: morphemes)

4. phonological encoding (output: phonological words)

5. phonetic encoding (output: phonological gestural score)

6. articulation (output: sound wave)

interface between non-language thought and linguistic processes

lexical concept: an idea for which the language has a label (the output of this step)

choose from several words that convey the thought

lemma: a mental representation that contains both semantic and syntactic information (output)

select the morphemes depending on the exact meaning and grammatical context in which the word is used (including things like plural and past-tense marking)

morphemes: the smallest unit of meaning in a language

syllabification: process of mapping individual phonemes onto syllables to be spoken

metrical structure: emphasis for each syllable

phonological words: a set of syllables that is produced as a single unit (output)

how to move my muscles to produce sounds in the correct order

phonological gestural score: representation used by the motor system to create the actual muscle movements that will create the intended speech sounds (output)

the act of physically producing the words as sound waves

articulators: parts of the speaker's body that can be moved to perturb the airflow to create sounds, including the lips, tongue, soft palate, and vocal folds

each speech sound is characterized by its place of articulation, manner of articulation, and voicing

Coarticulation: gestures from one phoneme overlap in time with gestures for the preceding and following phonemes
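The staged flow above, from conceptual preparation through articulation, can be sketched as a toy pipeline. The stage names follow the notes, but the data structures (plain strings and dicts) are invented for illustration and are not part of the actual WEAVER++ model:

```python
# Toy sketch of a stage-by-stage production pipeline in the spirit of WEAVER++.
# Each stage consumes the previous stage's output representation.

def conceptual_preparation(thought):
    # non-linguistic thought -> lexical concept
    return {"concept": thought}

def lexical_selection(concept):
    # lexical concept -> lemma (semantic + syntactic info)
    return {"lemma": concept["concept"], "category": "noun"}

def morphological_encoding(lemma, plural=False):
    # lemma -> morphemes (root plus any inflections)
    return [lemma["lemma"]] + (["-s"] if plural else [])

def phonological_encoding(morphemes):
    # morphemes -> phonological word (syllabified, produced as one unit)
    return "".join(morphemes)

def phonetic_encoding(phonological_word):
    # phonological word -> gestural score for the motor system
    return f"<gestures for '{phonological_word}'>"

def articulate(gestural_score):
    # gestural score -> sound wave (here, just a label)
    return f"<sound wave from {gestural_score}>"

utterance = articulate(
    phonetic_encoding(
        phonological_encoding(
            morphological_encoding(
                lexical_selection(conceptual_preparation("cat")), plural=True))))
print(utterance)
```

The one-step-at-a-time chaining mirrors the model's claim that each representation is completed before the next stage runs.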

evidence supporting stage models of speech production:

tip-of-the-tongue experience: occurs when the speaker has the right lemma activated but has trouble activating the correct phonemes

semantic substitution errors: related words become activated and are sometimes accidentally selected at the lexical concept level (e.g. the Bush quote, innocent vs. guilty)

sound exchange errors: single phonemes exchange during the phonological stage (e.g. "darn bore" instead of "barn door"); exchanges obey a positional constraint, and the lexical bias effect means these errors usually produce real words

picture naming studies: a common way to study speech production. naming a picture is easier when the word is a frequent one in your vocabulary (rabbit vs. chinchilla)

Spreading Activation

a model similar to WEAVER++ but more dynamic and flexible, with bidirectional and cascading feedback

lexical bias effect and mixed errors (words that are similar in sound and meaning) are evidence for feedback

self-monitoring and repair

Sound waves are characterized by...

amplitude: total change in pressure. perceived as volume

frequency: number of cycles per second (Hz). perceived as pitch

sound spectrogram: a graph of the distribution of frequencies over time for an acoustic signal

made up of formants (steady bands associated with vowels) and formant transitions (quick changes associated with consonants)

voice onset time refers to the time that a burst of air is forced through the mouth to produce a stop consonant relative to the time that the vocal folds start vibrating (shorter VOT = earlier voicing)

categorical perception: a continuous acoustic dimension is perceived as discrete phoneme categories, with a sharp boundary between them
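Categorical perception of VOT can be sketched as a toy categorizer. English listeners hear a /b/-/p/ continuum categorically; the 25 ms boundary used here is an illustrative assumption, not a value from the notes:

```python
# Toy categorical perception of voice onset time (VOT) for bilabial stops.
# A physically gradual change in VOT maps onto just two discrete percepts,
# with a sharp category boundary (assumed here to be 25 ms).

BOUNDARY_MS = 25  # illustrative boundary, roughly where English /b/-/p/ flips

def perceive_bilabial_stop(vot_ms):
    return "b" if vot_ms < BOUNDARY_MS else "p"

continuum = [0, 10, 20, 30, 40, 60]
print([(vot, perceive_bilabial_stop(vot)) for vot in continuum])
```

Listeners discriminate 20 ms vs. 30 ms easily (different categories) but 30 ms vs. 40 ms poorly (same category), which is the signature of categorical perception.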

lack of invariance problem: there is no one to one relationship between acoustic signal and phonemes

inter-speaker variability (across speakers)

intra-speaker variability (within a speaker)

coarticulation

theories that try to explain it:

Motor Theory

analyze the sound wave to reconstruct the phonological gestural score

essentially runs WEAVER++ in reverse: from the heard speech, recover the gestures that produced it

closer relationship between gestures and phonemes than between acoustic signals and phonemes

speech perception illusions: visual input is strong enough to alter what we hear (duplex perception and the McGurk effect), which motor theory can explain

problems with motor theory:

  1. infants can perceive phonemes without being able to produce them
  2. it's unclear how the acoustic signal is analyzed to extract the gestural score
  3. some non-human animals show categorical perception of speech as well as compensation for coarticulation
  4. we can categorically perceive sounds that aren't speech
  5. if perception and production can be dissociated (e.g. after brain injury), how can they rely on each other so strongly?

General Auditory Approach

auditory perception isn't speech specific

normalization: comparing auditory input to prototype phonemes that we have as reference

bottom-up processes: analyze the acoustic signal
top-down processes: use info from long-term memory to identify the best candidate from the set of potential matches

Word Meaning

Semantic Memory: storage of word meanings, concepts, and general facts (what are eggs?)

Episodic Memory: storage of events and their context (what did you have for breakfast?)

Lexical Semantics

sense = knowledge about the word itself: a dictionary-like definition that does not depend on context.

reference = what a word points to in a particular context.

Mental Lexicon: contains stored knowledge about words

Dictionary definition analogy: words are represented as entries in a dictionary, the definition includes the core features of the word.

problem: unmarried man, bachelor or monk? context is missing!

problem: words can have different meanings

semantic network theory: meanings are represented by patterns of activity in a network consisting of nodes (concepts) and links (related concepts)

spreading activation: activity at one node causes activity at other nodes via links

mediated priming: in a semantic network, a word primes an indirectly related word through an intermediate node (lion primes stripes through tiger)
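Spreading activation and mediated priming can be sketched with a toy network. The nodes, links, and decay rate below are invented for illustration:

```python
# Toy spreading activation: activation at one node spreads to linked nodes
# with decay, so "lion" activates "stripes" only indirectly, via "tiger"
# (mediated priming). Network structure and decay value are invented.

links = {
    "lion":    ["tiger", "mane"],
    "tiger":   ["lion", "stripes"],
    "stripes": ["tiger"],
    "mane":    ["lion"],
}

def spread(start, decay=0.5, steps=2):
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for neighbor in links[node]:
                passed = act * decay  # activation weakens with each link
                if passed > activation.get(neighbor, 0.0):
                    nxt[neighbor] = passed
                    activation[neighbor] = passed
        frontier = nxt
    return activation

act = spread("lion")
print(act)  # tiger directly primed (0.5); stripes mediated via tiger (0.25)
```

The weaker, second-hand activation of "stripes" is exactly what mediated priming experiments measure as a small but reliable speedup.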

opposing theory: associationist approach

co-occurrence of words (how often they appear together) matters more for priming than shared meaning

Embodied semantics: we know words based on experiences, have to ground words to experiences

affordances: possible interactions with objects

solves the symbol grounding problem

problem: what about abstract words and ideas?

lexical access depends on both meaning dominance and context

Lexical Access

the process of retrieving word information from long term memory in order to identify perceived word forms and their meanings

lexical representations: mental representation of words

sublexical representations: mental representations below the word level (phonemes, graphemes, features of words)

Models of Lexical Access

1st generation:

Logogen

words are represented as logogens; a logogen fires when input activates it above its threshold

processing semantically related words temporarily raises a logogen's activation level, which accounts for semantic priming effects

bottom-up flow of information

Frequency-Ordered Bin Search (FOBS)

word representations are organized into bins by morphological root, and within bins by frequency of occurrence

morphological decomposition: breaking down words into their individual morphemes and identifying the root

bottom-up flow of information
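The logogen idea, a word detector that fires at a threshold and whose resting level can be raised by priming, can be sketched as follows. All numbers are invented for illustration:

```python
# Toy logogen: accumulates evidence until it crosses a threshold and fires.
# A related prime raises the resting activation, so fewer bottom-up inputs
# are needed afterwards (semantic priming). Threshold and boosts are invented.

class Logogen:
    def __init__(self, word, threshold=3.0):
        self.word = word
        self.threshold = threshold
        self.activation = 0.0

    def prime(self, boost=1.0):
        # semantic priming: a related word raises resting activation
        self.activation += boost

    def receive(self, evidence=1.0):
        # bottom-up perceptual evidence; returns True when the logogen fires
        self.activation += evidence
        return self.activation >= self.threshold

def inputs_to_fire(logogen):
    n = 0
    while not logogen.receive():
        n += 1
    return n + 1

doctor = Logogen("doctor")
primed_doctor = Logogen("doctor")
primed_doctor.prime()  # e.g. after just hearing "nurse"

unprimed = inputs_to_fire(doctor)
primed = inputs_to_fire(primed_doctor)
print(unprimed, primed)  # 3 2 -- the primed logogen fires on less input
```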

2nd generation:

TRACE

also known as the interactive activation model

an interactive model with both bottom-up and top-down flow of information and cascaded activation

levels: visual input, feature level, letter level, word level (a letter can help you identify a word and vice versa)

word superiority effect: it's easier to identify the location of a letter when it appears in a real word

lateral inhibition: units within a level inhibit one another

COHORT

spoken words only

3 stages:

  1. activation: initial auditory input activates many lexical candidates
  2. selection: further bottom-up input and contextual info narrows down the set of activated words (find the right word)
  3. integration: syntactic and semantic info about the word is used to comprehend what it means in context

correctly predicts how much of a word is needed for lexical access

incremental interpretation
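Lateral inhibition within one level of an interactive-activation-style model can be sketched as a toy competition. The candidate words, weights, and update rule are invented for illustration:

```python
# Toy lateral inhibition: candidate words supported by the input excite
# themselves while inhibiting competitors, so a single winner emerges.

candidates = {"work": 0.6, "word": 0.55, "fork": 0.3}  # initial bottom-up support

def step(acts, inhibition=0.2, excitation=0.1):
    total = sum(acts.values())
    new = {}
    for word, a in acts.items():
        others = total - a  # combined activation of the competitors
        new[word] = max(0.0, a + excitation * a - inhibition * others)
    return new

acts = dict(candidates)
for _ in range(10):
    acts = step(acts)

winner = max(acts, key=acts.get)
print(winner, acts)  # "work" wins; poorly supported "fork" is driven to zero
```

Because every unit suppresses its rivals, small initial differences in bottom-up support get amplified into a categorical choice, which is the point of lateral inhibition.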

3rd generation:

Simple Recurrent Network (SRN) and Distributed Cohort Model (DCM) contain aspects of the TRACE and COHORT models, but combine both word form and meaning

explains semantic associations between words

Methods

fMRI

record blood flow during lexical access, measures the blood-oxygen level-dependent signal. when a part of the brain is working, it needs more oxygen.

WHERE things are happening

expensive and little information about when

ERPs

record electrical activity

WHEN things are happening

inexpensive but little information about where

measured by averaging the EEG signal over many time-locked trials

N400: negative waveform at 400ms, sensitive to semantic manipulations (big N400 amplitude ex: the car only cost 2,000 dolphins)

P600: positive waveform at 600ms, sensitive to syntactic manipulation (big P600 amplitude ex: every monday he mow the lawn)
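The averaging step that turns noisy single-trial EEG into an ERP can be sketched with synthetic data. The signal shape, noise level, and trial count below are all invented for illustration:

```python
# Toy ERP extraction: single trials are dominated by random noise, but
# averaging many time-locked trials cancels the noise and leaves the
# event-related component (here, a schematic negative deflection).
import random

random.seed(0)
TIMEPOINTS = 100

def true_erp(t):
    # a negative-going bump (N400-like in shape only; units are arbitrary)
    return -1.0 if 35 <= t <= 45 else 0.0

def single_trial():
    return [true_erp(t) + random.gauss(0, 2.0) for t in range(TIMEPOINTS)]

def average(trials):
    return [sum(tr[t] for tr in trials) / len(trials) for t in range(TIMEPOINTS)]

one = single_trial()
avg = average([single_trial() for _ in range(500)])
print(round(one[40], 2), round(avg[40], 2))  # avg tracks the true -1.0 bump
```

With 500 trials the residual noise shrinks by a factor of about sqrt(500), so the averaged waveform clearly shows the deflection that any single trial hides.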

The left hemisphere is language dominant

we know where language functions because of brain damage, specifically aphasia. 3 forms of aphasia:

Broca's Aphasia: agrammatic speech, difficulty finding words, comprehension intact

Wernicke's Aphasia: complex speech devoid of meaning, neologisms, poor comprehension

Conduction Aphasia: inability to repeat information, comprehension and production are just fine, problem maintaining phonological information, damage to arcuate fasciculus

Speech-language therapists work with aphasia patients to rehabilitate language processing, reflecting neuroplasticity (ability of brain to reorganize functions to different parts of the brain)

Classic Picture of the Brain = WLG (Broca's Area: production, Wernicke's Area: comprehension, Arcuate Fasciculus: connection between production and comprehension)

problems with the classic picture

lack of consistent mapping between lesion location and types of symptoms predicted (damage to the area doesn't always lead to aphasia, and aphasia doesn't always coincide with damage to that part)

there are other parts of the brain that show consistent associations between damage and aphasia (e.g. insula damage is consistently linked with Broca's aphasia)

Oversimplified

Other parts of the brain are shown to have a role in language as well: Visual Word Form Area (VWFA), Left Anterior Temporal Lobe (ATL), the right hemisphere (with prosody, discourse processing, non-literal language)