SPEECH AUDIOMETRY
Speech recognition
Distinguishes how softly we can hear speech from how well we understand it when it is sufficiently audible
Word rec
Speech discrimination
Speech recognition
Administration
Presentation = MLV or recorded
Score = #correct / #presented (at level of word or phoneme)
Articulation Scores/Articulation Index (Speech Intelligibility Index)
Articulation Index became SII in 1997
Audible speech cues weighted by importance function at each frequency
What percentage of speech cues are "audible"?
Mueller & Killion = Count-the-dot method
Plot the audiogram and count the dots that fall above the listener's threshold (i.e., are audible) = the percentage of dots counted is an estimate of the SII
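The count-the-dot tally can be sketched in Python. Note the dot positions below are illustrative placeholders, not the published Mueller & Killion 100-dot audiogram:

```python
# Minimal sketch of the count-the-dot idea. Each dot is
# (frequency_Hz, level_dB_HL); these positions are ILLUSTRATIVE
# placeholders, not the published 100-dot layout.
ILLUSTRATIVE_DOTS = [
    (250, 40), (500, 35), (500, 50), (1000, 30), (1000, 45),
    (2000, 25), (2000, 40), (4000, 30), (4000, 45), (8000, 35),
]

def estimate_sii(thresholds_db_hl, dots=ILLUSTRATIVE_DOTS):
    """A dot (speech cue) is counted as audible when its level is at
    or above the listener's threshold at that frequency; the percent
    of audible dots estimates the SII."""
    audible = sum(1 for freq, level in dots
                  if level >= thresholds_db_hl.get(freq, 0))
    return 100 * audible / len(dots)

# Example: a sloping loss -- only dots at the better-hearing
# (lower) frequencies remain audible.
loss = {250: 20, 500: 30, 1000: 40, 2000: 50, 4000: 60, 8000: 70}
print(estimate_sii(loss))  # → 40.0
```

With a flat 0 dB HL audiogram every dot counts (100%); as thresholds worsen, fewer dots fall above threshold and the estimate drops.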
Speech intelligibility scores as function of intensity = psychometric function and performance-intensity function
French & Steinberg = 1947 = intelligibility of speech sounds
Understanding of syllables better with increasing intensity
Low-pass filters: with a 750 Hz cutoff, speech understanding was low; with a 7000 Hz cutoff, most speech information was preserved = close to 100% understanding
More high frequencies included = greater intelligibility
Hearing through 2000 Hz = word rec might not be affected, BUT without hearing through 2000 Hz = word rec may be affected
Relationship between Articulation Index and Subjective measures of intelligibility
More speech cues available = better the understanding
Sentences reached max understanding/plateau first - don't need as high AI for sentences, sentences have a lot of redundancy
Speech Testing = Rehabilitative Purposes
Purposes
Treatment Plan (areas of deficit? How do we address areas of deficit?)
Prognosis (what will help? will it help? how much will it help?)
Baseline (will results change after implementing the treatment?)
Examples of assessments could include
Typical WRS
Sentence Identification abilities
Speech understanding in noise
Speech pattern identification abilities = MTS, rhyme test, MAC (might use these with Deaf clients)
Speech reading abilities
Speech Recognition is influenced by....
Speech materials
Familiarity
Redundancy
Open-set vs. closed-set
Noise
Presentation level
Administration
Carrier phrase (e.g., "Say the word ___" = helps!)
Clarity of speech
Recording quality
Hearing loss
Audible dots
Frequency range
Type
Attention
Linguistic background
Word Intelligibility
As speech presentation level increases = word intelligibility should also increase
Spondees = first
PB W22
Monosyllabic words
PB50 Hughes Recording (recording quality makes this one tough)
PB Max = optimal performance/maximum score on phonetically balanced word lists
Phonetically balanced = frequency of phonemes in list is representative of frequency that these phonemes are used in American English
Word Recognition Testing
Scoring = % of words correct (can do phonemic scoring)
90-100% = excellent
80-89% = good
70-79% = fair
60-69% = poor
40-59% = very poor
< 40% = extremely poor
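The category bands above map directly to a small helper:

```python
def wrs_category(percent_correct):
    """Map a word recognition score (% correct) to the descriptive
    categories listed above (90-100 excellent ... <40 extremely poor)."""
    if percent_correct >= 90:
        return "excellent"
    if percent_correct >= 80:
        return "good"
    if percent_correct >= 70:
        return "fair"
    if percent_correct >= 60:
        return "poor"
    if percent_correct >= 40:
        return "very poor"
    return "extremely poor"

print(wrs_category(84))  # → good
```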
Phonemic Regression = poorer word recognition than anticipated based on the puretone audiogram
Indicative of neural involvement
Rollover = scores decline at presentation levels above PB Max (so testing at multiple levels is needed to identify PB Max) = system breaks down due to neural fatigue; a drop of more than 20% from PB Max suggests neural involvement
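Rollover is often quantified as a rollover index; a minimal sketch (the >0.20 criterion matches the 20%-from-PB-Max rule noted above, though published cutoffs vary by test):

```python
def rollover_index(pb_max, pb_min):
    """Rollover index = (PBmax - PBmin) / PBmax, where PBmin is the
    lowest score obtained at levels ABOVE the level giving PBmax."""
    return (pb_max - pb_min) / pb_max

# Example: PB Max of 88% falling to 60% at a higher level.
ri = rollover_index(88, 60)
print(round(ri, 2))  # → 0.32, a >20% drop from PB Max
```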
Test Size
W-22 = present the 10 most difficult words first; if all 10 are correct, you are finished. If any of the 10 are missed, present 25 (stop if no more than 4 errors); if more than 4 errors, present all 50 words
NU-6 = present the 10 most difficult words first; if 0-1 missed, stop. If 2 or more missed, present 25; if no more than 4 missed you can stop, but if more than 4 missed, present the full 50
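The NU-6 stopping rule above can be sketched as a small helper (treating "no more than 4 errors" as the stop criterion at 25 words, since the notes leave exactly-4 ambiguous):

```python
def nu6_words_to_present(errors_first_10, errors_after_25=None):
    """Stopping rule for NU-6 (10 hardest words presented first):
    - 0-1 missed in the first 10 -> stop at 10 words
    - 2+ missed -> present 25; if no more than 4 total errors,
      stop at 25, otherwise present the full 50-word list.
    `errors_after_25` is the running error count after 25 words."""
    if errors_first_10 <= 1:
        return 10
    if errors_after_25 is not None and errors_after_25 <= 4:
        return 25
    return 50

print(nu6_words_to_present(1))      # → 10
print(nu6_words_to_present(3, 4))   # → 25
print(nu6_words_to_present(3, 6))   # → 50
```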
Presentation level
Guthrie & Mackersie (2009) = UCL − 5 dB, or a sensation level referenced to the 2000 Hz threshold (HL)
Masking for Speech testing
Speech testing = suprathreshold = far above the threshold and far above the other ear's threshold
Word recognition = students often forget to mask for word recognition; think about the difference between the presentation level and the non-test ear's threshold
Necessary when chance of cross hearing
Speech in Noise Testing
Factors affecting ability to understand speech in a room
Speech signal intensity
Speech travels = energy is dispersed and intensity level decreases, varies based upon vocal effort
Distance
Inverse square law = signal intensity decreases with distance from source
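The inverse square law works out to about a 6 dB drop per doubling of distance from a point source in a free field; a quick check:

```python
import math

def level_drop_db(d1, d2):
    """Inverse square law: SPL change (dB) moving from distance d1
    to d2 from a point source in a free field = 20 * log10(d2 / d1)."""
    return 20 * math.log10(d2 / d1)

print(round(level_drop_db(1, 2), 1))  # → 6.0 dB per doubling of distance
print(round(level_drop_db(1, 4), 1))  # → 12.0 dB at four times the distance
```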
Ambient Noise
Noise types (white noise = equal weighting at each frequency, pink noise = equal amount of intensity per octave, speech spectrum = low frequency weighting like speech intensity, multi-talker babble)
Noise level (SNR in dB = 10·log10(I_signal / I_noise)) = listeners with normal hearing understand half of speech at 2-4 dB SNR
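The SNR formula in code form:

```python
import math

def snr_db(signal_intensity, noise_intensity):
    """SNR (dB) = 10 * log10(I_signal / I_noise)."""
    return 10 * math.log10(signal_intensity / noise_intensity)

# Equal intensities -> 0 dB SNR; signal at twice the noise -> ~3 dB SNR.
print(round(snr_db(1.0, 1.0), 1))  # → 0.0
print(round(snr_db(2.0, 1.0), 1))  # → 3.0
```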
Hearing status and SNR = with SNHL, hair-cell damage compromises frequency resolution, making it difficult to separate signals close in frequency and to separate speech from noise (upward spread of masking makes this worse)
Desired SNR (normal hearing = 7-10 dB SNR, SNHL = 15 dB SNR, children and older adults = same as hearing loss)
Reverberation
lingering of reflected sound (perceived like echo) = measured by the rate of sound decay in a room
Greater absorption of sound energy = faster decay (reverberation decreases with absorption because surfaces and obstacles soak up the sound rather than reflecting it)
Increases with room volume
Early reflection = 1st order = sound bounces once before reaching the listener's ear; 2nd- and 3rd-order reflections arrive with a lag and less intensity (absorbed multiple times) = they blur and mask the message
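Sabine's classic reverberation-time estimate is not stated in the notes above, but it captures both relationships just described (reverberation grows with room volume and shrinks with absorption); a sketch:

```python
def rt60_sabine(volume_m3, absorption_sabins):
    """Sabine's estimate of reverberation time:
    RT60 = 0.161 * V / A, where V is room volume (m^3) and A is
    total absorption (metric sabins). RT60 is the time for sound
    to decay by 60 dB after the source stops."""
    return 0.161 * volume_m3 / absorption_sabins

# A 200 m^3 classroom with 40 sabins of total absorption:
print(rt60_sabine(200, 40))  # ~0.8 s
# Doubling the absorption halves the reverberation time.
print(rt60_sabine(200, 80))  # ~0.4 s
```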
Linguistic Complexity
Emotional factors
Hearing loss
Examples of SIN tests
SPIN
SIN (QuickSIN)
Binaural = sound field is best; speech presentation level is fixed (70 dB HL) and the noise level (four-talker babble) varies
Lower in contextual cues
the greater the SNR loss = the more technological assistance is needed
3 lists and average the SNR loss of each
HINT
Noise (speech-spectrum noise) level is fixed at 65 dBA; speech level is varied (find the intensity at which the listener understands 50% of sentences)
Higher in contextual cues
WIN