Linda W Norrix
- Part-Time Faculty
- Professor Emeritus
Contact
- (520) 621-4720
- Speech and Hearing Sciences, Rm. 214
- Tucson, AZ 85721
- norrix@arizona.edu
Degrees
- Ph.D. Audiology
- The University of Arizona, Tucson, Arizona, United States
- Distortion product otoacoustic emissions created through the interaction of spontaneous otoacoustic emissions and externally generated tones
- M.A.
- University of Denver, Denver, Colorado, United States
- B.S.
- University of Wisconsin-Madison, Madison, Wisconsin, United States
Work Experience
- Department of Speech, Language, & Hearing Sciences (2009 - Ongoing)
Licensure & Certification
- American Speech-Language-Hearing Association (1988)
- Arizona Licensed Dispensing Audiologist (1993)
Courses
2023-24 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2024)
- Clinical Stds:Audiology, SLHS 459 (Spring 2024)
- Clinical Stds:Audiology, SLHS 559 (Spring 2024)
- Adv Clin Stds:Audiology, SLHS 659 (Fall 2023)
- Clinical Stds:Audiology, SLHS 559 (Fall 2023)
2022-23 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Summer I 2023)
- Clinical Stds:Audiology, SLHS 559 (Summer I 2023)
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2023)
- Audiology Doctoral Project, SLHS 912 (Spring 2023)
- Clinical Stds:Audiology, SLHS 459 (Spring 2023)
- Clinical Stds:Audiology, SLHS 559 (Spring 2023)
- Pediatric Audiology, SLHS 586 (Spring 2023)
- Adv Clin Stds:Audiology, SLHS 659 (Fall 2022)
- Audiology Doctoral Project, SLHS 912 (Fall 2022)
- Clinical Stds:Audiology, SLHS 559 (Fall 2022)
- Lab Adv Audiologic Eval, SLHS 589L (Fall 2022)
2021-22 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Summer I 2022)
- Clinical Stds:Audiology, SLHS 559 (Summer I 2022)
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2022)
- Audiology Doctoral Project, SLHS 912 (Spring 2022)
- Clinical Stds:Audiology, SLHS 559 (Spring 2022)
- Adv Clin Stds:Audiology, SLHS 659 (Fall 2021)
- Audiology Doctoral Project, SLHS 912 (Fall 2021)
- Clinical Stds:Audiology, SLHS 559 (Fall 2021)
- Lab Adv Audiologic Eval, SLHS 589L (Fall 2021)
2020-21 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Summer I 2021)
- Clinical Stds:Audiology, SLHS 559 (Summer I 2021)
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2021)
- Audiology Doctoral Project, SLHS 912 (Spring 2021)
- Clinical Stds:Audiology, SLHS 559 (Spring 2021)
- Pediatric Audiology, SLHS 586 (Spring 2021)
- Adv Clin Stds:Audiology, SLHS 659 (Fall 2020)
- Audiology Doctoral Project, SLHS 912 (Fall 2020)
- Clinical Stds:Audiology, SLHS 559 (Fall 2020)
- Lab Adv Audiologic Eval, SLHS 589L (Fall 2020)
2019-20 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2020)
- Audiology Doctoral Project, SLHS 912 (Spring 2020)
- Clinical Stds:Audiology, SLHS 459 (Spring 2020)
- Clinical Stds:Audiology, SLHS 559 (Spring 2020)
- Independent Study, SLHS 599 (Spring 2020)
- Pediatric Audiology, SLHS 586 (Spring 2020)
- Adv Clin Stds:Audiology, SLHS 659 (Fall 2019)
- Audiology Doctoral Project, SLHS 912 (Fall 2019)
- Clinical Stds:Audiology, SLHS 559 (Fall 2019)
- Lab Adv Audiologic Eval, SLHS 589L (Fall 2019)
2018-19 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Summer I 2019)
- Audiology Externship, SLHS 921 (Summer I 2019)
- Clinical Stds:Audiology, SLHS 559 (Summer I 2019)
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2019)
- Audiology Doctoral Project, SLHS 912 (Spring 2019)
- Clinical Stds:Audiology, SLHS 459 (Spring 2019)
- Clinical Stds:Audiology, SLHS 559 (Spring 2019)
- Pediatric Audiology, SLHS 586 (Spring 2019)
- Adv Clin Stds:Audiology, SLHS 659 (Fall 2018)
- Audiology Doctoral Project, SLHS 912 (Fall 2018)
- Clinical Stds:Audiology, SLHS 559 (Fall 2018)
- Lab Adv Audiologic Eval, SLHS 589L (Fall 2018)
2017-18 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Summer I 2018)
- Audiology Externship, SLHS 921 (Summer I 2018)
- Clinical Stds:Audiology, SLHS 559 (Summer I 2018)
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2018)
- Audiology Doctoral Project, SLHS 912 (Spring 2018)
- Audiology Externship, SLHS 921 (Spring 2018)
- Clinical Issues in Audiology, SLHS 795A (Spring 2018)
- Clinical Stds:Audiology, SLHS 459 (Spring 2018)
- Clinical Stds:Audiology, SLHS 559 (Spring 2018)
- Adv Clin Stds:Audiology, SLHS 659 (Fall 2017)
- Audiology Doctoral Project, SLHS 912 (Fall 2017)
- Audiology Externship, SLHS 921 (Fall 2017)
- Clinical Stds:Audiology, SLHS 559 (Fall 2017)
- Independent Study, SLHS 499 (Fall 2017)
- Lab Adv Audiologic Eval, SLHS 589L (Fall 2017)
2016-17 Courses
- Adv Clin Stds:Audiology, SLHS 659 (Summer I 2017)
- Clinical Stds:Audiology, SLHS 559 (Summer I 2017)
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2017)
- Audiology Externship, SLHS 921 (Spring 2017)
- Clinical Issues in Audiology, SLHS 795A (Spring 2017)
- Clinical Stds:Audiology, SLHS 559 (Spring 2017)
- Independent Study, SLHS 499 (Spring 2017)
- Audiology Externship, SLHS 921 (Fall 2016)
- Clinical Stds:Audiology, SLHS 559 (Fall 2016)
- Independent Study, SLHS 499 (Fall 2016)
- Independent Study, SLHS 599 (Fall 2016)
- Lab Adv Audiologic Eval, SLHS 589L (Fall 2016)
- Preceptorship, SLHS 691 (Fall 2016)
2015-16 Courses
- Audiology Doctoral Project, SLHS 912 (Summer I 2016)
- Adv Clin Stds:Audiology, SLHS 659 (Spring 2016)
- Audiology Doctoral Project, SLHS 912 (Spring 2016)
- Audiology Externship, SLHS 921 (Spring 2016)
- Clinical Issues in Audiology, SLHS 795A (Spring 2016)
- Clinical Stds:Audiology, SLHS 559 (Spring 2016)
- Independent Study, SLHS 499 (Spring 2016)
Scholarly Contributions
Journals/Publications
- Norrix, L. W., & Muller, C. F. (2019). Comment on Yathiraj & Vanaja (2019), "Criteria to classify children as having auditory processing disorders." American Journal of Audiology.
- Norrix, L. W., Thein, J., & Velenovsky, D. S. (2019). The effect of Kalman-weighted averaging and artifact rejection on residual noise during auditory brainstem response testing. American Journal of Audiology.
- Norrix, L. W., & Muller, T. F. (2018). Author response to Peck (2018), "Questionable use of 'nonorganic' in estimating nonorganic hearing thresholds." American Journal of Audiology, 27(3), 368-369.
- Norrix, L. W., & Velenovsky, D. S. (2018). Clinicians' guide to obtaining a valid auditory brainstem response to determine hearing status: Signal, noise, and cross-checks. American Journal of Audiology.
- Norrix, L. W., & Velenovsky, D. S. (2017). Unraveling the Mystery of ABR Corrections: The Need for Universal Standards. Journal of the American Academy of Audiology, 28, 950-960.
- Norrix, L. W., Vivian, R., & Muller, T. F. (2017). Estimating nonorganic hearing thresholds using binaural auditory stimuli. American Journal of Audiology, 26, 486-495.
- Rubiano, V., Norrix, L. W., & Muller, T. (2017). Estimating Nonorganic Hearing Thresholds Using Binaural Auditory Stimuli. American Journal of Audiology, 26(4), 486-495. doi:10.1044/2017_aja-16-0096. Minimum contralateral interference levels (MCILs) are used to estimate true hearing thresholds in individuals with unilateral nonorganic hearing loss. In this study, we determined MCILs and examined the correspondence of MCILs to true hearing thresholds to quantify the accuracy of this procedure. Sixteen adults with normal hearing participated. Subjects were asked to feign a unilateral hearing loss at 1.0, 2.0, and 4.0 kHz. MCILs were determined. Subjects also made lateralization judgments for simultaneously presented tones with varying interaural intensity differences. The 90% confidence intervals, calculated for the distributions, indicate that the MCIL in 90% of cases would be expected to be very close to threshold to approximately 17-19 dB poorer than the true hearing threshold. How close the MCIL is to true threshold appears to be based on the individual's response criterion. Response bias influences the MCIL and how close an MCIL is to true hearing threshold. The clinician can never know a client's response bias and therefore should use a 90% confidence interval to predict the range for the expected true threshold. On the basis of this approach, a clinician may assume that true threshold is at or as much as 19 dB better than MCIL.
- Norrix, L. W., Camarota, K., Harris, F., & Dean, J. (2016). The effects of FM and hearing aid microphone settings, FM gain, and ambient noise levels on SNR at the tympanic membrane. Journal of the American Academy of Audiology, 27, 117-125.
- Cone, B., & Norrix, L. W. (2015). Measuring the advantage of Kalman-weighted averaging for ABR hearing evaluation in infants. American Journal of Audiology. The purposes of this study were to (1) measure the effects of Kalman-weighted averaging methods on ABR threshold, latency, and amplitude; (2) translate lab findings to the clinical setting; and (3) estimate cost savings when ABRs can be obtained in non-sedated infants.
- Norrix, L. W. (2015). Hearing Thresholds, Minimal Response Levels, and Cross-check Measures in Pediatric Audiology. American Journal of Audiology. Pediatric audiologists must identify hearing loss in a timely manner so that early intervention can be provided. In this article, the methods important for differentiating between a hearing threshold and minimal response level (MRL), important for an accurate diagnosis, are described.
- Norrix, L. W., & Anderson, A. (2015). Audiometric Thresholds: Stimulus Considerations in Sound Field and Under Earphones. American Journal of Audiology, 24(4), 487-493. This study evaluates a new stimulus, FREquency Specific Hearing assessment (FRESH) noise, to obtain hearing thresholds and reviews the potential pitfalls of using narrow band noise.
- Norrix, L. W., Van Tasell, D., Ross, J., Harris, F. P., & Dean, J. (2015). Modeling the Influence of Acoustic Coupling of Hearing Aids on FM Signal-to-Noise Ratio. American Journal of Audiology. A model was developed to examine variables that influence signal-to-noise ratio (SNR) at the tympanic membrane (TM) when using a hearing aid (HA) and frequency modulated (FM) system. The model was used to explore how HA coupling influences SNR.
- Norrix, L. W., & Velenovsky, D. S. (2014). Auditory neuropathy spectrum disorder: A review. Journal of Speech, Language, and Hearing Research, 57(4), 1564-1576. Auditory neuropathy spectrum disorder, or ANSD, can be a confusing diagnosis to physicians, clinicians, those diagnosed, and parents of children diagnosed with the condition. The purpose of this review is to provide the reader with an understanding of the disorder, the limitations in current tools to determine site(s) of lesion, and management techniques.
- Norrix, L. W., Burgan, B., Ramirez, N., & Velenovsky, D. S. (2013). Interaural multiple frequency tympanometry measures: Clinical utility for unilateral conductive hearing loss. Journal of the American Academy of Audiology, 24(3), 231-240. PMID: 23506667. Background: Tympanometry is a routine clinical measurement of the acoustic immittance of the ear as a function of ear canal air pressure. The 226 Hz tympanogram can provide clinical evidence for conditions such as a tympanic membrane perforation, Eustachian tube dysfunction, middle ear fluid, and ossicular discontinuity. Multiple frequency tympanometry using a range of probe tone frequencies from low to high has been shown to be more sensitive than a single probe tone tympanogram in distinguishing between mass- and stiffness-related middle ear pathologies (Colletti, 1975; Funasaka et al., 1984; Van Camp et al., 1986). Purpose: In this study we obtained normative measures of middle ear resonance by using multiple probe tone frequency tympanometry. Ninety percent ranges for middle ear resonance and for interaural differences were calculated. Research Design: In a mixed design, normative data were collected from both ears of male and female adults. Study Sample: Twelve male and 12 female adults with normal hearing and normal middle ear function participated in the study. Data Collection and Analysis: Multiple frequency tympanograms were recorded with a commercially available immittance instrument (GSI Tympstar) to obtain estimates of middle ear resonant frequency (RF) using ΔB, positive tail, and negative tail methods. Data were analyzed using three-way mixed analyses of variance with gender as a between-subject variable and ear and method as within-subject variables. T-tests were performed, using the Bonferroni adjustment, to determine significant differences between means. Results: Using the positive and negative tail methods, a wide range of approximately 500 Hz was found for middle ear resonance in adults with normal hearing and normal middle ear function. The difference in RF between an individual's ears is small, with 90% ranges of approximately ±200 Hz, indicating that the right ear RF may be as much as 200 Hz higher or lower in frequency than the left ear. This was true for both negative and positive tail methods. Conclusion: Ninety percent ranges were calculated to determine the difference in middle ear resonance expected between an individual's ears. These ranges can provide critical normative values for determining how pathology in an ear with a unilateral conductive hearing loss is altering the mass or stiffness characteristics of the middle ear system.
- Norrix, L. W., Trepanier, S., Atlas, M., & Kim, D. (2013). Response to Dr. Hamill: The auditory brainstem response: Latencies obtained in children while under general anesthesia. Journal of the American Academy of Audiology, 24(6), 525-528. PMID: 23886430.
- Norrix, L. W., Trepanier, S., Atlas, M., & Kim, D. (2012). The auditory brainstem response: Latencies obtained in children while under general anesthesia. Journal of the American Academy of Audiology, 23(1), 57-63. PMID: 22284841; PMCID: PMC3342755. Background: The auditory brainstem response (ABR) test is frequently employed to estimate hearing sensitivity and assess the integrity of the ascending auditory system. In persons who cannot participate in conventional tests of hearing, a short-acting general anesthetic is used, recordings are obtained, and the results are compared with normative data. However, several factors (e.g., anesthesia, temperature changes) can contribute to delayed absolute and interpeak latencies, making it difficult to evaluate the integrity of the person's auditory brainstem function. Purpose: In this study, we investigated the latencies of ABR responses in children who received general anesthesia. Research Design: Between-subject. Study Sample: Twelve children between the ages of 29 and 52 mo, most of whom exhibited a developmental delay but normal peripheral auditory function, comprised the anesthesia group. Twelve participants between the ages of 13 and 26 yr with normal hearing thresholds comprised the control group. Data Collection and Analysis: ABRs from a single ear from children, recorded under general anesthesia, were retrospectively analyzed and compared to those obtained from a control group with no anesthesia. ABRs were generated using 80 dB nHL rarefaction click stimuli. T-tests, corrected for alpha slippage, were employed to examine latency differences between groups. Results: There were significant delays in latencies for children evaluated under general anesthesia compared to the control group. Delays were observed for wave V and the interpeak intervals I-III, III-V, and I-V. Conclusions: Our data suggest that caution is needed in interpreting neural function from ABR data recorded while a child is under general anesthesia.
- Boliek, C., Keintz, C., Norrix, L., & Obrzut, J. (2010). Auditory-visual perception of speech in children with learning disabilities: The McGurk effect. Canadian Journal of Speech-Language Pathology and Audiology, 34(2), 124-131. This study addressed whether or not children with learning disabilities (LD) are able to integrate auditory and visual information for speech perception. The effects of vision on speech perception can be demonstrated in a stimulus mismatch situation where unconnected auditory and visual inputs are fused into a new percept that has not been presented to either modality and represents a combination of both (McGurk effect). It was of interest to determine if the McGurk effect was present in children with LD. Twenty children with LD and 20 normal controls, matched for sex and age, participated in this study. Participants represented a younger (6-9 years of age) and an older (10-12 years of age) group. Ten adult controls (20-40 years of age) also served as participants. Control participants demonstrated that inter-modal integration became stronger with development and experience. The response patterns of the children with LD indicated that whereas these children have some ability to integrate audio-visual speech stimuli, audio-visual speech perception did not become stronger with experience and development.
- Norrix, L. W., Plante, E., Vance, R., & Boliek, C. A. (2007). Auditory-visual integration for speech by children with and without specific language impairment. Journal of Speech, Language, and Hearing Research, 50(6), 1639-1651. It has long been known that children with specific language impairment (SLI) can demonstrate difficulty with auditory speech perception. However, speech perception can also involve the integration of both auditory and visual articulatory information.
- Norrix, L. W., Plante, E., & Vance, R. (2006). Auditory-visual speech integration by adults with and without language-learning disabilities. Journal of Communication Disorders, 39(1), 22-36. Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from visual syllable). Although the identification of the auditory and congruent AV syllables was comparable for the two groups, the reaction times to identify all syllables were longer in the LLD compared to the control group. This finding is consistent with previous research demonstrating slower processing in learning disabled individuals. Adults with LLD also provided significantly fewer integration-type or McGurk responses compared with their normal peers when presented with speech tokens representing a mismatch between the auditory and visual signal. These results suggest the poor integration for auditory-visual speech previously documented in children with poor language skills also occurs in adults with LLD.
- Zampini, M. L., Norrix, L. W., & Clarke, C. M. (2002). Sensitivity to voiceless closure in the perception of Spanish and English stop consonants. Journal of the Acoustical Society of America, 112(5), 2383-2384. doi:10.1121/1.4779696. The duration of voiceless closure that precedes the release of a stop consonant is a temporal cue that, like voice onset time (VOT), varies across languages. This talk will examine the interaction between VOT and voiceless closure in Spanish and English and will focus on monolingual and bilingual listeners' sensitivity to changes in the duration of voiceless closure during perception. Experimental data will show that monolingual Spanish listeners' mean VOT boundaries decrease as the duration of voiceless closure increases. This pattern is consistent with the finding that Spanish speakers produce voiceless stops with longer voiceless closure durations than voiced stops [K. P. Green, M. L. Zampini, and J. Magloire, J. Acoust. Soc. Am. 102, 3136 (1997)]. It will also be shown that monolingual Spanish listeners show greater sensitivity to voiceless closure than monolingual English listeners. Lastly, the impact that experience with both languages has on perception by late English-Spanish bilinguals will be discussed. It will be shown that the bilinguals under study are affected by their first language (English) while listening to tokens in isolation, but perform like monolingual Spanish listeners when listening to tokens in a Spanish sentence context.
- Green, K. P., & Norrix, L. W. (2001). Perception of /r/ and /l/ in a stop cluster: Evidence of cross-modal context effects. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 166-177. PMID: 11248931. Experiments were conducted investigating unimodal and cross-modal phonetic context effects on /r/ and /l/ identifications to test a hypothesis that context effects arise in early auditory speech processing. Experiment 1 demonstrated an influence of a preceding bilabial stop consonant on the acoustic realization of /r/ and /l/ produced within the stop clusters /ibri/ and /ibli/. In Experiment 2, members of an acoustic /iri/ to /ili/ continuum were paired with an acoustic /ibi/. These dichotic tokens were associated with an increase in "l" identification relative to the /iri/ to /ili/ continuum. In Experiment 3, the /iri/ to /ili/ tokens were dubbed onto a video of a talker saying /ibi/. This condition was associated with a reliable perceptual shift relative to an auditory-only condition in which the /iri/ to /ili/ tokens were presented by themselves, ruling out an account of these context effects as arising during early auditory processing.
- Norrix, L. W., & Keintz, C. (2000). Auditory-visual context effects on the perception of voicing. Journal of the Acoustical Society of America, 108(5), 2482-2482. doi:10.1121/1.4743158. Perception of an auditory continuum can be changed by a preceding visual context [K. Green and L. Norrix, in press, JEP: HPP]. This study examines if a visual context for /l/, as in /abla/, influences the voicing distinction for an auditory /aba/-/apa/ continuum. Voice onset time (VOT) and F1 onset frequency were measured during production of /aba/, /apa/ (singleton conditions) and /abla/, /apla/ (cluster conditions). Findings indicate longer VOTs in /apla/ compared to /apa/ and lower F1 onset frequencies for the cluster compared to singleton conditions. An auditory-only (AO) continuum (/aba/-/apa/) was created and presented to listeners for identification. In an auditory-visual (AV) condition each auditory token was paired with a face saying /abla/ and presented for identification. If subjects use knowledge about coarticulation and its effects on VOT during production, then they should make more /b/ responses in the AV compared to AO condition. In contrast, if they recover F1 information from the visual ...
- Oved, I., Norrix, L. W., & Garrett, M. F. (2000). Identity priming using McGurk stimuli as primes. Journal of the Acoustical Society of America, 108(5), 2482-2482. doi:10.1121/1.4743159. Research indicates that auditory and visual information is integrated during the perception of speech. Conflicting auditory and visual stimuli can result in an illusory experience known as the McGurk effect (e.g., auditory /bav/ dubbed onto a face saying /gav/ results in a perception of "dav"). This study used a priming paradigm to investigate whether a phonemic representation for the auditory portion of a McGurk stimulus is active after the illusory phoneme is experienced. Subjects were given (nonword) prime-target conditions, including: (1) McGurk (e.g., prime auditory /bav/ + visual /gav/ = "dav"; target auditory /bav/); (2) Incongruent (e.g., prime auditory-visual /mav/, target auditory /bav/); (3) Identity (e.g., prime auditory-visual /yav/, target auditory /yav/). Results show that mean reaction times to repeat targets were fastest in the identity condition. Response times for the McGurk and incongruent conditions were indistinguishable from one another and significantly slower than the identity condition. This finding suggests that once the auditory and visual information is combined and a phonemic representation is made, the actual auditory signal is no longer available to affect processing of the target. [This work is based on ideas developed by the late Kerry P. Green and supported by NSF.]
- Rosenblum, L. D., Oved, I., & Norrix, L. W. (2000). Auditory-visual context effects in a speech and nonspeech condition. Journal of the Acoustical Society of America, 107(5), 2887-2887. doi:10.1121/1.428730. Perception of an auditory continuum can be changed by a preceding visual context [Norrix and Green, in Proc. AVSP'99, Santa Cruz, CA, August, 1999]. This study asked whether this perceptual shift requires that listeners experience the stimuli as speech. A sine-wave continuum (/ara-ala/) was presented to three groups of trained participants for identification (auditory-only condition). Tokens from the continuum were also paired with a point-light display of a talker saying /aba/ (auditory-visual condition) and presented to the same listeners for identification. Observers in group one (speech) identified the sounds as containing an /r/ or /l/ and reported after the experiment that they heard the sounds as speech. Group two, also instructed to identify the speech sounds as containing /r/ or /l/, reported they did not hear the sounds as speech. Group three (nonspeech) was instructed to identify the environmental sounds as most similar to the "first" or "last" sound of the continuum. Results indicated a reliable shift for the auditory-visual compared to the auditory-only identification function only in the speech group, suggesting that auditory-visual context effects might depend on observers interpreting the stimuli as speech. [Work supported by NSF Grant #SBR9809013 awarded to Kerry P. Green (deceased).]
- Green, K. P., & Norrix, L. W. (1997). Acoustic cues to place of articulation and the McGurk effect: The role of release bursts, aspiration, and formant transitions. Journal of Speech, Language, and Hearing Research, 40(3), 646-665. PMID: 9210121. The McGurk effect demonstrates that the perceived place of articulation of an auditory consonant (such as /bi/) can be influenced by the simultaneous presentation of a videotape of a talker saying a conflicting consonant such as /gi/. Usually, such a presentation is perceived by observers as 'di' or 'δi' (known as fusion responses). The reverse pairing (auditory /gi/ paired with a visual /bi/) results in 'bgi' percepts. These are known as combination responses. In the current study, three experiments examined how acoustic information about place of articulation contained within the release bursts, aspiration, and voiced formants and transitions of a consonant contributes to the McGurk effect. In the first experiment, the release bursts and aspiration were deleted from the acoustic signal. This manipulation resulted in a smaller impact on McGurk 'fusion' tokens relative to the McGurk 'combination' tokens. This asymmetry may be related to the perceptual salience of the release bursts and aspiration for velar compared to the bilabial tokens used in this experiment and their importance for obtaining the combination percept. In Experiment 2, the release bursts and aspiration were increased in amplitude. Results revealed either no effect or a stronger McGurk effect for the manipulated tokens than for the intact tokens. This finding suggests that the McGurk effect for fusion tokens does not occur simply because the release bursts and aspiration are weak. In Experiment 3, low-pass filtering the second and higher formants and transitions was associated with the largest overall impact on the McGurk effect. This suggests that dynamic information contained within these formants is of primary importance in obtaining the McGurk effect. These cues are, however, context-dependent and vary as a function of talker and vowel context.
- Norrix, L. W., & Glattke, T. J. (1996). Distortion product otoacoustic emissions created through the interaction of spontaneous otoacoustic emissions and externally generated tones. Journal of the Acoustical Society of America, 100(2, Pt. 1), 945-955. PMID: 8759948. Spontaneous otoacoustic emissions (SOAEs) and external tones (XTs) were used as primaries f2 and f1, respectively (frequency of f2 > f1), to create 2f1-f2 distortion product otoacoustic emissions (DPOAEs). Amplitude and frequency of the SOAEs, XTs, and DPOAEs were recorded by placing a sensitive microphone in the ear canal and extracted using fast Fourier transform analysis. XTs were presented to ten ears at SOAE/f1 ratios between 1.08 and 1.22. XTs were incremented in 5-dB steps and ranged from levels equal to the initial SOAE amplitudes to levels at which the SOAEs and DPOAEs were suppressed into the noise floor. Results indicated that DPOAE amplitudes and SOAE suppression characteristics were idiosyncratic. Despite the variability, the following trends were noted: (1) at larger frequency ratios, DPOAE generation and SOAE suppression were associated with greater XT levels; (2) DPOAE growth functions were characterized by slopes less than 1 dB/dB, a maximum, rollover, and disappearance into the noise floor with increasing XT levels; (3) maximum amplitude DPOAEs were observed at frequencies approximately one-half octave lower than the SOAE (f2); (4) the presence of DPOAEs was associated with SOAE suppression; (5) the most common SOAE frequency shift, in the presence of XT stimulation, was a shift to a higher frequency.
- Norrix, L. W., & Glattke, T. J. (1996). Multichannel waveforms and topographic mapping of the auditory brainstem response under common stimulus and recording conditions. Journal of Communication Disorders, 29(3), 157-182. PMID: 8799852. Topographic representation of brain electrical activity may be employed to provide information about excitation patterns and symmetry of responses to sensory stimulation. The present investigation describes the waveforms and topographic distribution of the auditory brainstem response (ABR) in normal-hearing and neurologically normal women during several stimulus and recording conditions using a multiple electrode array. Acoustic stimuli were rarefaction clicks presented monaurally at five intensities. Latencies and amplitudes for the sagittal and coronal electrodes as well as the topographies of waves I, III, and V were described. Wave I, in an ipsilateral inverting reference condition, exhibited smallest amplitudes at the temporal electrode ipsilateral to the stimulated ear and maximum voltage at electrode sites contralateral to the stimulated ear. Amplitude and latency patterns of wave III were peculiar to level of stimulation, ear of stimulation, and selection of inverting electrode site. Wave III maximum voltage was detected at fronto-central scalp areas contralateral to the stimulated ear. Wave V latencies and amplitudes varied across electrode site and inverting electrode position. Its topography revealed maximum positive voltage at fronto-central electrodes, slightly asymmetric in some instances. Analysis of amplitude and topographic patterns of ABR components may provide useful information to supplement that provided by conventional displays of waveforms.
- Norrix, L. W., & Green, K. P. (1996). Auditory-visual context effects on the perception of /r/ and /l/ in a stop cluster. Journal of the Acoustical Society of America, 99(4), 2591-2603. doi:10.1121/1.415263. Context effects in speech perception are thought to reflect knowledge about coarticulatory influences from the surrounding phonetic environment. This study investigated if such effects occur when the context is specified in the visual modality and segmental information in the auditory modality. In the first experiment, two continua were synthesized: one varying from /iri-ili/ and the other from /ibri-ibli/. When presented to listeners for identification as /r/ or /l/, there was a significant shift in the category boundary between the two continua. In a second experiment, the /iri-ili/ tokens were paired with visual tokens of a talker saying /ibi/. When presented in an auditory-visual (AV) condition, these tokens were perceived as ranging from /ibri/ to /ibli/. Observers also identified the /iri-ili/ tokens in a separate auditory-only (AO) condition. Results indicated a similar shift in the /r-l/ boundary between the AO and AV conditions. Analysis of /r/ and /l/ productions revealed that the perceptual adjustments were consistent with the acoustic consequences of articulating /r/ and /l/ in a stop cluster. The findings suggest that the perceptual adjustment reflects cross-modal knowledge of the coarticulatory effects of a bilabial consonant on the acoustic realization of /r/ and /l/. [Work supported by NIDCD, NIH.]
- Norrix, L. W., & Green, K. P. (1996). Cross-modal context effects on the perception of /r/ and /l/ in a speech and nonspeech mode. Journal of the Acoustical Society of America, 100(4), 2571-2571. doi:10.1121/1.417403. Norrix and Green [J. Acoust. Soc. Am. 99, 2591-2592 (1996)] provided evidence for cross-modal context effects on the perception of /r/ and /l/ in a stop cluster. Tokens from a synthetic /iri-ili/ continuum were dubbed onto a visual /ibi/. When presented in an auditory-visual (AV) condition, the tokens were perceived as ranging from /ibri/ to /ibli/. Results indicated a reliable shift in the AV condition relative to an auditory-only (AO) condition. This shift was in accord with acoustic consequences of articulating /r/ and /l/ in a stop cluster. In the current study, sine-wave analogs of the /iri-ili/ tokens were constructed and presented to two groups of observers in an AO and AV condition. Group One was told they would hear schematic speech sounds and instructed to identify what they heard as /r/ or /l/. Group Two made up their own criteria for classifying the tokens as nonspeech sounds. Results indicated a reliable shift in the /r-l/ boundary between the AO and AV conditions for the speech group only an...
- Norrix, L. W., DeYoung, D. W., Krausman, P. R., Etchberger, R. C., & Glattke, T. J. (1995). Conductive hearing loss in bighorn sheep. Journal of Wildlife Diseases, 31(2), 223-227. PMID: 8583641. In January 1993 we simulated a conductive hearing loss in three Mexican bighorn sheep (Ovis canadensis mexicana) by placing bone wax or saline solution in their ear canals. Our objective was to test whether lesions of the external auditory canal caused by psoroptic mites (Psoroptes ovis) may lead to conductive hearing loss in bighorn sheep. We assessed the effects of these manipulations using the auditory brainstem response test. Placing saline solution in the external auditory canal, which loads the tympanic membrane, had a more dramatic effect on the auditory brainstem response than did bone wax. We propose that decreased hearing sensitivity or alterations in resonance characteristics of the external auditory canal, due to psoroptic scabies lesions, may make bighorn sheep more susceptible to predation.
Presentations
- Muller, T. F., Marrone, N. L., Norrix, L. W., Le, G., & Wong, A. A. (2015, October). Toward an Affordable Hearing Aid Option for Adult Arizonans with Limited Income. Meeting of the Arizona Commission for the Deaf and Hard of Hearing. Tucson, Arizona: Arizona Commission for the Deaf and Hard of Hearing.
- Norrix, L. W., Cone, B., & Faux, C. (2015, spring). Auditory Processing Disorders. Tucson Unified School District Presentation. Tucson High School: Tucson Unified School District.
- Norrix, L. W., Cone, B., Faux, C., & Velenovsky, D. (2015, spring). Auditory Neuropathy Spectrum Disorder. ArSHA Convention. Phoenix, AZ: Arizona Speech-Language-Hearing Association.
- Norrix, L. W., Denny, N., & Mast, W. (2015, fall). Classroom Acoustics. Inservice for Pascua Yaqui Teachers. Tucson, Arizona: Pascua Yaqui Head Start.
- Norrix, L. W., Velenovsky, D., & Faux, C. (2015, spring). Auditory Neuropathy Spectrum Disorder. University of Arizona Colloquium.
- Adamovich, S. L., Hopwood, L., Le, G., Marrone, N. L., Muller, T. F., Norrix, L. W., Rubiano, V., & Wong, A. A. (2014, Fall (October)). The 2014 Arizona Affordable Hearing Aid Task Force. Hearing Assistive Devices. Tucson: University of Arizona, Department of Speech, Language, and Hearing Sciences. Podium presentation at the 2014 University of Arizona, Department of Speech, Language, and Hearing Sciences "Hearing Assistive Devices" Conference, Tucson, AZ (October 3, 2014).
- Adamovich, S. L., Hopwood, L., Le, G., Marrone, N. L., Muller, T. F., Norrix, L. W., Rubiano, V., & Wong, A. A. (2014, Fall (October)). The 2014 Arizona Affordable Hearing Aid Task Force. SLHS Colloquium. Tucson: University of Arizona, Department of Speech, Language, and Hearing Sciences.
- Norrix, L. W., Camarato, K., Ross, J., Dean, J., & Harris, F. (2011, October). FM and Open Fittings. Oticon Pediatric Conference. San Antonio, TX: Oticon.
Poster Presentations
- Thein, J., Norrix, L. W., Vasey, J., & Velenovsky, D. S. (2018, March). Residual noise using Intelligent Hearing and Vivosonic Integrity ABR systems. American Auditory Society. Scottsdale, AZ.
- Le, G., Marrone, N. L., Wong, A. A., Muller, T. F., Norrix, L. W., Adamovich, S. L., Rubiano, V., & Hopwood, L. (2015, March). Provider Perspectives on the Accessibility of Hearing Healthcare in Arizona. Meeting of the American Auditory Society. Phoenix, Arizona: American Auditory Society.
Others
- Muller, T. F., Marrone, N. L., Norrix, L. W., Le, G., Wong, A. A., & Adamovich, S. L. (2015, October). Toward an Affordable Hearing Aid Option for Adult Arizonans with Limited Income. Report Commissioned by and Presented to the Arizona Commission for the Deaf and Hard of Hearing (ACDHH).