The latest medical research on Audiology

The research magnet gathers the latest research from around the web, based on your specialty area. Below you will find a selection of the most recent articles from reputable medical journals about audiology, gathered by our medical AI research bot.

The selection below is filtered by medical specialty. Registered users get access to the Plexa Intelligent Filtering System that personalises your dashboard to display only content that is relevant to you.


Exploring Interactive Songs as a Vocabulary Input Context.

Speech Language Path

Interactive songs are a common shared activity for many families and within early childhood classrooms. These activities have the potential to be rich sources of vocabulary input for children with and without language impairments. However, little is known about how caregivers currently provide input for different types of vocabulary during these activities. The purpose of this research note is to provide preliminary information on how caregivers provide input related to verbs within an interactive song activity.

Observations of caregivers engaging in song activities with their child were collected. The gestures used during the interactions were coded.

The results show that, when given examples, caregivers provide gestural input both frequently and consistently.

Clinical implications and future directions for exploring songs as an intervention context are discussed.

The Influence of Asymmetric Hearing Loss on Peripheral and Central Auditory Processing Abilities in Patients With Vestibular Schwannoma.

Ear and Hearing

Asymmetric or unilateral hearing loss (AHL) may cause irreversible changes in the processing of acoustic signals in the auditory system. We aim to provide a comprehensive view of the auditory processing abilities of subjects with acquired AHL, and to examine the influence of AHL on speech perception under difficult conditions and on auditory temporal and intensity processing.

We examined peripheral and central auditory functions in 25 subjects with AHL resulting from vestibular schwannoma and compared them with those of 24 normal-hearing controls who were matched to the AHL subjects in mean age and in hearing thresholds in the healthy ear. Besides the basic hearing threshold assessment, the tests comprised the detection of tones and gaps in a continuous noise, comprehension of speech in babble noise, binaural interactions, difference limen of intensity, and detection of frequency modulation. For the AHL subjects, the selected tests were performed separately for the healthy and the diseased ear.

We observed that binaural speech comprehension, gap detection, and frequency modulation detection abilities were dominated by the healthy ear and were comparable for both groups. The AHL subjects were less sensitive to interaural delays; however, they exhibited a higher sensitivity to sound level, as indicated by a lower difference limen of intensity and a higher sensitivity to interaural intensity differences. Correlations between the individual test scores indicated that speech comprehension by the AHL subjects was associated with different auditory processing mechanisms than for the control subjects.

The data suggest that AHL influences both peripheral and central auditory processing abilities and that speech comprehension under difficult conditions relies on different mechanisms for the AHL subjects than for normal-hearing controls.

The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users.

Ear and Hearing

Cochlear implants (CI) are remarkably effective but have limitations in transmitting the spectro-temporal fine structure of speech. This may impair the processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found differences in spoken-emotion processing between CI users with postlingual deafness (postlingual CI) and matched normal-hearing (NH) controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (where prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. The intact early auditory experience of postlingual CI users may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to those of our previous study (Taitlebaum-Swead et al. 2022; postlingual CI).

Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception. Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI).

When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration.

Taken together with our previous study, these findings indicate that the degradation of acoustic information by the CI impairs the processing of prosodic emotions in both CI user groups. This distortion appears to lead CI users to over-rely on semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as for the accumulation of emotional information across the two channels. The results suggest that interventions for spoken-emotion processing should take the onset of hearing loss into account.

Meaningful life changes following hearing aid use: a qualitative user perspective.

International Journal of Audiology

This study aimed to explore meaningful life changes due to hearing aid use in adult users.

A cross-sectional survey design was used with open-ended questions analysed using inductive qualitative content analysis.

US-based adult hearing aid users (n = 653) from the Hearing Tracker website community and Lexie Hearing database.

Participants had a mean age of 65.4 years (SD = 13.6); 61.2% were male, 38.3% female, and 0.5% other. Analysis of 2122 meaning units from the responses identified two broad domains: 'meaningful benefits' (n = 1709; 80.5%) and 'remaining difficulties' (n = 413; 19.5%). The meaningful benefits domain included five categories (27 sub-categories): (a) psychosocial benefits, (b) improvements in hearing, (c) personal benefits, (d) hearing aid features and connectivity, and (e) situational benefits. Participants reported enhanced relationships and improved occupational functioning as key benefits. The remaining difficulties domain contained four categories (25 sub-categories): (a) hearing aid limitations, (b) hearing and communication issues, (c) situational difficulties, and (d) personal issues. Notable difficulties included hearing aid design issues and challenges in noisy environments.

Hearing aid users reported diverse benefits and persistent challenges related to device use, illustrating the complexity of their lived experiences. These findings can inform empathetic, effective rehabilitation strategies and user-centric hearing aid technologies.

Transducer Variability in Speech-in-Noise Testing: Considerations Related to Stimulus Bandwidth.

Am J Audiology

Clinical audiologists typically assume that headphones and insert phones will produce comparable results when used for speech-in-noise or other audiological tests; however, this may not always be the case. Here, we show that there are significant differences between the scores that previous studies have reported for headphone and insert-phone transducers on the Words-in-Noise (WIN) Test. We also discuss the possibility that the variation in high-frequency output allowed under the speech source specifications of American National Standards Institute (ANSI) S3.6 contributes to transducer-dependent differences in performance on the WIN and other tests presented through the auxiliary input channels of clinical audiometers.

A literature review was conducted to identify articles that reported WIN Test results for both listeners with normal hearing and listeners with hearing impairment, and that specified the type of transducer (insert or TDH-50) used for data collection.

Among the 19 included studies, participants with normal hearing using inserts exhibited systematically worse WIN Test scores compared to those using TDH-50 headphones, while participants with hearing loss showed comparable average scores across transducer types.

The results highlight the importance of considering transducer type when interpreting WIN Test outcomes, particularly when comparing to normative scores obtained from individuals with normal hearing. Although further research is needed to elucidate the underlying mechanisms driving differences in test performance across transducer types, these findings underscore the need for standardized test administration protocols and careful documentation of transducer type when administering speech-in-noise tests for clinical or research applications.

Chronic Electro-Acoustic Stimulation May Interfere With Electric Threshold Recovery After Cochlear Implantation in the Aged Guinea Pig.

Ear and Hearing

Electro-acoustic stimulation (EAS) combines electric stimulation via a cochlear implant (CI) with residual low-frequency acoustic hearing, with benefits for music appreciation and speech perception in noise. However, many EAS CI users lose residual acoustic hearing, reducing this benefit. The main objectives of this study were to determine whether chronic EAS leads to more hearing loss compared with CI surgery alone in an aged guinea pig model, and to assess the relationship of any hearing loss to histological measures. It is also important to understand the factors affecting the efficacy of electric stimulation. If one contributor to CI-induced hearing loss is damage to the auditory nerve, both acoustic and electric thresholds will be affected. Excitotoxicity from EAS may also affect electric thresholds, while electric stimulation is osteogenic and may increase electrode impedances. Hence, secondary objectives were to assess how electric thresholds are related to the amount of residual hearing loss after CI surgery, and how EAS affects electric thresholds and impedances over time.

Two groups of guinea pigs, aged 9 to 21 months, were implanted with a CI in the left ear. Preoperatively, the animals had a range of hearing losses, as expected for an aged cohort. Beginning 4 weeks after surgery, the EAS group (n = 5) received chronic EAS for 8 hours a day, 5 days a week, for 20 weeks via a tether system that allowed free movement during stimulation. The nonstimulated (NS) group (n = 6) received no EAS over the same timeframe. Auditory brainstem responses (ABRs) and electrically evoked ABRs (EABRs) were recorded at 3- to 4-week intervals to assess changes in acoustic and electric thresholds over time. At 24 weeks after surgery, cochlear tissue was harvested for histological evaluation; only animals without electrode extrusions were analyzed (n = 4 per ear).

Cochlear implantation led to an immediate worsening of ABR thresholds, peaking between 3 and 5 weeks after surgery and then recovering and stabilizing by 5 to 8 weeks. Significantly greater ABR threshold shifts were seen in the implanted ears than in the contralateral, non-implanted control ears after surgery. After EAS and its termination, no significant additional ABR threshold shifts were seen in the EAS group compared with the NS group. A surprising finding was that NS animals had significantly greater recovery in EABR thresholds over time, with decreases (improvements) of -51.8 ± 33.0 and -39.0 ± 37.3 c.u. at 12 and 24 weeks, respectively, compared with EAS animals, whose EABR thresholds increased (worsened) by +1.0 ± 25.6 and +12.8 ± 44.3 c.u. at 12 and 24 weeks. Impedance changes over time did not differ significantly between groups. After exclusion of cases with electrode extrusion or significant trauma, no significant correlations were seen between ABR and EABR thresholds, or between ABR thresholds and histological measures of inner/outer hair cell counts, synaptic ribbon counts, stria vascularis capillary diameters, or spiral ganglion cell density.

The findings do not indicate that EAS significantly disrupts acoustic hearing, although the small sample size limits this interpretation. When surgical trauma was minimized, no associations were seen between hair cell, synaptic ribbon, spiral ganglion cell, or stria vascularis measures and hearing loss after cochlear implantation. In cases of major trauma, both acoustic and electric thresholds were elevated, which may explain why CI-only outcomes are often better when trauma and hearing loss are minimized. Surprisingly, chronic EAS (or electric stimulation alone) may negatively impact electric thresholds, possibly by preventing recovery of the auditory nerve after CI surgery. More research is needed to confirm the potentially negative impact of chronic EAS on electric threshold recovery.

Extending Double Empathy: Effects of Neurotype-Matching on Communication Success in an Expository Context.

Speech Language Path

Milton's theory of double empathy posits that the difference in communication styles between people of different neurotypes contributes to mutual misunderstandings. The current quasi-experimental study seeks to expand on research indicating that matched neurotype pairs tend to communicate more effectively than mixed neurotype pairs by examining communication across and within neurotypes in an expository language context.

Thirty autistic adults and 28 nonautistic adults were paired in either a matched neurotype or mixed neurotype condition. The pairs' interactions involved giving and listening to directions to draw an image. Interactions were recorded, transcribed, and coded for communication accuracy, rate, and clarity. Participants also completed a survey about the rapport they experienced in the interaction.

Matched neurotype pairs were significantly more accurate in their communication than mixed neurotype pairs. Rate was fastest among mixed neurotype pairs, but clarity did not differ significantly across conditions. Matched autistic pairs reported significantly lower rapport than other pairs.

This finding lends further support to the neurodiversity model by demonstrating that autistic communication is not inherently deficient. Further research is necessary to investigate a variety of influences on rate, clarity, and rapport development. Clinical implications include considerations for neurodiversity-affirming communication supports for expository contexts such as classroom directions or workplace instructions.

Interplay of Semantic Plausibility and Word Order Canonicity in Sentence Processing of People With Aphasia Using a Verb-Final Language.

Speech Language Path

The Western Aphasia Battery is widely used to assess people with aphasia (PWA). Sequential Commands (SC) is one of the most challenging subtests for PWA. However, test items confound linguistic factors that make sentences difficult for PWA. The current study systematically manipulated semantic plausibility and word order in sentences like those in SC to examine how these factors affect comprehension deficits in aphasia.

Fifty Korean speakers (25 PWA and 25 controls) completed a sentence-picture matching task that manipulated word order (canonical vs. noncanonical) and semantic plausibility (plausible vs. less plausible). Analyses focused on accuracy and aimed to identify sentence types that best discriminate the groups. Additionally, we explored which sentence type serves as the best predictor of aphasia severity.

Relative to the controls, PWA demonstrated disproportionately greater difficulty processing less plausible sentences than plausible ones. Across the groups, noncanonical and less plausible sentences elicited lower accuracy than canonical and plausible sentences. Notably, the accuracy of the PWA and control groups differed for noncanonical and for less plausible sentences. Additionally, aphasia severity correlated significantly with accuracy on less plausible sentences.

Even in languages with flexible word order, PWA find it challenging to process sentences with noncanonical syntactic structures and less plausible semantic roles.

Exploring cross-linguistic differences in parental input and their associations with child expressive language in ASD: Bulgarian versus English comparison.

Int J Lang

Parental input plays a central role in typical language acquisition and development. In autism spectrum disorder (ASD), characterized by social communicative and language difficulties, parental input presents an important avenue for investigation as a target for intervention. A rich body of literature has identified which aspects of grammatical complexity and lexical diversity are most associated with child language ability in both typical development and autism. Yet, the majority of these studies are conducted with English-speaking children, thus potentially overlooking nuances in parental input derived from cross-linguistic variation.

To examine the differences in verbal parental input to Bulgarian- and English-speaking children with ASD. To examine whether aspects of verbal parental input found to be concurrent predictors of English-speaking children's expressive language ability are also predictors of the expressive language of Bulgarian-speaking children with ASD.

We compared parental input to Bulgarian-speaking (N = 37; 2;7-9;10 years) and English-speaking (N = 37; 1;8-4;9 years) children with ASD matched on expressive language. Parent-child interactions were collected during free play with developmentally appropriate toys. These interactions were transcribed, and key measures of parental input were extracted.

English-speaking parents produced more word tokens and word types than Bulgarian-speaking parents. However, Bulgarian parents produced more verbs in relation to nouns and used more statements and exclamations but asked fewer questions than English-speaking parents. In addition, child age and parents' use of questions were significant concurrent predictors of child expressive vocabulary.

What is already known on the subject

A rich body of literature has identified the specific aspects of grammatical complexity, lexical diversity, and question-asking that are concurrently and longitudinally associated with the language ability of children with typical development and of children with ASD. Yet, the majority of these studies are conducted with English-speaking children.

What this paper adds to the existing knowledge

The present study finds that there are specific differences in verbal parental input to Bulgarian- and English-speaking children with autism in terms of lexical composition and question-asking. Bulgarian parents used more verbs than nouns, and the opposite pattern was found for English-speaking parents. In addition, Bulgarian parents asked fewer questions but used more statements and exclamations. Nevertheless, parental question use was significantly correlated with children's language ability across both groups, suggesting that question-asking should be further examined as a potential target for parent-mediated language interventions for Bulgarian children with autism.

What are the potential or actual clinical implications of this work?

Most language and social communication interventions for autism are designed and piloted with English-speaking children. These interventions are often simply translated and used in different countries, with different populations and in different contexts. However, considering that one of the defining characteristics of autism is language difficulty, more studies should examine (1) how these language difficulties manifest in languages other than English, and (2) what characterizes verbal parental input in these other contexts. Such research should inform future language and social communication interventions. The present study emphasizes the cross-linguistic differences between Bulgarian- and English-speaking parents' verbal input to their children with autism.

Extended High-Frequency Thresholds: Associations With Demographic and Risk Factors, Cognitive Ability, and Hearing Outcomes in Middle-Aged and Older Adults.

Ear and Hearing

This study had two objectives: to examine associations between extended high-frequency (EHF) thresholds, demographic factors (age, sex, race/ethnicity), risk factors (cardiovascular, smoking, noise exposure, occupation), and cognitive abilities; and to determine variance explained by EHF thresholds for speech perception in noise, self-rated workload/effort, and self-reported hearing difficulties.

This study was a retrospective analysis of a data set from the MUSC Longitudinal Cohort Study of Age-related Hearing Loss. Data from 347 middle-aged adults (45 to 64 years) and 694 older adults (≥65 years) were analyzed. Speech perception was quantified using low-context Speech Perception In Noise (SPIN) sentences. Self-rated workload/effort was measured using the effort prompt from the National Aeronautics and Space Administration (NASA) Task Load Index. Self-reported hearing difficulty was assessed using the Hearing Handicap Inventory for the Elderly/Adults. The Wisconsin Card Sorting Task and the Stroop Neuropsychological Screening Test were used to assess selected cognitive abilities. Pure-tone averages for conventional frequencies and for EHF thresholds between 9 and 12 kHz (PTA(9-12 kHz)) were used in simple linear regression analyses to examine relationships between thresholds and demographic and risk factors, and in linear regression models to assess the contribution of PTA(9-12 kHz) to the variance in the three outcomes of interest. Further analyses were performed on a subset of individuals with thresholds ≤25 dB HL at all conventional frequencies to control for the influence of hearing loss on the association between PTA(9-12 kHz) and the outcome measures.

PTA(9-12 kHz) was higher in males than in females, and higher in White participants than in racial Minority participants. Linear regression models showed that the associations between cardiovascular risk factors and PTA(9-12 kHz) were not statistically significant. Older adults who reported a history of noise exposure had higher PTA(9-12 kHz) than those without such a history, while associations between noise history and PTA(9-12 kHz) did not reach statistical significance for middle-aged participants. Linear models adjusting for age, sex, race, and noise history showed that higher PTA(9-12 kHz) was associated with greater self-perceived hearing difficulty and poorer speech recognition scores in noise for both middle-aged and older participants. Workload/effort was significantly related to PTA(9-12 kHz) for middle-aged, but not older, participants, while cognitive task performance was correlated with PTA(9-12 kHz) only for older participants. In general, PTA(9-12 kHz) did not account for additional variance in the outcome measures beyond conventional pure-tone thresholds, with the exception of self-reported hearing difficulties in older participants. Linear models adjusting for age and accounting for subject-level correlations in the subset analyses revealed no association between PTA(9-12 kHz) and the outcomes of interest.

EHF thresholds show age-, sex-, and race-related patterns of elevation that are similar to what is observed for conventional thresholds. The current results support the need for more research to determine the utility of adding EHF thresholds to routine audiometric assessment with middle-aged and older adults.

Long-Term Outcomes of Cochlear Implantation in Usher Syndrome.

Ear and Hearing

Usher syndrome (USH), characterized by bilateral sensorineural hearing loss (SNHL) and retinitis pigmentosa (RP), prompts increased reliance on hearing due to progressive visual deterioration. It can be categorized into three subtypes: USH type 1 (USH1), characterized by severe to profound congenital SNHL, childhood-onset RP, and vestibular areflexia; USH type 2 (USH2), presenting with moderate to severe progressive SNHL and RP onset in the second decade, with or without vestibular dysfunction; and USH type 3 (USH3), featuring variable progressive SNHL beginning in childhood, variable RP onset, and diverse vestibular function. Previous studies evaluating cochlear implant (CI) outcomes in individuals with USH used varying or short follow-up durations, while others did not evaluate outcomes for each subtype separately. This study evaluates CI performance in subjects with USH at both short- and long-term follow-up, considering each subtype separately.

This retrospective, observational cohort study identified 36 CI recipients (53 ears), who were categorized into four groups: early-implanted USH1 (first CI at ≤7 years of age), late-implanted USH1 (first CI at ≥8 years of age), USH2, and USH3. Phoneme scores at 65 dB SPL with CI were evaluated at 1 year, ≥2 years (mid-term), and ≥5 years (long-term) postimplantation. Each subtype was analyzed separately due to the significant variability in phenotype observed among the three subtypes.

Early-implanted USH1 subjects (N = 23 ears) achieved excellent long-term phoneme scores (100% [interquartile range {IQR} = 95 to 100]), with younger age at implantation significantly correlating with better CI outcomes. Simultaneously implanted subjects had significantly better outcomes than sequentially implanted subjects (p = 0.028). Late-implanted USH1 subjects (N = 3 ears) used the CI solely for sound detection and showed a mean phoneme discrimination score of 12% (IQR = 0 to 12), while still expressing satisfaction with ambient sound detection. In the USH2 group (N = 23 ears), a long-term mean phoneme score of 85% (IQR = 81 to 95) was found. Better outcomes were associated with younger age at implantation and higher preimplantation speech perception scores. USH3 subjects (N = 7 ears) achieved a mean postimplantation phoneme score of 71% (IQR = 45 to 91).

This study is currently one of the largest and most comprehensive evaluations of CI outcomes in individuals with USH, demonstrating that, overall, individuals with USH benefit from CI at both short- and long-term follow-up. Due to the considerable variability in phenotype among the three subtypes, each subtype was analyzed separately, resulting in smaller sample sizes. For USH1 subjects, optimal CI outcomes are expected with early simultaneous bilateral implantation. Late implantation in USH1 provides a signaling function, but the speech recognition achieved is insufficient for oral communication. In USH2 and USH3, favorable CI outcomes are expected, especially if individuals exhibit sufficient speech recognition with hearing aids and receive ample auditory stimulation preimplantation. Early implantation is recommended for USH2, given the progressive nature of the hearing loss and the concomitant severe visual impairment. Compared with USH2, predicting outcomes in USH3 remains challenging due to the variability found. Counseling for USH2 and USH3 should highlight the benefits of early implantation and encourage hearing aid use.

Assessment of communicative competence in adult patients with minimum response in intensive care units: A scoping review.

Int J Lang

Few formal instruments exist to assess the communicative competence of patients hospitalized in intensive care units (ICUs). This can limit interventions by health professionals.

To map the categories and instruments for assessing the communicative competence of adult patients with minimal response in ICUs.

A scoping review was carried out between February and March 2022, following the Joanna Briggs Institute protocol, using the MEDLINE (PubMed), Scopus, SciELO, Business Source Complete (via EBSCOhost), Academic Search Complete (via EBSCOhost) and Web of Science databases, covering publications in Portuguese, English and Spanish.

Eight studies met the inclusion criteria. The different communication and pain assessment protocols covered awareness, cognition, sensory capacity, motor capacity, language, speech and literacy.

What is already known on the subject

Patients in ICUs are subject to various forms of treatment and continuous, intensive monitoring, which can compromise their capacity to communicate and participate actively (e.g., sharing symptoms and making decisions). Although there is some awareness of their disadvantage in this regard, few protocols for assessing communicative competence have been adapted to patients with minimal response.

What this paper adds to the existing knowledge

The present review highlights different protocols for the assessment of communication and pain. They include the following categories: awareness, sensory capacity, auditory and visual acuity, positioning and motor capacity, language, speech, and literacy. The review offers a starting point for the construction of a formal assessment instrument encompassing these categories, along with duly validated guidelines for its application.

What are the potential or actual clinical implications of this work?

Such a formal assessment instrument must take into account the need to adapt to different patient profiles. It is hoped that it will provide speech therapists and other health professionals with the information required to implement an augmentative and alternative communication system (AACS) in which patients participate actively.