The latest medical research on Otology Neurotology

The research magnet gathers the latest research from around the web, based on your specialty area. Below you will find a sample of some of the most recent articles from reputable medical journals about otology and neurotology, gathered by our medical AI research bot.

The selection below is filtered by medical specialty. Registered users get access to the Plexa Intelligent Filtering System that personalises your dashboard to display only content that is relevant to you.

Effects of Bilateral Cochlear Implantation on Binaural Listening Tasks for Younger and Older Adults.

Audiology and Neuro-Otology

This study investigated the objective and subjective benefit of a second cochlear implant (CI) on binaural listening tasks of speech understanding in noise and localization in younger and older adults. We aimed to determine if the aging population can utilize binaural cues and obtain comparable benefits from bilateral CI (BIL_CI) when compared to the younger population.

Twenty-nine adults with severe to profound bilateral sensorineural hearing loss were included. Participants were evaluated in two conditions, better CI (BE_CI) alone and BIL_CI using AzBio and Bamford-Kowal-Bench (BKB) sentence in noise tests. Localization tasks were completed in the BIL_CI condition using a broadband stimulus, low-frequency stimuli, and high-frequency stimuli. A subjective questionnaire was administered to assess satisfaction with CI.

Older age was significantly associated with poorer performance on AzBio +5 dB signal-to-noise ratio (SNR) and BKB-speech in noise (SIN); however, improvements from BE_CI to BIL_CI were observed across all ages. In the AzBio +5 condition, nearly half of all participants achieved a significant improvement from BE_CI to BIL_CI, with the majority of those occurring in patients younger than 65 years of age. Conversely, the majority of participants who achieved a significant improvement in BKB-SIN were adults >65 years of age. Years of BIL_CI experience and time between implants were not associated with performance. For localization, mean absolute error increased with age for low and high narrowband noise, but not for the broadband noise. Response gain was negatively correlated with age for all localization stimuli. Neither BIL_CI listening experience nor time between implants significantly impacted localization ability. Subjectively, participants reported a reduction in disability with the addition of the second CI. There was no observed relationship between age or speech recognition score and satisfaction with BIL_CI.

Overall performance on binaural listening tasks was poorer in older adults than in younger adults. However, older adults were able to achieve significant benefit from the addition of a second CI, and performance on binaural tasks was not correlated with overall device satisfaction. The significance of the improvement was task and stimulus dependent but suggested a critical limit may exist for optimal performance on SIN tasks for CI users. Specifically, older adults require at least a +8 dB SNR to understand 50% of speech postoperatively; therefore, solely utilizing a fixed +5 dB SNR preoperatively to qualify CI candidates is not recommended as this test condition may introduce limitations in demonstrating CI benefit.
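The SNR conditions discussed above (+5 dB, +8 dB) describe how far the speech level sits above the noise floor. As a minimal sketch of that arithmetic (function and parameter names are ours, not from the study):

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in dB from RMS amplitudes.

    0 dB means speech and noise are equally loud; +5 dB means the
    speech RMS is about 1.78x the noise RMS (10 ** (5 / 20)).
    """
    return 20 * math.log10(signal_rms / noise_rms)
```

A +5 dB test condition is therefore a harder task than +8 dB, which is why the authors caution against qualifying older candidates on a fixed +5 dB SNR alone.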

Making sense of phantom limb pain.

Neurology, Neurosurgery and Psychiatry

Phantom limb pain (PLP) impacts the majority of individuals who undergo limb amputation. The PLP experience is highly heterogenous in its quality, ...

Effects of Sequential Bilateral Cochlear Implantation in Children: Evidence from Speech-Evoked Cortical Potentials and Tests of Speech Perception.

Audiology and Neuro-Otology

Benefits of bilateral cochlear implants (CI) may be compromised by delays to implantation of either ear. This study aimed to evaluate the effects of sequential bilateral CI use in children who received their first CI at young ages, using a clinical set-up.

One-channel cortical auditory evoked potentials and speech perception in quiet and noise were evoked at repeated times (0, 3, 6, 12 months of bilateral CI use) by unilateral and bilateral stimulation in 28 children with early-onset deafness. These children were unilaterally implanted before 3.69 years of age (mean ± SD of 1.98 ± 0.73 years) and received a second CI after 5.13 ± 2.37 years of unilateral CI use. Comparisons between unilaterally evoked responses were used to measure asymmetric function between the ears and comparisons between bilateral responses and each unilateral response were used to measure the bilateral benefit.

Chronic bilateral CI promoted changes in cortical auditory responses and speech perception performance; however, large asymmetries were present between the two unilateral responses despite ongoing bilateral CI use. Persistent cortical differences between the two sides at 1 year of bilateral stimulation were predicted by increasing age at the first surgery and inter-implant delay. Larger asymmetries in speech perception occurred with longer inter-implant delays. Bilateral responses were more similar to the unilateral responses from the first rather than the second CI.

These findings are consistent with the development of the aural preference syndrome and reinforce the importance of providing bilateral CIs simultaneously or sequentially with very short delays.

Poor Performer: A Distinct Entity in Cochlear Implant Users?

Audiology and Neuro-Otology

Several factors are known to influence speech perception in cochlear implant (CI) users. To date, the underlying mechanisms have not yet been fully clarified. Although many CI users achieve a high level of speech perception, a small percentage of patients do not benefit, or benefit only slightly, from the CI (poor performers, PP). In a previous study, PP showed significantly poorer results on nonauditory cognitive and linguistic tests than CI users with a very high level of speech understanding (star performers, SP). We now investigate whether PP also differ from CI users with average performance (average performers, AP) in cognitive and linguistic performance.

Seventeen adult postlingually deafened CI users with speech perception scores in quiet of 55 (9.32) % (AP) on the German Freiburg monosyllabic speech test at 65 dB underwent neurocognitive (attention, working memory, short- and long-term memory, verbal fluency, inhibition) and linguistic testing (word retrieval, lexical decision, phonological input lexicon). The results were compared to the performance of 15 PP (speech perception score of 15 [11.80] %) and 19 SP (speech perception score of 80 [4.85] %). For statistical analysis, Mann-Whitney U tests and discriminant analyses were performed.

Significant differences between PP and AP were observed on linguistic tests, in Rapid Automatized Naming (RAN: p = 0.0026), lexical decision (LexDec: p = 0.026), phonological input lexicon (LEMO: p = 0.0085), and understanding of incomplete words (TRT: p = 0.0024). AP also had significantly better neurocognitive results than PP in the domains of attention (M3: p = 0.009) and working memory (OSPAN: p = 0.041; RST: p = 0.015) but not in delayed recall (delayed recall: p = 0.22), verbal fluency (verbal fluency: p = 0.084), and inhibition (Flanker: p = 0.35). In contrast, no such differences were found between AP and SP. Based on the TRT and the RAN, AP and PP could be separated with 100% accuracy.

The results indicate that PP constitute a distinct entity of CI users that differs even in nonauditory abilities from CI users with an average speech perception, especially with regard to rapid word retrieval either due to reduced phonological abilities or limited storage. Further studies should investigate if improved word retrieval by increased phonological and semantic training results in better speech perception in these CI users.

Antithrombotic therapy in the postacute phase of cervical artery dissection: the Italian Project on Stroke in Young Adults Cervical Artery Dissection.

Neurology, Neurosurgery and Psychiatry

To explore the impact of antithrombotic therapy discontinuation in the postacute phase of cervical artery dissection (CeAD) on the mid-term outcome of these patients.

In a cohort of consecutive patients with first-ever CeAD, enrolled in the setting of the multicentre Italian Project on Stroke in Young Adults Cervical Artery Dissection, we compared postacute (beyond 6 months since the index CeAD) outcomes between patients who discontinued antithrombotic therapy and patients who continued taking antithrombotic agents during follow-up. The primary outcome was a composite of ischaemic stroke and transient ischaemic attack. Secondary outcomes were (1) brain ischaemia ipsilateral to the dissected vessel and (2) recurrent CeAD. Associations with the outcome of interest were assessed by the propensity score (PS) method.
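Propensity score methods, as invoked here, compare outcomes between treated and untreated patients matched on their estimated probability of treatment. The abstract does not specify the authors' implementation; a hypothetical sketch of greedy 1:1 nearest-neighbour matching on precomputed scores (all names are illustrative):

```python
def match_nearest(treated: dict, control: dict, caliper: float = 0.05):
    """Greedy 1:1 nearest-neighbour matching on precomputed propensity
    scores, without replacement.

    `treated` and `control` map patient IDs to scores in [0, 1];
    returns a list of (treated_id, control_id) pairs whose score
    difference falls within the caliper.
    """
    pairs = []
    available = dict(control)  # controls not yet matched
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # closest remaining control by absolute score difference
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs
```

In the study, 201 discontinuers were PS-matched to 201 continuers before the incidence comparison; the sketch above shows only the matching step, not the score estimation.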

Of the 1390 patients whose data were available for the outcome analysis (median follow-up time in patients who did not experience outcome events, 36.0 months (25th-75th percentile, 62.0)), 201 (14.4%) discontinued antithrombotic treatment. Primary outcome occurred in 48 patients in the postacute phase of CeAD. In PS-matched samples (201 vs 201), the incidence of primary outcomes among patients taking antithrombotics was comparable with that among patients who discontinued antithrombotics during follow-up (5.0% vs 4.5%; p(log rank test)=0.526), and so was the incidence of the secondary outcomes ipsilateral brain ischaemia (4.5% vs 2.5%; p(log rank test)=0.132) and recurrent CeAD (1.0% vs 1.5%; p(log rank test)=0.798).

Discontinuation of antithrombotic therapy in the postacute phase of CeAD does not appear to increase the risk of brain ischaemia during follow-up.

Impact of previous disease-modifying treatment on safety and efficacy in patients with MS treated with AHSCT.

Neurology, Neurosurgery and Psychiatry

Autologous haematopoietic stem cell transplantation (AHSCT) is a highly effective treatment for multiple sclerosis (MS). The impact of previous long-lasting disease-modifying treatments (DMTs) on the safety and efficacy of AHSCT is unknown.

To explore whether previous DMTs with long-lasting effects on the immune system (anti-CD20 therapy, alemtuzumab and cladribine) affect treatment-related complications, long-term outcome and risk of new MS disease activity in patients treated with AHSCT.

Retrospective observational study of 104 patients with relapsing-remitting MS treated with AHSCT in Sweden and Norway from 2011 to 2021, grouped according to the last DMT used ≤6 months prior to AHSCT. The primary outcomes were early AHSCT-related complications (mortality, neutropenic fever and hospitalisation length), long-term complications (secondary autoimmunity) and the proportion of patients with No Evidence of Disease Activity (NEDA-3) status: no new relapses, no MRI activity and no disease progression during the follow-up.
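The NEDA-3 composite described above is simply the conjunction of three conditions over the follow-up period; a minimal sketch (function and parameter names are illustrative, not from the study):

```python
def neda3(relapses: int, new_mri_lesions: int, disability_progression: bool) -> bool:
    """NEDA-3 status: no new relapses, no new MRI activity, and no
    confirmed disability progression during follow-up."""
    return relapses == 0 and new_mri_lesions == 0 and not disability_progression
```

A patient fails NEDA-3 if any one of the three components is present, which is why composite rates (here 81%) are always at or below each single-component rate.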

The mean follow-up time was 39.5 months (range 1-95). Neutropenic fever was a common AHSCT-related complication affecting 69 (66%) patients. There was no treatment-related mortality. During the follow-up period, 20 patients (19%) were diagnosed with autoimmunity. Occurrence of neutropenic fever, hospitalisation length or secondary autoimmunity did not vary dependent on the last DMT used prior to AHSCT. A total of 84 patients (81%) achieved NEDA-3 status, including all patients (100%) using rituximab, alemtuzumab or cladribine before AHSCT.

This study provides level 4 evidence that AHSCT in patients previously treated with alemtuzumab, cladribine or rituximab is safe and efficacious.

Immune-Nutritional Status as a Novel Prognostic Predictor of Bell's Palsy.

Audiology and Neuro-Otology

The prognosis of Bell's palsy, idiopathic facial nerve palsy (FNP), is usually predicted by electroneuronography in the subacute phase. However, it would be ideal to establish a reliable and objective examination applicable in the acute phase to predict the prognosis of FNP. Immune-nutritional status (INS), calculated from a peripheral blood examination, has recently been reported as a prognostic factor in various diseases. However, the validity of INS as a prognostic factor in Bell's palsy is not well established. We therefore conducted a retrospective study to investigate the usefulness of INS as a prognostic predictor of Bell's palsy.

We reviewed the medical records of 79 patients with Bell's palsy and divided them into "complete recovery" and "incomplete recovery" groups. Clinical features such as severity of FNP and INS, including the neutrophil-lymphocyte ratio (NLR), lymphocyte-monocyte ratio (LMR), prognostic nutritional index (PNI), and Controlling Nutritional Status (CONUT) score, were assessed.
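The blood-count indices named above have widely used definitions, though the abstract does not state the study's exact formulas. A hedged sketch of NLR, LMR, and the Onodera-style PNI (the multi-band CONUT score is omitted here because its cut-offs vary by source):

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-lymphocyte ratio (absolute counts in the same units)."""
    return neutrophils / lymphocytes

def lmr(lymphocytes: float, monocytes: float) -> float:
    """Lymphocyte-monocyte ratio."""
    return lymphocytes / monocytes

def pni(albumin_g_dl: float, lymphocytes_per_mm3: float) -> float:
    """Onodera prognostic nutritional index:
    10 x serum albumin (g/dL) + 0.005 x total lymphocyte count (/mm^3)."""
    return 10 * albumin_g_dl + 0.005 * lymphocytes_per_mm3
```

All three are cheap to derive from a routine blood panel, which is what makes them attractive as acute-phase predictors.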

In univariate analysis, statistically significant differences were observed in clinical score of facial movement, NLR, LMR, PNI, and CONUT score at the initial examination between the two groups (p < 0.05). Furthermore, in multivariate analysis, statistically significant differences were also observed in facial movement score and PNI at the initial examination (p < 0.05).

Immune and nutritional conditions play important roles in the pathogenesis of Bell's palsy, suggesting that INS is a useful prognostic factor in Bell's palsy.

Effect of Proximity to the Modiolus for the Cochlear CI532 Slim Modiolar Electrode Array on Evoked Compound Action Potentials and Programming Levels.

Audiology and Neuro-Otology

The first surgeries with CI532 showed an effect of the proximity of the electrode to the modiolus on the Evoked Compound Action Potentials (ECAPs).

Objectives of the study were to investigate the effect of the "pullback" procedure on intraoperative ECAP responses in three different electrode array positions and, additionally, to compare behavioral thresholds with those obtained in a group of patients with standard insertion. The hypothesis of this study was that pullback would result in lower ECAP and behavioral thresholds.

During insertion of the CI532 electrode array, ECAP was performed in three different positions for the pullback group: at initial insertion, at over-insertion, and after pullback. Insertion was monitored by fluoroscopy. In the standard group, ECAP was performed at the initial position, which is also the final position. ECAP thresholds (T-ECAPs) were compared within subjects at the initial and the final position in the pullback group and between groups in the final positions of the pullback and standard groups. Programming levels (C- and T-levels) were compared between the two groups 1 year after switch-on.

Intraoperative measurements in the pullback group showed lower average T-ECAPs after pullback compared with thresholds at the initial position. Comparison of intraoperative T-ECAPs at the final positions showed no statistically significant difference between the pullback group and the standard insertion group. Furthermore, 1 year after switch-on there was no statistically significant difference in C- and T-levels between the two groups.

The pullback maneuver of the CI532 electrode array after an over-insertion gave significantly lower T-ECAPs compared to the thresholds at the initial position. However, the between-groups analysis of pullback and standard insertion showed neither significantly different T-ECAPs nor different programming levels. Because T-ECAPs and programming levels vary considerably between subjects, large groups are required to detect differences between groups. Additionally, the effect of the pullback technique on preserving residual hearing is not yet known.

Audiological and Surgical Correlates of Myringoplasty Associated with Ethnography in the Bay of Plenty, New Zealand.

Audiology and Neuro-Otology

This retrospective cohort study of myringoplasty performed at Tauranga Hospital, Bay of Plenty, New Zealand from 2010 to 2020 sought to identify predictive factors for successful myringoplasty with particular consideration given to the known high prevalence of middle ear conditions in New Zealand Māori.

Outcomes were surgical success (perforation closure at 1 month) and hearing improvement, which were correlated against demographic, pathological, and surgical variables.

174 patients underwent 221 procedures (139 in children under 18 years old), with 66.1% of patients being New Zealand Māori and 24.7% New Zealand European ethnicity. Normalized by population demographics, New Zealand Māori were 2.3 times overrepresented, whereas New Zealand Europeans were underrepresented by 0.34 times (a 6.8 times relative treatment differential). The rate of surgical success was 84.6%, independent of patient age, gender, and ethnicity. A postauricular approach and the use of temporalis fascia grafts were both correlated with optimal success rates, whereas early postoperative infection (<1 month) was correlated with ∼3 times increased failure. Myringoplasty improved hearing in 83.1% of patients (average air-bone gap reduction of 10.7 dB). New Zealand Māori patients had ∼4 times greater preoperative conductive hearing loss compared to New Zealand Europeans, but benefited the most from myringoplasty.

New Zealand Māori and pediatric populations required greater access to myringoplasty, achieving good surgical and audiological outcomes. Myringoplasty is highly effective and significantly improves hearing, particularly for New Zealand Māori. Pediatric success rates were equivalent to adults, supporting timely myringoplasty to minimize morbidity from untreated perforations.

Correction of the Estimated Hearing Level of NB Chirp ABR in Normal Hearing Population.

Audiology and Neuro-Otology

The narrowband chirp (NB Chirp), a frequency-specific sound stimulus obtained by limiting the frequency bandwidth of the chirp, is increasingly applied in frequency-specific auditory brainstem response (fsABR) testing. Although some studies have demonstrated that the NB Chirp-evoked auditory brainstem response (NB Chirp ABR) elicits a better neural response than the tone burst-evoked auditory brainstem response and is preferred for fsABR, little is known about how to better estimate an individual's hearing level from the NB Chirp ABR threshold. The present study compared the accuracy and deviation of NB Chirp ABR thresholds corrected by different approaches in estimating the hearing level of people with normal hearing.

A total of 66 volunteers with normal hearing were randomly divided into a model group (n = 26), test group 1 (n = 20), and test group 2 (n = 20). The model group was used to calculate the threshold difference between NB Chirp ABR and pure-tone audiometry at 500 Hz, 1,000 Hz, 2,000 Hz, and 4,000 Hz, as well as the regression equation, providing a reference for the correction of estimated hearing level of NB Chirp ABR. Test group 1 was used to observe the accuracy and deviation of the "noncorrection (N)," "threshold difference (A1)," and "regression equation (A2)" methods in correcting the estimated hearing level of NB Chirp ABR. Test group 2 was used to replicate the analysis of test group 1 to verify the repeatability of the experimental results. All data were analyzed using SPSS 24.0.
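In the simplest reading of the methods above, the "threshold difference" (A1) correction subtracts the model group's mean ABR-minus-PTA offset at each frequency, while the "regression equation" (A2) correction predicts PTA from an ABR threshold via a least-squares line. A hypothetical sketch under that reading (names and data are ours, not the study's):

```python
def threshold_difference_correction(abr_thresholds: dict, mean_offsets: dict) -> dict:
    """A1-style correction: subtract the model group's mean
    (ABR threshold - pure-tone threshold) offset per frequency (Hz)."""
    return {f: abr_thresholds[f] - mean_offsets[f] for f in abr_thresholds}

def fit_regression(abr: list, pta: list):
    """A2-style correction: least-squares line pta = a + b * abr,
    fitted on model-group (ABR, PTA) threshold pairs. Returns (a, b)."""
    n = len(abr)
    mx, my = sum(abr) / n, sum(pta) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(abr, pta))
         / sum((x - mx) ** 2 for x in abr))
    return my - b * mx, b
```

Either way, a model group supplies the correction parameters, which are then applied to new listeners in the test groups.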

Test group 1 and test group 2 had similar results. First, the accuracy of the estimated hearing level of N was significantly higher than that of A1 or A2. Second, compared with "0," the deviation of the estimated hearing level of N was bigger than that of A1 or A2 at 500 Hz and 1,000 Hz, while similar at 2,000 Hz and 4,000 Hz. Finally, there was no significant difference in the deviation of the estimated hearing level between A1 and A2 at 500 Hz and 1,000 Hz.

Among people with normal hearing, it is necessary to correct NB Chirp ABR thresholds at 500 Hz and 1,000 Hz to reduce the deviation of the estimated hearing level. Both the threshold-difference and the regression-equation correction approaches can be used.

Sound Source Localization by Cochlear Implant Recipients with Normal Hearing in the Contralateral Ear: Effects of Spectral Content and Duration of Listening Experience.

Audiology and Neuro-Otology

Cochlear implant (CI) recipients with normal hearing (NH) in the contralateral ear experience a significant improvement in sound source localization when listening with the CI in combination with their NH-ear (CI + NH) as compared to with the NH-ear alone. The improvement in localization is primarily due to sensitivity to interaural level differences (ILDs). Sensitivity to interaural time differences (ITDs) may be limited by auditory aging, frequency-to-place mismatches, the signal coding strategy, and duration of CI use. The present report assessed the sensitivity to ILD and ITD cues in CI + NH listeners who were recipients of long electrode arrays that provide minimal frequency-to-place mismatches and were mapped with a coding strategy that presents fine structure cues on apical channels.

Sensitivity to ILDs and ITDs for localization was assessed using broadband noise (BBN), as well as high-pass (HP) and low-pass (LP) filtered noise for adult CI + NH listeners. Stimuli were 200-ms noise bursts presented from 11 speakers spaced evenly over a 180° arc. Performance was quantified as root-mean-squared error, and response patterns were analyzed to evaluate the consistency, accuracy, and side bias of the responses. Fifteen listeners completed the task at the 2-year post-activation visit; seven listeners repeated the task at a later annual visit.
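The root-mean-squared error metric named above can be sketched directly over paired response and target azimuths (an illustrative sketch only; the study's analysis pipeline is not described in the abstract):

```python
import math

def localization_rmse(responses: list, targets: list) -> float:
    """Root-mean-squared error in degrees between the azimuth a
    listener reported on each trial and the true speaker azimuth."""
    errors = [(r - t) ** 2 for r, t in zip(responses, targets)]
    return math.sqrt(sum(errors) / len(errors))
```

Lower values indicate better localization; a listener who always points at the correct speaker scores 0°.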

Performance at the 2-year visit was best with the BBN and HP stimuli and poorer with the LP stimulus. Responses to the BBN and HP stimuli were significantly correlated, consistent with the idea that CI + NH listeners primarily use ILD cues for localization. For the LP stimulus, some listeners responded consistently and accurately and with limited side bias, which may indicate sensitivity to ITD cues. Two of the 7 listeners who repeated the task at a later annual visit experienced a significant improvement in performance with the LP stimulus, which may indicate that sensitivity to ITD cues may improve with long-term CI use.

CI recipients with a NH-ear primarily use ILD cues for sound source localization, though some may use ITD cues as well. Sensitivity to ITD cues may improve with long-term CI listening experience.

Evaluating the Effectiveness of a New Auditory Training Program on the Speech Recognition Skills and Auditory Event-Related Potentials in Elderly Hearing Aid Users.

Audiology and Neuro-Otology

The objective of this study was to evaluate the effectiveness of a new auditory training (AT) program on speech recognition in noise and on auditory event-related potentials in elderly hearing aid users.

Thirty-three elderly hearing aid users aged 60-80 years participated. A new AT program was developed for the study; it lasted 8 weeks and included sound discrimination exercises and cognitive exercises. Seventeen individuals (mean age 72.17 ± 6.94 years) received AT and 16 individuals (mean age 71.75 ± 6.81 years) did not. The mismatch negativity (MMN) test and the matrix test were used to evaluate the effectiveness of AT. Tests were conducted for the study group before and after the AT; the control group was tested at the same time points, and the results were compared.

In comparison with the first evaluation, the last evaluation of the study group demonstrated a significant decrease in mean MMN latency (p = 0.038) and a significant improvement in matrix test score (p = 0.004); there was no difference in the control group.

The AT program prepared for the study was effective in improving speech recognition in noise in the elderly, and the efficacy of AT could be demonstrated with the MMN and matrix tests.