Music-Remixing Preferences of Prelingual and Postlingual Cochlear Implant Users

Description

The poor spectral and temporal resolution of cochlear implants (CIs) limits their users’ music enjoyment. Remixing music by boosting vocals while attenuating spectrally complex instruments has been shown to benefit the music enjoyment of postlingually deaf CI users. However, the effectiveness of music remixing in prelingually deaf CI users is still unknown. This study compared the music-remixing preferences of nine postlingually deaf, late-implanted CI users and seven prelingually deaf, early-implanted CI users, as well as their ratings of song familiarity and vocal pleasantness. Twelve songs were selected from the most streamed tracks on Spotify for testing. There were six remixed versions of each song: Original, Music-6 (6-dB attenuation of all instruments), Music-12 (12-dB attenuation of all instruments), Music-3-3-12 (3-dB attenuation of bass and drums and 12-dB attenuation of other instruments), Vocals-6 (6-dB attenuation of vocals), and Vocals-12 (12-dB attenuation of vocals). The prelingual group preferred the Music-6 and Original versions over the other versions, while the postlingual group preferred the Vocals-12 version over the Music-12 version. The prelingual group was more familiar with the songs than the postlingual group; however, song familiarity ratings did not significantly affect the pattern of preference ratings in either group. The prelingual group also gave higher vocal pleasantness ratings than the postlingual group. For the prelingual group, higher vocal pleasantness led to higher preference ratings for the Music-12 version. For the postlingual group, the overall preference for the Vocals-12 version was driven by preference ratings for songs with very unpleasant vocals. These results suggest that the patient factor of auditory experience and the stimulus factor of vocal pleasantness may affect the music-remixing preferences of CI users. As such, the music-remixing strategy needs to be customized for individual patients and songs.
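A minimal sketch of how remixed versions like those above could be generated, assuming each song is available as separate stems (vocals, bass, drums, and other instruments) at the same sampling rate; the stem names and the remix function are illustrative, not taken from the study.

import numpy as np

def db_to_gain(db):
    # Convert a level change in dB to a linear amplitude gain.
    return 10.0 ** (db / 20.0)

def remix(stems, attenuation_db):
    # Mix stems after attenuating each named stem by the given amount in dB
    # (positive values make the stem quieter); stems not listed are left unchanged.
    mix = np.zeros_like(next(iter(stems.values())), dtype=float)
    for name, signal in stems.items():
        mix = mix + signal * db_to_gain(-attenuation_db.get(name, 0.0))
    return mix

# Example: the Music-3-3-12 condition (3-dB attenuation of bass and drums,
# 12-dB attenuation of other instruments, vocals left at their original level).
# stems = {"vocals": vocals, "bass": bass, "drums": drums, "other": other}
# music_3_3_12 = remix(stems, {"bass": 3.0, "drums": 3.0, "other": 12.0})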
Date Created
2024
Agent

The Mechanisms of Auditory Training with Cochlear Implant Simulations

Description

Cochlear implants (CIs) restore hearing to nearly one million individuals with severe-to-profound hearing loss. However, with limited spectral and temporal resolution, CI users may rely heavily on top-down processing using cognitive resources for speech recognition in noise, and may change the weighting of different acoustic cues for pitch-related listening tasks such as Mandarin tone recognition. While auditory training is known to improve CI users’ performance in these tasks as measured by percent-correct scores, the effects of training on cue weighting, listening effort, and untrained tasks need to be better understood in order to maximize the training benefits. This dissertation addressed these questions by training normal-hearing (NH) listeners who listened to CI simulations. Study 1 examined whether Mandarin tone recognition training with enhanced amplitude envelope cues may improve tone recognition scores and increase the weighting of amplitude envelope cues over fundamental frequency (F0) contours. Compared to no training or natural-amplitude-envelope training, enhanced-amplitude-envelope training increased the benefits of amplitude envelope enhancement for tone recognition but did not increase the weighting of amplitude or F0 cues. Listeners who attended more to amplitude envelope cues in the pre-test improved more in tone recognition after enhanced-amplitude-envelope training. Study 2 extended Study 1 to compare the generalization effects of tone recognition training alone, vowel recognition training alone, and combined tone and vowel recognition training. The results showed that tone recognition training did not improve vowel recognition or vice versa, although tones and vowels are always produced together in Mandarin. Only combined tone and vowel recognition training improved sentence recognition, showing that both suprasegmental (i.e., tone) and segmental (i.e., vowel) cues are essential for sentence recognition in Mandarin. Study 3 investigated the impact of phoneme recognition training on the listening effort of sentence recognition in noise, as measured by a dual-task paradigm, pupillometry, and subjective ratings. Phoneme recognition training improved sentence recognition in noise. The dual-task paradigm and pupillometry indicated that, from pre-test to post-test, listening effort decreased in the control group without training but remained unchanged in the training group, suggesting that training may have motivated listeners to stay focused on the challenging task of sentence recognition in noise. Overall, non-clinical measures such as cue weighting and listening effort can enrich our understanding of training-induced perceptual and cognitive effects, and allow us to better predict and assess training outcomes.
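A minimal sketch related to Study 1's enhanced amplitude envelope cues: extract a smoothed Hilbert envelope and re-impose an expanded version of it on the signal. The exact enhancement used in the study is not described here, so the power-law expansion and its exponent are assumptions for illustration only.

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(x, fs, cutoff_hz=50.0):
    # Hilbert envelope smoothed by a 4th-order low-pass Butterworth filter.
    env = np.abs(hilbert(x))
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return np.maximum(filtfilt(b, a, env), 1e-12)

def enhance_envelope(x, fs, exponent=2.0):
    # Deepen the amplitude modulations by expanding the normalized envelope
    # (exponent > 1) and re-imposing it on the temporal fine structure.
    env = amplitude_envelope(x, fs)
    fine_structure = x / env
    enhanced_env = env.max() * (env / env.max()) ** exponent
    return fine_structure * enhanced_env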
Date Created
2024
Agent

'Active on Passive' Social Media Usage and its Effects on Self-Improvement Motives

Description

Social media (SM) has grown into a worldwide phenomenon affecting billions of users daily. To date, research has focused on two types of social media usage: Active and Passive. However, an additional type of use should be considered as SM grows and its prevalence raises concerns for future generations. This study introduces a third type of social media use, ‘Active on Passive’, defined as actions on social media that engage with oneself without inherently interacting with others. The term was developed to categorize ‘saving posts’ as an SM usage type and was measured by the number of saved posts an individual had on a specific platform. Using this variable, the present research examined whether ‘Active on Passive’ social media usage was associated with self-improvement motives and negative affect. Although no statistically significant relationships were observed among these factors, exploratory analyses of these variables are discussed, offering new insight into future directions for the proposition of ‘Active on Passive’ social media usage.
Date Created
2024
Agent

Assessing the Influence of Apple AirPods with Live Listen feature on Speech Recognition and Memory Retention in Noise Levels Simulating Noisy Healthcare Settings - Insights from QuickSIN

Description

This study aimed to evaluate the efficacy of the Apple AirPods Pro (2nd generation) Live Listen feature in enhancing word recognition and memory retention among individuals with varying degrees of hearing loss, as determined by their Signal-to-Noise Ratio (SNR) loss. Utilizing a single-group experimental design, the research measured participants' performance on word recognition and memory retention tasks with and without the Live Listen feature. Statistical analyses, including paired t-tests and linear regression, revealed significant improvements in word recognition (from 81.8% to 94.4%) and memory retention (from 43.8% to 59.4%) scores when the Live Listen feature was activated. Moreover, a positive correlation between SNR loss and recognition score improvements suggested a greater benefit for those with higher levels of hearing loss, although the relationship with memory retention improvements was less pronounced. These findings underscore the potential of the Live Listen feature as an effective assistive listening technology, highlighting its importance in enhancing auditory experiences for individuals with hearing impairments and encouraging further research into personalized auditory assistance technologies in noisy healthcare environments.
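A minimal sketch of the analyses named above (a paired t-test on scores with and without Live Listen, and a linear regression of the improvement on QuickSIN SNR loss); the data below are synthetic placeholders, not the study's measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
snr_loss = rng.uniform(0.0, 10.0, n)                             # QuickSIN SNR loss (dB)
score_off = 80.0 + rng.normal(0.0, 5.0, n)                       # word recognition (%) without Live Listen
score_on = score_off + 0.8 * snr_loss + rng.normal(0.0, 3.0, n)  # word recognition (%) with Live Listen

# Paired t-test: did word recognition improve when Live Listen was active?
t_stat, p_value = stats.ttest_rel(score_on, score_off)

# Linear regression: does the improvement grow with the degree of SNR loss?
improvement = score_on - score_off
fit = stats.linregress(snr_loss, improvement)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}; slope = {fit.slope:.2f} %/dB, r = {fit.rvalue:.2f}")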
Date Created
2024
Agent

Repeatability and Accuracy of a Widely-Available Voice-Based Stress Analysis Tool

Description

Stress, depression, and anxiety are prevalent mental health issues that affect individuals worldwide. As the search for effective solutions continues, advancements in technology have led to the development of digital tools for stress identification and management. The Cigna StressWaves Test (CSWT) is a publicly available stress analysis toolkit that claims to use “clinical-grade” artificial intelligence (AI) technology to evaluate individual stress levels through speech. To investigate this claim, this research conducted an independent validation study involving 60 participants over the age of 18. The primary objective of the study was to assess the repeatability and efficacy of the CSWT as a stress measurement tool. Key results indicate that the CSWT lacks test-retest reliability and convergent validity; that is, the CSWT is not a repeatable tool and does not provide stress outcomes similar to those of an established measure of stress, the Perceived Stress Scale (PSS). These findings cast doubt on the accuracy and effectiveness of the CSWT as a stress assessment tool. The public availability of the CSWT and the claim that it is a “clinical-grade” tool highlight concerns regarding the premature deployment of digital health tools for stress management.
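A minimal sketch of the two psychometric checks described above, test-retest reliability and convergent validity with the PSS, using Pearson correlations on synthetic placeholder scores; the CSWT's actual scoring scale and the study's analysis details are not reproduced here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60
cswt_visit1 = rng.uniform(1.0, 5.0, n)   # hypothetical CSWT stress scores, first session
cswt_visit2 = rng.uniform(1.0, 5.0, n)   # hypothetical CSWT stress scores, repeat session
pss_total = rng.integers(0, 41, n)       # Perceived Stress Scale totals (0-40)

retest_r, retest_p = stats.pearsonr(cswt_visit1, cswt_visit2)  # test-retest reliability
conv_r, conv_p = stats.pearsonr(cswt_visit1, pss_total)        # convergent validity with the PSS
print(f"test-retest r = {retest_r:.2f} (p = {retest_p:.3f}); convergent r = {conv_r:.2f} (p = {conv_p:.3f})")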
Date Created
2023
Agent

The Effect of Bimodal Hearing on Speech Intonation Production in Adult Cochlear Implant Users

Description

Prosodic features such as fundamental frequency (F0), intensity, and duration convey important information about speech intonation (i.e., whether an utterance is a statement or a question). Because cochlear implants (CIs) do not adequately encode pitch-related F0 cues, pre-lingually deaf pediatric CI users have poorer speech intonation perception and production than normal-hearing (NH) children. In contrast, post-lingually deaf adult CI users developed their speech production skills via normal hearing before deafness and implantation. Further, combined electric hearing (via CI) and acoustic hearing (via hearing aid, HA) may improve CI users’ perception of pitch cues in speech intonation. Therefore, this study tested (1) whether post-lingually deaf adult CI users have similar speech intonation production to NH adults and (2) whether their speech intonation production improves with auditory feedback via CI+HA (i.e., bimodal hearing). Eight post-lingually deaf adult bimodal CI users and nine NH adults participated in this study. Ten question-and-answer dialogues with an experimenter were used to elicit 10 pairs of syntactically matched questions and statements from each participant. Bimodal CI users were tested under four hearing conditions: no-device (ND), HA, CI, and CI+HA. The F0 change, intensity change, and duration ratio between the last two syllables of each utterance were analyzed to evaluate the quality of speech intonation production. The results showed no significant differences between CI and NH participants in any of the acoustic features of questions and statements. For CI participants, the CI+HA condition led to significantly greater F0 decreases in statements than the ND condition, while the ND condition led to significantly greater duration ratios for questions and statements. These results suggest that bimodal CI users change their use of prosodic cues for speech intonation production in different hearing conditions, and that access to auditory feedback via CI+HA may improve their voice pitch control to produce more salient statement intonation contours.
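A minimal sketch of the three utterance-final measures analyzed above, assuming mean F0 (Hz), mean intensity (dB), and duration (s) have already been extracted for the penultimate and final syllables of each utterance; the variable names and the semitone conversion for the F0 change are illustrative choices.

import numpy as np

def intonation_measures(f0_penult, f0_final, int_penult, int_final, dur_penult, dur_final):
    # F0 change (semitones), intensity change (dB), and duration ratio
    # between the last two syllables of an utterance.
    f0_change_st = 12.0 * np.log2(f0_final / f0_penult)  # positive = rising pitch (question-like)
    intensity_change = int_final - int_penult
    duration_ratio = dur_final / dur_penult
    return f0_change_st, intensity_change, duration_ratio

# Example: a question is expected to show a positive F0 change on the final syllable,
# and a statement a negative one.
# print(intonation_measures(180.0, 230.0, 62.0, 60.5, 0.21, 0.30))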
Date Created
2022
Agent

A Qualitative Systematic Review on Music Training Programs with Focus on Enhancing Cochlear Implant Users’ Music Appreciation

Description

Music is an important part of everyday life. It plays a crucial role in human connection and provides a communication channel for emotions. Hearing loss can negatively impact the music experience. Although cochlear implants (CIs) enable individuals with severe-to-profound hearing loss to successfully understand spoken language, many users find their experience with music less than satisfactory. Music training programs may offer a hopeful solution to recondition the music experience for CI users. However, the music training programs available to CI users today generally place more weight on improving the perceptual accuracy of music than on enhancing appreciation and enjoyment. The primary objective of this review is to identify different types of music training programs and their connection to music appreciation. A brief overview of the factors that contribute to music appreciation is also provided.
Date Created
2021
Agent

Vocal Emotion Production of Pre-lingually Deafened Cochlear Implant Children with Residual Acoustic Hearing

Description

Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing may produce pitch cues for vocal emotions similar to those of the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences in a happy or a sad emotion. The results showed no significant group effect for the ratio of mean fundamental frequency (F0) or the ratio of F0 standard deviation between emotions. Instead, CI children showed a significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, the aided pure-tone average hearing threshold of the acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
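A minimal sketch of the between-emotion measures named above, assuming per-utterance acoustic summaries (mean F0, F0 standard deviation, mean intensity, duration) are already available for matched happy and sad productions; the dictionary keys, units, and example values are illustrative.

def emotion_contrasts(happy, sad):
    # happy/sad: dicts with keys "mean_f0" (Hz), "sd_f0" (Hz), "intensity" (dB), "duration" (s).
    return {
        "mean_f0_ratio": happy["mean_f0"] / sad["mean_f0"],
        "sd_f0_ratio": happy["sd_f0"] / sad["sd_f0"],
        "intensity_diff_db": happy["intensity"] - sad["intensity"],
        "duration_ratio": happy["duration"] / sad["duration"],
    }

# Example with made-up values: happy speech tends to have higher and more variable F0,
# greater intensity, and shorter duration than sad speech.
print(emotion_contrasts({"mean_f0": 260.0, "sd_f0": 45.0, "intensity": 68.0, "duration": 1.6},
                        {"mean_f0": 210.0, "sd_f0": 20.0, "intensity": 63.0, "duration": 2.0}))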

Date Created
2021-05
Agent

The Effect of Transcranial Alternating Current Stimulation on Speech Motor Learning

Description

Speech motor learning is important for learning to speak during childhood and for maintaining the speech system throughout adulthood. Motor and auditory cortical regions play crucial roles in speech motor learning. This experiment aimed to use transcranial alternating current stimulation, a neurostimulation technique, to influence auditory and motor cortical activity. In this study, we used an auditory-motor adaptation task as an experimental model of speech motor learning. Subjects repeated words while receiving formant shifts, which made the auditory feedback sound different from what the subjects actually produced. During the adaptation task, subjects received Beta (20 Hz), Alpha (10 Hz), or Sham stimulation. We applied the stimulation to the ventral motor cortex, which is involved in planning speech movements. We found that the stimulation did not influence the magnitude of adaptation. We suggest that some limitations of the study may have contributed to these negative results.
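A minimal sketch of one common way to quantify adaptation in a formant-shift paradigm: compare produced F1 late in the perturbation phase to a baseline and express the change opposite to the shift direction. The study's exact analysis is not specified here, so the windowing, variable names, and example values are assumptions.

import numpy as np

def adaptation_magnitude(f1_baseline_trials, f1_hold_trials, shift_direction):
    # shift_direction: +1 if F1 in the auditory feedback was shifted up, -1 if shifted down.
    # Adaptation is positive when speakers change their production against the shift.
    change = np.mean(f1_hold_trials) - np.mean(f1_baseline_trials)
    return -shift_direction * change

# Example: feedback F1 was shifted up (+1) and the speaker lowers produced F1 by ~27 Hz.
print(adaptation_magnitude([620.0, 615.0, 630.0], [590.0, 585.0, 608.0], shift_direction=+1))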

Date Created
2021-05
Agent

The Effect of Spectral Resolution of Auditory Feedback on Speech Production of Normal-hearing Listeners

Description

Cochlear implants (CIs) successfully restore hearing sensation to profoundly deaf patients, but their performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception, while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback of different spectral resolutions (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional off-line CI simulation. Perception of a vowel continuum from “HEAD” to “HAD” was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, or the mean and variability of fundamental and formant frequencies). There was no correlation between vowel production and perception. The lack of effect of auditory feedback spectral resolution on vowel production was likely due to the limited exposure of NH subjects to the CI simulation and the limited frequency ranges covered by the sinewave carriers of the CI simulation. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.
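A minimal sketch of a sinewave-carrier CI simulation (channel vocoder) of the kind described above: band-pass the input into N channels, extract each channel's temporal envelope, and use it to modulate a sine tone at the channel's center frequency. The corner frequencies, filter orders, and envelope cutoff below are illustrative choices, not the study's parameters.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def sine_vocoder(x, fs, n_channels=8, f_lo=200.0, f_hi=7000.0, env_cutoff=50.0):
    # Assumes f_hi is below the Nyquist frequency (fs / 2).
    # Logarithmically spaced channel edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    b_env, a_env = butter(2, env_cutoff / (fs / 2.0), btype="low")
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="bandpass")
        band = filtfilt(b, a, x)
        env = filtfilt(b_env, a_env, np.abs(hilbert(band)))    # smoothed Hilbert envelope
        carrier = np.sin(2.0 * np.pi * np.sqrt(lo * hi) * t)   # sine at the channel's geometric center
        out += np.maximum(env, 0.0) * carrier
    return out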
Date Created
2019-05
Agent