Using Acoustic Analysis to Identify Orofacial Myofunctional Disorder in Speakers

Description
The purpose of this study was to explore the relationship between acoustic indicators in speech and the presence of orofacial myofunctional disorder (OMD). This study analyzed the first and second formant frequencies (F1 and F2) of the four corner vowels [/i/, /u/, /æ/ and /ɑ/] found in the spontaneous speech of thirty participants. It was predicted that speakers with OMD would have raised F1 and F2 values because of habitual low and anterior tongue positioning. The study found no statistically significant differences in the formant frequencies. Further inspection of the total vowel space area, however, suggested that OMD speakers had a smaller, more centralized vowel space. We concluded that further study of the total vowel space area of OMD speakers is warranted.
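The total vowel space area referred to above can be computed from the (F1, F2) coordinates of the four corner vowels with the shoelace formula. The sketch below is illustrative, not the study’s analysis code, and the formant values are typical adult values rather than data from this study:

```python
# Quadrilateral vowel space area from (F1, F2) coordinates of the four
# corner vowels, computed with the shoelace formula.
def vowel_space_area(corners):
    """corners: list of (F1, F2) points in order around the quadrilateral."""
    n = len(corners)
    area = 0.0
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Illustrative adult formant values (Hz) for /i/, /ae/, /a/, /u/, in order.
corners = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
print(vowel_space_area(corners))  # area in Hz^2
```

A smaller, more centralized vowel space would show up directly as a smaller area under this measure.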
Date Created
2020-05

Using Transcranial Alternating Current Stimulation to Entrain Cortical Oscillations

Description
Transcranial Current Stimulation (TCS) is a long-established method of modulating neuronal activity in the brain. One form, transcranial alternating current stimulation (tACS), can entrain endogenous oscillations and produce behavioral change. In the present study, we used five stimulation conditions: tACS at three different frequencies (6 Hz, 12 Hz, and 22 Hz), transcranial random noise stimulation (tRNS), and a no-stimulation sham condition. In all stimulation conditions, we recorded electroencephalographic (EEG) data to investigate the link between different tACS frequencies and their effects on brain oscillations. We recruited 12 healthy participants. Each participant completed 30 trials of the stimulation conditions. In a given trial, we recorded brain activity for 10 seconds, stimulated for 12 seconds, and then recorded an additional 10 seconds of brain activity. The difference between the average oscillation power before and after stimulation indicated the change in oscillation amplitude due to the stimulation. Our results showed that the stimulation conditions entrained brain activity in a subgroup of participants.
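The pre/post power comparison described above can be sketched as follows. This is an illustrative reconstruction, not the study’s analysis code: the sampling rate, frequency band, and synthetic signals are all assumptions.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power spectral density in the [lo, hi] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def entrainment_index(pre, post, fs, lo, hi):
    """Post-minus-pre power in the stimulated band; > 0 suggests entrainment."""
    return band_power(post, fs, lo, hi) - band_power(pre, fs, lo, hi)

fs = 250                          # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)    # 10 s windows, as in the protocol
rng = np.random.default_rng(0)
pre = rng.standard_normal(t.size)
post = pre + 0.5 * np.sin(2 * np.pi * 6 * t)  # stronger 6 Hz rhythm after tACS
print(entrainment_index(pre, post, fs, 5, 7) > 0)
```

Comparing this index across the tACS, tRNS, and sham conditions is one way to test whether power changes track the stimulation frequency.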
Date Created
2020-05

Speech Motor Learning Depends on Relevant Auditory Errors

Description
Researchers have long studied the elements of speech and how they work together in the human brain. Auditory feedback, an important aid in speech production, provides information that allows speakers to assess whether the prediction of their speech matches their actual production. The speech motor system uses auditory goals to determine errors in its auditory output during vowel production, and we learn from discrepancies between our predictions and auditory feedback. In this study, we examined error assessment processes by systematically manipulating the correspondence between speech motor outputs and their auditory consequences during speech production. We conducted a study (n = 14 adults) in which participants’ auditory feedback was perturbed to test their learning rate in two conditions. During the trials, participants repeated CVC words and were instructed to prolong the vowel each time. The adaptation trials were used to examine reliance on auditory feedback versus speech prediction by systematically changing the weight of auditory feedback. Participants heard their perturbed feedback through insert earphones in real time. Each speaker’s auditory feedback was perturbed according to task-relevant and task-irrelevant errors, and these perturbations were introduced either gradually or suddenly. We found that adaptation was less extensive with task-irrelevant errors, that adaptation did not saturate significantly in the sudden condition, and that adaptation in the task-relevant condition, which was expected to be more extensive and faster, was instead closer to the rate observed with task-irrelevant perturbations. Although adjustments are still needed, these findings suggest an efficient way for speakers to rely on auditory feedback more than their predictions. Furthermore, this research opens the door to future investigations of adaptation in speech and presents implications for clinical purposes (e.g., speech therapy).
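A learning rate like the one measured above is often quantified by fitting an exponential curve to the trial-by-trial adaptation series. The grid-search fit below is a minimal sketch on synthetic data, not the study’s method (adaptation studies commonly use more elaborate state-space models):

```python
import numpy as np

def adaptation_rate(trial_response):
    """Estimate the per-trial rate r in y = A * (1 - exp(-r * t)) by a
    coarse grid search over r, with A solved in closed form for each r."""
    y = np.asarray(trial_response, float)
    t = np.arange(len(y))
    best = (np.inf, None)
    for r in np.linspace(0.01, 1.0, 200):
        curve = 1.0 - np.exp(-r * t)
        a = (y @ curve) / (curve @ curve)   # least-squares amplitude for this r
        err = np.sum((y - a * curve) ** 2)
        if err < best[0]:
            best = (err, r)
    return best[1]

# Illustrative adaptation series rising toward a 40 Hz asymptote.
t = np.arange(60)
series = 40 * (1 - np.exp(-0.15 * t))
rate = adaptation_rate(series)
```

Comparing fitted rates between the task-relevant and task-irrelevant conditions is one way to make the “faster adaptation” comparison concrete.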
Date Created
2020-05

Specificity of Auditory Modulation during Speech Planning

Description
Previous research has shown that auditory modulation may be affected by pure-tone stimuli played prior to the onset of speech production. In this experiment, we examined the specificity of the auditory stimulus by implementing congruent and incongruent speech sounds in addition to a non-speech sound. Electroencephalography (EEG) data were recorded for eleven adult subjects in both speaking (speech planning) and silent reading (no speech planning) conditions. Data were analyzed both manually and with custom MATLAB code that combined data sets and calculated auditory modulation (suppression). Results showed that P200 modulation was larger for incongruent stimuli than for congruent stimuli; however, this was not the case for N100 modulation. The data for the pure tone could not be analyzed because the intensity of this stimulus was substantially lower than that of the speech stimuli. Overall, the results indicated that the P200 component plays a significant role in processing stimuli and determining their relevance; this result is consistent with the role of the P200 component in high-level analysis of speech and perceptual processing. This experiment is ongoing, and we hope to obtain data from more subjects to support the current findings.
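Auditory modulation (suppression) of an ERP component such as the P200 is typically quantified as the amplitude difference between the silent-reading and speaking conditions within a latency window. The sketch below illustrates the idea on synthetic ERPs; it is not the study’s MATLAB code, and the waveform shapes and windows are assumptions:

```python
import numpy as np

def component_amplitude(erp, times, window):
    """Mean ERP amplitude inside a latency window (seconds)."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return erp[mask].mean()

def modulation(speaking_erp, reading_erp, times, window):
    """Reading-minus-speaking amplitude: positive = suppression while planning."""
    return (component_amplitude(reading_erp, times, window)
            - component_amplitude(speaking_erp, times, window))

# Toy ERPs: an N100 trough and a P200 peak, with the P200 damped when speaking.
fs = 500
times = np.arange(-0.1, 0.5, 1.0 / fs)
p200 = np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
n100 = -np.exp(-((times - 0.1) ** 2) / (2 * 0.02 ** 2))
reading = n100 + p200
speaking = n100 + 0.6 * p200   # P200 suppressed under speech planning
print(modulation(speaking, reading, times, (0.15, 0.25)) > 0)
```

Computing this index separately for congruent and incongruent stimuli yields the kind of P200-modulation comparison reported above.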
Date Created
2020-05

Determining Predictors of Acquisition for /r/ Using Acoustic Signal Processing

Description
This longitudinal study aimed to determine whether significant differences existed between the baseline inaccurate signals of the /r/ phoneme for children who eventually acquire or do not acquire /r/. Seventeen participants ages 5-8 who had not acquired /r/ in any of its allophonic contexts were recorded approximately every 3 months from the age of recruitment until they either acquired /r/ in conversation (80% accuracy) or turned eight years old. The recorded audio files were trimmed and labelled using Praat, and signal processing was used to compare initial and final recordings of three allophonic variations of /r/ (vocalic, prevocalic, postvocalic) for each participant. Differences were described using Mel-log spectral plots. For each age group, initial recordings of participants who eventually acquired /r/ were compared to those of participants who did not. Participants who had not acquired /r/ and had yet to turn eight years old were compared by whether they were perceived to be improving or not improving. Significant differences in Mel-log spectral plots will be discussed, and the implications of baseline differences will be highlighted, specifically with respect to the feasibility of identifying predictive markers for acquisition or non-acquisition of the difficult /r/ phoneme.
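A Mel-log spectrum like those used for the plots above can be computed by passing a windowed frame’s power spectrum through a triangular mel filterbank. The sketch below is illustrative only; the filter count, frame length, and test tone are assumptions, not the study’s settings:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_log_spectrum(frame, fs, n_filters=24):
    """Log energy in triangular mel filters over one windowed frame."""
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(n_fft))) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    # Filter edges equally spaced on the mel scale, mapped back to Hz.
    edges = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2))
    energies = np.zeros(n_filters)
    for i in range(n_filters):
        lo, ctr, hi = edges[i], edges[i + 1], edges[i + 2]
        up = np.clip((freqs - lo) / (ctr - lo), 0, None)
        down = np.clip((hi - freqs) / (hi - ctr), 0, None)
        weights = np.minimum(up, down).clip(0, 1)
        energies[i] = spectrum @ weights
    return np.log(energies + 1e-10)

fs = 16000
t = np.arange(0, 0.032, 1 / fs)            # one 32 ms frame
frame = np.sin(2 * np.pi * 1500 * t)       # a tone near the /r/ F3 region
mels = mel_log_spectrum(frame, fs)
print(mels.argmax())                       # index of the most energetic filter
```

Comparing such vectors between a child’s initial and final recordings is one way to quantify the spectral differences the plots display.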
Date Created
2019-05

The Effect of Spectral Resolution of Auditory Feedback on Speech Production of Normal-­hearing Listeners

Description
Cochlear implants (CIs) successfully restore hearing sensation to profoundly deaf patients, but their performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback of different spectral resolutions (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional off-line CI simulation. Perception of a vowel continuum from “HEAD” to “HAD” was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, or the mean and variability of fundamental and formant frequencies), and there was no correlation between vowel production and perception. The lack of effect of auditory feedback spectral resolution on vowel production was likely due to NH subjects’ limited exposure to CI simulation and the limited frequency ranges covered by the sinewave carriers. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.
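A sinewave CI simulation of the kind described above is essentially a channel vocoder: the signal is split into analysis bands, each band’s temporal envelope is extracted, and the envelopes modulate sine carriers at the band centers. The sketch below is a minimal offline illustration, not the study’s real-time implementation; the band edges and smoothing window are assumptions:

```python
import numpy as np

def sinewave_vocoder(signal, fs, n_channels):
    """N-channel sinewave CI simulation: each band's envelope modulates a
    sine carrier at the band's geometric center frequency."""
    n = len(signal)
    spec = np.fft.fft(signal)
    freqs = np.abs(np.fft.fftfreq(n, 1.0 / fs))
    # Log-spaced analysis bands between 200 Hz and 7000 Hz (assumed range).
    edges = np.geomspace(200.0, 7000.0, n_channels + 1)
    t = np.arange(n) / fs
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-limit by zeroing out-of-band FFT bins (symmetric mask).
        band_spec = np.where((freqs >= lo) & (freqs < hi), spec, 0)
        band = np.fft.ifft(band_spec).real
        # Crude envelope: rectify and smooth with a ~10 ms moving average.
        k = int(0.01 * fs)
        env = np.convolve(np.abs(band), np.ones(k) / k, mode="same")
        out += env * np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
speech_like = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)
for n_ch in (1, 2, 4, 8):          # the four resolutions tested in the study
    sim = sinewave_vocoder(speech_like, fs, n_ch)
```

With 1 channel the output carries almost no spectral detail; with 8 channels the band envelopes begin to approximate the input’s spectral shape, mirroring the perception results above.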
Date Created
2019-05

Learning Rate in Auditory Motor Adaptation

Description
Past studies have shown that auditory feedback plays an important role in maintaining the speech production system. Typically, speakers compensate for auditory feedback alterations when the alterations persist over time (auditory motor adaptation). Our study focused on how to increase the rate of adaptation by using different auditory feedback conditions. For the present study, we recruited a total of 30 participants. We examined auditory motor adaptation after participants completed three conditions: normal speaking, noise-masked speaking, and silent reading. The normal condition was used as a control. In the noise-masked condition, noise was added to the auditory feedback to completely mask speech output. In the silent reading condition, participants were instructed to silently read target words in their heads, then read the words out loud. We found that the learning rate in the noise-masked condition was lower than that in the normal condition. In contrast, participants adapted at a faster rate after they experienced the silent reading condition. Overall, this study demonstrated that adaptation rate can be modified by pre-exposing participants to different types of auditory-motor manipulations.
Date Created
2019-05

Does Auditory Feedback Perturbation Influence Categorical Perception of Vowels?

Description
Speech perception and production are bidirectionally related, and they influence each other. The purpose of this study was to better understand the relationship between the two. It is known that applying auditory perturbations during speech production causes subjects to alter their productions (e.g., change their formant frequencies); in other words, previous studies have examined the effects of altered speech perception on speech production. In this study, by contrast, we examined potential effects of speech production on speech perception. Subjects completed a block of a categorical perception task, followed by a block of a speaking or a listening task, followed by another block of the categorical perception task. Subjects completed three blocks of the speaking task and three blocks of the listening task. In the three blocks of a given task (speaking or listening), auditory feedback was (1) normal, (2) altered to be less variable, or (3) altered to be more variable. Unlike previous studies, we used subjects’ own speech samples to generate the stimuli for the perception task. For each categorical perception block, we calculated each subject’s psychometric function and determined the categorical boundary. The results showed that subjects’ perceptual boundaries remained stable across all conditions and blocks. Overall, our results did not provide evidence for effects of speech production on speech perception.
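A categorical boundary like the one measured above is the point on the stimulus continuum where a fitted psychometric function crosses 50%. The sketch below fits a logistic by grid search to illustrative response proportions; it is not the study’s analysis code, and the continuum steps and proportions are invented:

```python
import numpy as np

def fit_boundary(stimulus_steps, prop_second_vowel):
    """Fit a logistic psychometric function by least squares over a grid
    and return the categorical boundary (the step where p = 0.5)."""
    x = np.asarray(stimulus_steps, float)
    y = np.asarray(prop_second_vowel, float)
    best = (np.inf, None)
    for b in np.linspace(x.min(), x.max(), 201):   # candidate boundaries
        for s in np.linspace(0.2, 5.0, 100):       # candidate slopes
            p = 1.0 / (1.0 + np.exp(-s * (x - b)))
            err = np.sum((y - p) ** 2)
            if err < best[0]:
                best = (err, b)
    return best[1]

# Illustrative 7-step continuum with proportions of "second vowel" responses.
steps = np.arange(1, 8)
props = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])
boundary = fit_boundary(steps, props)
```

Comparing boundaries fitted before and after the speaking or listening blocks is the stability test reported above.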
Date Created
2019-05

Compensatory Responses During Unexpected Vowel Perturbations

Description
During speech, the brain constantly processes and monitors speech output through the auditory feedback loop to ensure correct and accurate speech. If the speech signal is experimentally altered/perturbed while speaking, the brain compensates for the perturbations by changing speech output in the opposite direction of the perturbations. In this study, we designed an experiment that examined compensatory responses to unexpected vowel perturbations during speech. We applied two types of perturbations. In one condition, the vowel /ɛ/ was perturbed toward the vowel /æ/ by simultaneously shifting both the first formant (F1) and the second formant (F2) at 3 different levels (0.5 = small, 1 = medium, and 1.5 = large shifts). In the other condition, the vowel /ɛ/ was perturbed by shifting F1 alone at the same 3 levels. Our results showed a significant perturbation-type effect, with participants compensating more in response to perturbations that shifted /ɛ/ toward /æ/. In addition, we found a significant level effect, with compensatory responses to level 0.5 being significantly smaller than those to levels 1 and 1.5, regardless of the perturbation pathway; responses to levels 1 and 1.5 did not differ. Overall, our results highlight the importance of the auditory feedback loop during speech production and show that the brain is more sensitive to auditory errors that change a vowel category (e.g., /ɛ/ to /æ/).
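A compensatory response like those measured above is conventionally signed so that positive values indicate a production change opposing the applied formant shift. A minimal sketch, with illustrative numbers rather than study data:

```python
def compensation(f1_baseline, f1_trial, f1_shift):
    """Produced F1 change relative to baseline, signed so that positive
    values oppose the direction of the applied perturbation."""
    change = f1_trial - f1_baseline
    direction = 1.0 if f1_shift > 0 else -1.0
    return -direction * change

# Illustrative numbers: a +100 Hz F1 perturbation (toward /ae/) and a
# production that lowers F1 by 30 Hz relative to baseline.
print(compensation(580.0, 550.0, +100.0))  # 30.0 Hz of opposing compensation
```

Averaging this quantity per perturbation type and shift level yields the perturbation-type and level effects reported above.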
Date Created
2019-05

Somatosensory Modulation during Speech Planning

Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This is thought to occur through the internal forward model, which processes an efference copy of the motor command and creates a prediction that is used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception during mixed speaking and silent reading conditions. Participants were asked to judge whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated in the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent reading condition. Perceptual modulation was most pronounced during speech production and showed some attenuation just prior to speech, during the planning period. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
Date Created
2019-05