Exploring the Label Feedback Effect: The Roles of Object Clarity and Relative Prevalence of Target Labels During Visual Search

Description
The label-feedback hypothesis (Lupyan, 2007, 2012) proposes that language modulates low- and high-level visual processing, such as priming visual object perception. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, reducing response times and increasing accuracy. Hebert, Goldinger, and Walenchok (under review) used a modified design to replicate and extend this finding, and concluded that speaking modulates visual search via template integrity. The current series of experiments 1) replicated the work of Hebert et al. with audio stimuli played through headphones instead of self-directed speech, 2) examined the label feedback effect under conditions of varying object clarity, and 3) explored whether the relative prevalence of a target's audio label might modulate the label feedback effect (as in the low prevalence effect; Wolfe, Horowitz, & Kenner, 2005). Paradigms utilized both traditional spatial visual search and rapid serial visual presentation (RSVP). Results substantiated those of previous studies: hearing target names improved performance, even (and sometimes especially) when conditions were difficult or noisy, and the relative prevalence of a target's audio label strongly impacted its perception. The mechanisms of the label feedback effect, namely priming and target template integrity, are explored.
Date Created
2019

Improving sentence comprehension post-stroke using neuroimaging and neuropsychological approaches

Description
Cognitive deficits often accompany language impairments post-stroke. Past research has focused on working memory in aphasia, but attention remains largely underexplored. Therefore, this dissertation first quantifies attention deficits post-stroke before investigating whether preserved cognitive abilities, including attention, can improve auditory sentence comprehension post-stroke. In Experiment 1a, three components of attention (alerting, orienting, executive control) were measured in persons with aphasia and matched controls using visual and auditory versions of the well-studied Attention Network Test. Experiment 1b then explored the neural resources supporting each component of attention in the visual and auditory modalities in chronic stroke participants. The results from Experiment 1a indicate that alerting, orienting, and executive control are uniquely affected by presentation modality. The lesion-symptom mapping results from Experiment 1b associated the left angular gyrus with visual executive control, the left supramarginal gyrus with auditory alerting, and Broca's area (pars opercularis) with auditory orienting attention post-stroke. Overall, these findings indicate that perceptual modality may impact the lateralization of some aspects of attention, and thus that auditory attention may be more susceptible to impairment after a left hemisphere stroke.

Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Therefore, Experiment 2 explored the relationship between cognition, two unique prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results from Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and a specific left fronto-parietal network intact had greater comprehension of sentences spoken with typical sentence prosody. In contrast, participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension of sentences spoken with list prosody. Overall, the results from Experiment 2 indicate that following a left hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
Date Created
2019

Interactions Between Prosody and Cognition During Sentence Comprehension: A Behavioral Study

Description
Previous research has determined that sentence comprehension suffers when an individual's cognitive resources, such as attentional control and working memory, are taxed. This can be done by manipulating the prosody of simple and complex sentences, introducing irregular rhythm and pitch changes into speech. In the present thesis, neurotypical adults were asked to comprehend sentences with normal and monotone prosody in three different versions of a sentence-picture matching task. A no-load version served as a control, while the other two versions taxed cognitive resources. In addition, individuals completed four other tasks known to reliably measure working memory. Our results indicate a possible relationship between working memory and high accuracy on complex sentences spoken with monotone prosody when time constraints are imposed. Collectively, these results may lead to new ways of working in speech therapy with individuals who have suffered a stroke, by better characterizing the cognitive resources taxed in different types of sentence comprehension settings.
Date Created
2019-05

Does Auditory Feedback Perturbation Influence Categorical Perception of Vowels?

Description
Speech perception and production are bidirectionally related: each influences the other. The purpose of this study was to better understand that relationship. It is known that applying auditory perturbations during speech production causes subjects to alter their productions (e.g., change their formant frequencies); in other words, previous studies have examined the effects of altered speech perception on speech production. In this study, by contrast, we examined potential effects of speech production on speech perception. Subjects completed a block of a categorical perception task, followed by a block of a speaking or a listening task, followed by another block of the categorical perception task. Subjects completed three blocks of the speaking task and three blocks of the listening task. In the three blocks of a given task (speaking or listening), auditory feedback was 1) normal, 2) altered to be less variable, or 3) altered to be more variable. Unlike previous studies, we used each subject's own speech samples to generate stimuli for the perception task. For each categorical perception block, we calculated the subject's psychometric function and determined the subject's categorical boundary. The results showed that subjects' perceptual boundaries remained stable across all conditions and blocks. Overall, our results did not provide evidence for effects of speech production on speech perception.
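The boundary estimate described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the study's actual analysis code: it assumes a seven-step stimulus continuum and takes the categorical boundary to be the point where the identification curve crosses 50%, recovered by linear interpolation (a full analysis would typically fit a logistic psychometric function instead).

```python
def categorical_boundary(levels, prop_a):
    """Estimate the category boundary as the stimulus level where the
    proportion of category-'A' responses crosses 50%, via linear
    interpolation between the two flanking continuum steps."""
    points = list(zip(levels, prop_a))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        # A sign change around 0.5 marks the 50% crossing.
        if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None  # identification curve never crosses 50%

# Hypothetical 7-step continuum: proportion of 'A' responses per step.
levels = [1, 2, 3, 4, 5, 6, 7]
props = [0.98, 0.95, 0.90, 0.60, 0.20, 0.05, 0.02]
print(categorical_boundary(levels, props))  # boundary falls between steps 4 and 5
```

A stable boundary across blocks, as reported above, would mean this estimate barely moves between the pre- and post-task perception blocks.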
Date Created
2019-05

The role of primary motor cortex in second language word recognition

Description
The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although recognizing and discriminating non-native speech sounds appears to be an instantiation of listening under difficult circumstances, it is still unknown whether M1 recruitment facilitates second language speech perception. The purpose of this study was to investigate the role of M1 speech motor centers in processing acoustic inputs in the native (L1) and second language (L2), using repetitive Transcranial Magnetic Stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. Performance on a listening word-to-picture matching task was measured before and after real- and sham-rTMS over the lip (orbicularis oris) representation of M1. Vowel Space Area (VSA), obtained from recordings of participants reading a passage in L2 before and after real-rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability among participants in the aftereffect of the rTMS protocol to the lip muscle. Approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potential (MEP) areas, whereas the other 50% showed a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and that relying on grand-average results can obscure important individual differences in rTMS physiological and functional outcomes. Evidence of motor support for word recognition in the L2 was found: participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect produced more accurate responses in L2.
In contrast, no effect of rTMS was found in the L1, where accuracy and speed were very similar after sham- and real-rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS to M1 associated with speech production, supporting its utility as an rTMS aftereffect measure. This result reveals an interesting and novel relation between motor cortex activation and speech measures.
Date Created
2018

The neurobiology of sentence comprehension: an fMRI study of late American Sign Language acquisition

Description
Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether in a hearing speaker or a Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks of English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying normal-hearing late second-language (L2) learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced ASL coursework participated in our functional Magnetic Resonance Imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists.
We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks engaged by ASL processing. Our results suggest a high degree of overlap in sentence processing networks for ASL and English, alongside important differences with regard to the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
Date Created
2016-05

A Functional and Structural MRI Investigation of the Neural Signatures of Dyslexia in Adults

Description
The International Dyslexia Association defines dyslexia as a learning disorder characterized by poor spelling, decoding, and word recognition abilities. There is still no known cause of dyslexia, although it is a very common disability that affects 1 in 10 people. Previous fMRI and MRI research in dyslexia has explored the neural correlates of hemispheric lateralization and phonemic awareness. The present study investigated the underlying neurobiology of five adults with dyslexia compared to age- and sex-matched control subjects using structural and functional magnetic resonance imaging. All subjects completed a large battery of behavioral tasks as part of a larger study and underwent functional and structural MRI acquisition. These data were collected and preprocessed at the University of Washington. Analyses focused on the neural correlates of hemispheric lateralization, letter reversal mistakes, reduced processing speed, and phonemic awareness. There were no significant hemispheric differences between subjects with dyslexia and controls. The subject making the greatest number of letter reversal errors showed cerebellar deactivation during the fMRI language task. Cerebellar white matter volume and premotor cortex surface area were largest in the individual with the slowest tapping reaction time. Phonemic decoding efficiency correlated highly with neural activation in the primary motor cortex during the fMRI motor task (r = 0.6). Findings from the present study suggest that brain regions involved in motor control, such as the cerebellum, premotor cortex, and primary motor cortex, may play a larger role in dyslexia than previously considered. Future studies are needed to further distinguish the role of the cerebellum and other motor regions in the motor control and language processing deficits related to dyslexia.
Date Created
2016-12

The effect of scale and familiarity on the perception of music "dissonance"

Description
Music is part of cultures all over the world and is entrenched in our daily lives, and yet little is known about the neural pathways responsible for how we perceive music. The property of "dissonance" is central to our understanding of the emotional meaning in music, and this study is a preliminary step in understanding how this property of music is perceived. Twenty-four participants with normal hearing listened to melodies and ranked their degree of dissonance. Melodies categorized as "dissonant" according to Western music theory were ranked as significantly more dissonant across the 9 conditions (3 scales: Major, Neapolitan Minor, and Oriental; 3 wrong-note conditions: no wrong notes, diatonic wrong notes, and non-diatonic wrong notes). As expected, the familiar Major scale was identified as more consonant than the other scales across all wrong-note conditions. Notably, a significant interaction was found: diatonic and non-diatonic wrong notes were not perceived differently in either of the unfamiliar scales, Neapolitan Minor and Oriental. This study suggests that musical scale context does influence how we form expectations of music and perceive dissonance. Future studies are necessary to understand the mechanisms by which scales drive these expectations.
Date Created
2016-12

Is Cognitive Control Reliable? When means are not enough

Description
Most theories of cognitive control assume goal-directed behavior takes the form of a performance monitor-executive function-action loop. Recent theories focus on how a single performance monitoring mechanism recruits executive function - dubbed single-process accounts. Namely, the conflict-monitoring hypothesis proposes that a single performance monitoring mechanism, housed in the anterior cingulate cortex, recruits executive functions for top-down control. This top-down control manifests as trial-to-trial micro-adjustments to the speed and accuracy of responses. If these effects are produced by a single performance monitoring mechanism, then the size of these sequential trial-to-trial effects should be correlated across tasks. To this end, we conducted a large-scale (N = 125) individual differences experiment examining whether two sequential effects - the Gratton effect and the error-related slowing effect - are correlated across Simon, Flanker, and Stroop tasks. We found only weak correlations for these effects across tasks, which is inconsistent with single-process accounts.
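The across-task test described above comes down to correlating each participant's sequential-effect size between task pairs. A minimal sketch of that computation follows; the per-subject effect sizes are hypothetical illustrations, not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-subject Gratton effect sizes (ms) in two tasks.
# Under a single-process account these should correlate strongly;
# a weak correlation argues against a shared monitoring mechanism.
simon_gratton = [32, 18, 45, 27, 39, 12, 30]
flanker_gratton = [15, 41, 22, 36, 11, 44, 25]
print(pearson_r(simon_gratton, flanker_gratton))
```

In practice the same computation would be repeated for each task pair and each sequential effect, with the reliability of each effect measure bounding how large the cross-task correlation can be.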
Date Created
2015-12

Partially Overlapping Sensorimotor Networks Underlie Speech Praxis and Verbal Short-Term Memory: Evidence From Apraxia of Speech Following Acute Stroke

Description

We tested the hypothesis that motor planning and programming of speech articulation and verbal short-term memory (vSTM) depend on partially overlapping networks of neural regions. We evaluated this proposal by testing 76 individuals with acute ischemic stroke for impairment in motor planning of speech articulation (apraxia of speech, AOS) and vSTM within the first day of stroke, before the opportunity for recovery or reorganization of structure-function relationships. We also evaluated areas of both infarct and low blood flow that might have contributed to AOS or impaired vSTM in each person. We found that AOS was associated with tissue dysfunction in motor-related areas (posterior primary motor cortex, pars opercularis; premotor cortex, insula) and sensory-related areas (primary somatosensory cortex, secondary somatosensory cortex, parietal operculum/auditory cortex), while impaired vSTM was associated with primarily motor-related areas (pars opercularis and pars triangularis, premotor cortex, and primary motor cortex). These results are consistent with the hypothesis, also supported by functional imaging data, that both speech praxis and vSTM rely on partially overlapping networks of brain regions.

Date Created
2014-08-25