Exploring the Label Feedback Effect: The Roles of Object Clarity and Relative Prevalence of Target Labels During Visual Search

Description
The label-feedback hypothesis (Lupyan, 2007, 2012) proposes that language modulates low- and high-level visual processing, such as priming visual object perception. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, reducing response times and increasing accuracy. Hebert, Goldinger, and Walenchok (under review) used a modified design to replicate and extend this finding, and concluded that speaking modulates visual search via template integrity. The current series of experiments (1) replicated the work of Hebert et al. with audio stimuli played through headphones instead of self-directed speech, (2) examined the label feedback effect under conditions of varying object clarity, and (3) explored whether the relative prevalence of a target's audio label might modulate the label feedback effect (as in the low-prevalence effect; Wolfe, Horowitz, & Kenner, 2005). Paradigms utilized both traditional spatial visual search and rapid serial visual presentation (RSVP). Results substantiated those of previous studies—hearing target names improved performance, even (and sometimes especially) when conditions were difficult or noisy, and the relative prevalence of a target's audio label strongly impacted its perception. The mechanisms of the label feedback effect—namely, priming and target template integrity—are explored.
Date Created
2019

Investigating the Relationship Between Visual Confirmation Bias and the Low-Prevalence Effect in Visual Search

Description
Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate the visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under conditions of equal, low, and high cued-target frequency. Across experiments, I found that: (1) People are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005). (2) However, they are still biased to seek cue-colored stimuli, even when such targets are rare. (3) Regardless of target prevalence, people employ strategies when search is made sufficiently burdensome with distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), whereas the visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
Date Created
2018

Isolating Neural Reward-Related Responses via Pupillometry

Description
Recent research has shown that reward-related stimuli capture attention in an automatic and involuntary manner, an effect termed reward salience (Le Pelley, Pearson, Griffiths, & Beesley, 2015). Although patterns of oculomotor behavior have been examined in recent experiments, questions surrounding a potential neural signal of reward remain. Consequently, this study used pupillometry to investigate how reward-related stimuli affect pupil size and attention. Across three experiments, response time, accuracy, and pupil size were measured as participants searched for targets among distractors. Participants were informed that singleton distractors indicated the magnitude of a potential gain/loss available in a trial. Two visual search conditions were included to manipulate ongoing cognitive demands and isolate reward-related pupillary responses. Although the optimal strategy was to perform quickly and accurately, participants were slower and less accurate in high-magnitude trials. The data suggest that attention is automatically captured by potential loss, even when this capture runs counter to current task goals. Regarding a pupillary response, patterns of pupil size were inconsistent with our predictions across the visual search conditions. We hypothesized that if pupil dilation reflected a reward-related reaction, pupil size would vary as a function of both the presence of a reward and its magnitude. Moreover, we predicted that this pattern would be more apparent in the easier search condition (i.e., cooperation visual search), because the signal of available reward was still present but the ongoing attentional demands were significantly reduced in comparison to the more difficult search condition (i.e., conflict visual search). In contrast to our predictions, pupil size was more closely related to ongoing cognitive demands, as opposed to affective factors, in cooperation visual search. Surprisingly, pupil size in response to signals of available reward was better explained by affective, motivational, and emotional influences than by ongoing cognitive demands in conflict visual search. The current research suggests that, similar to recent findings involving locus coeruleus–norepinephrine (LC-NE) activity (Aston-Jones & Cohen, 2005; Bouret & Richmond, 2009), pupillometry may be used to assess more specific areas of cognition, such as motivation and perception of reward. However, additional research is needed to better understand this unexpected pattern of pupil size.
Date Created
2017

Eye movements and the label feedback effect: speaking modulates visual search, but probably not visual perception

Description
The label-feedback hypothesis (Lupyan, 2007) proposes that language can modulate low- and high-level visual processing, such as “priming” a visual object. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter reaction times (RTs) and higher accuracy. However, a design limitation made their results challenging to assess. This study evaluated whether self-directed speech influences target locating (i.e., attentional guidance) or target identification after location (i.e., decision time), testing whether the label feedback effect reflects changes in visual attention or some other mechanism (e.g., template maintenance in working memory). Across three experiments, search RTs and eye movements were analyzed from four within-subject conditions: people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names. Speaking target names weakly facilitated visual search, whereas speaking irrelevant names strongly inhibited it. The most parsimonious account is that language affects target maintenance during search, rather than visual perception.
Date Created
2016

Executive function and language control in bilinguals with a history of mild traumatic brain injury

Description
Adults with a history of traumatic brain injury (TBI) often show deficits in executive functioning, which includes the ability to inhibit, switch, and attend to task-relevant information. These abilities are also essential for language processing in bilinguals, who constantly inhibit and switch between languages. Currently, there are no data regarding the effect of TBI on executive function and language processing in bilinguals. This study used behavioral and eye-tracking measures to examine the effect of mild traumatic brain injury (mTBI) on executive function and language processing in Spanish-English bilinguals. In Experiment 1, thirty-nine healthy bilinguals completed a variety of executive function and language processing tasks. The primary executive function and language processing tasks were paired with a cognitive load task intended to simulate mTBI. In Experiment 2, twenty-two bilinguals with a history of mTBI and twenty healthy control bilinguals completed the same executive function measures and language processing tasks. The results revealed that bilinguals with a history of mTBI show deficits in specific executive functions and have higher rates of language processing deficits than healthy control bilinguals. Additionally, behavioral and eye-tracking data suggest that these language processing deficits are related to underlying executive function abilities. This study also identified a subset of bilinguals who may be at greater risk of language processing deficits following mTBI. The findings of this study have a direct impact on the identification of executive function deficits and language processing deficits in bilinguals with a history of mTBI.
Date Created
2015

Categorical contextual cueing in visual search

Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster at finding target stimuli in these repeated contexts over time (Chun & Jiang, 1998, 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun & Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations, in order to rule out implicit learning of repeated target locations. Results were suggestive that participants implicitly learned the target-predictive categories of distractors and used this information during search, although the effect failed to reach statistical significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Date Created
2014

Investigating the influence of top-down mechanisms on hemispheric asymmetries in verbal memory

Description
It is commonly known that the left hemisphere (LH) of the brain is more efficient than the right hemisphere (RH) in the processing of verbal information. One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the left hemisphere. Most evidence for this comes from hemispheric semantic priming, though fewer studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigations is to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory, and to determine the specific nature of the hypothesized top-down mechanisms. Five experiments were conducted to explore the influence of top-down mechanisms on hemispheric asymmetries in verbal memory. Experiments 1 and 2 used item-method directed forgetting to examine maintenance and inhibition mechanisms. In Experiment 1, participants were cued to remember or forget certain words, and cues were presented simultaneously with or after the presentation of target words. In Experiment 2, participants were again cued to remember or forget words, but each word was repeated once or four times. Experiments 3 and 4 examined the influence of cognitive load on hemispheric asymmetries in true and false memory. In Experiment 3, cognitive load was imposed during memory encoding, while in Experiment 4, cognitive load was imposed during memory retrieval. Finally, Experiment 5 investigated the association between controlled processing in hemispheric semantic priming and the top-down mechanisms used for hemispheric verbal memory. Across all experiments, divided visual field presentation was used to probe verbal memory in the cerebral hemispheres. Results from all experiments revealed several important findings. First, top-down mechanisms used by the LH are primarily used to facilitate verbal processing, but they also operate in a domain-general manner in the face of increasing processing demands. Second, evidence indicates that the RH uses top-down mechanisms minimally, and processes verbal information in a more bottom-up manner. These data help clarify the nature of top-down mechanisms used in hemispheric memory and language processing, and build upon current theories that attempt to explain hemispheric asymmetries in language processing.
Date Created
2013

Target "templates": how the precision of mental representations affects attentional guidance and decision-making in visual search

Description
When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine whether they have found what they are searching for. However, unlike in laboratory experiments, searchers in the real world rarely have perfect knowledge regarding the appearance of their target. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects the ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous features to the template that were unhelpful. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
Date Created
2013

Source memory revealed through eye movements and pupil dilation

Description
Current theoretical debate, crossing the bounds of memory theory and mental imagery, surrounds the role of eye movements in successful encoding and retrieval. Although the eyes have been shown to revisit previously viewed locations during retrieval, the functional role of these saccades is not known. Understanding the potential role of eye movements may help address classic questions in recognition memory. Specifically, are episodic traces rich and detailed, characterized by a single strength-driven recognition process, or are they better described by two separate processes, one for vague information and one for the retrieval of detail? Three experiments are reported, in which participants encoded audio-visual information while completing controlled patterns of eye movements. By presenting information in four sources (i.e., voices), assessments of specific and partial source memory were measured at retrieval. Across experiments, participants' eye movements at test were manipulated: Experiment 1 allowed free viewing, Experiment 2 required externally cued fixations to previously relevant (or irrelevant) screen locations, and Experiment 3 required externally cued new or familiar oculomotor patterns to multiple screen locations in succession. Although eye movements were spontaneously reinstated when gaze was unconstrained during retrieval (Experiment 1), externally cueing participants to re-engage in fixations or oculomotor patterns from encoding (Experiments 2 and 3) did not enhance retrieval. Across all experiments, participants' memories were well described by signal-detection models of memory. Source retrieval was characterized by a continuous process, with evidence that source retrieval occurred following item memory failures, and additional evidence that participants partially recollected source in the absence of specific item retrieval. Pupillometry provided an unbiased metric by which to compute receiver operating characteristic (ROC) curves, which were consistently curvilinear (but linear in z-space), supporting signal-detection predictions over those from dual-process theories. Implications for theoretical views of memory representations are discussed.
Date Created
2012