Exploring the Label Feedback Effect: The Roles of Object Clarity and Relative Prevalence of Target Labels During Visual Search

Description
The label-feedback hypothesis (Lupyan, 2007, 2012) proposes that language modulates low- and high-level visual processing, such as priming visual object perception. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, reducing response times and increasing accuracy. Hebert, Goldinger, and Walenchok (under review) used a modified design to replicate and extend this finding, concluding that speaking modulates visual search via target template integrity. The current series of experiments (1) replicated the work of Hebert et al. with audio stimuli played through headphones instead of self-directed speech, (2) examined the label feedback effect under conditions of varying object clarity, and (3) explored whether the relative prevalence of a target's audio label might modulate the label feedback effect (as in the low-prevalence effect; Wolfe, Horowitz, & Kenner, 2005). Paradigms utilized both traditional spatial visual search and rapid serial visual presentation (RSVP). Results substantiated those of previous studies: hearing target names improved performance, even (and sometimes especially) when conditions were difficult or noisy, and the relative prevalence of a target's audio label strongly affected its perception. The mechanisms of the label feedback effect, namely priming and target template integrity, are explored.
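To make the prevalence manipulation concrete, the Python sketch below shows one plausible way such trials could be scheduled: each trial pairs an audio label with a search target, and the label's prevalence is the proportion of trials on which it names the target. This is an illustration only; the object names, trial counts, and prevalence rates are hypothetical assumptions, not the materials or code used in these experiments.

import random

def make_trials(n_trials, label_prevalence, objects=("cat", "shoe", "lamp", "drum")):
    """Return trials in which `label_prevalence` is the probability that the
    spoken label names the upcoming search target."""
    trials = []
    for _ in range(n_trials):
        target = random.choice(objects)
        if random.random() < label_prevalence:
            label = target  # label names the target (a "valid" audio cue)
        else:
            label = random.choice([o for o in objects if o != target])  # mismatching label
        trials.append({"audio_label": label, "target": target,
                       "label_matches_target": label == target})
    return trials

if __name__ == "__main__":
    low_prevalence = make_trials(200, label_prevalence=0.1)   # label rarely valid
    high_prevalence = make_trials(200, label_prevalence=0.9)  # label usually valid
    print(sum(t["label_matches_target"] for t in low_prevalence),
          sum(t["label_matches_target"] for t in high_prevalence))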
Date Created
2019

Divided attention selectively impairs value-directed encoding

Description
The present study examined the effect of value-directed encoding on recognition memory and how various divided attention tasks at encoding alter value-directed remembering. In the first experiment, participants encoded words that were assigned either high or low point values in multiple study-test phases. The points corresponded to the value the participants could earn by successfully recognizing the words in an upcoming recognition memory task. Importantly, participants were instructed that their goal was to maximize their score in this memory task. The second experiment was modified such that, while studying the words, participants simultaneously completed a divided attention task (either articulatory suppression or random number generation). The third experiment used a non-verbal tone-detection divided attention task (easy or difficult versions). Subjective states of recollection (i.e., “Remember”) and familiarity (i.e., “Know”) were assessed at retrieval in all experiments. In Experiment 1, high value words were recognized better than low value words, and this difference was primarily driven by increases in “Remember” responses, with no difference in “Know” responses. In Experiment 2, the pattern of subjective judgment results from the articulatory suppression condition replicated Experiment 1. However, in the random number generation condition, the effect of value on recognition memory was lost. This same pattern of results was found in Experiment 3, which implemented a different variant of the divided attention task. Overall, these data suggest that executive processes are used when encoding valuable information and that value-directed improvements to memory are not merely the result of differential rehearsal.
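As an illustration of the value-directed encoding procedure, the following Python sketch assigns high or low point values to study words and scores a recognition test so that only correctly recognized words earn their points. The word list, point values, and responses are hypothetical; this is a sketch of the logic described above, not the study's software.

import random

def build_study_list(words, high_value=10, low_value=1):
    """Randomly assign half of the study words a high point value and half a low value."""
    shuffled = random.sample(words, len(words))
    half = len(shuffled) // 2
    values = {w: high_value for w in shuffled[:half]}
    values.update({w: low_value for w in shuffled[half:]})
    return values

def score_recognition(study_values, recognized):
    """Sum the point values of the studied words the participant correctly called 'old'."""
    return sum(value for word, value in study_values.items() if word in recognized)

if __name__ == "__main__":
    study = build_study_list(["apple", "river", "candle", "tiger", "anchor", "violin"])
    recognized = {"apple", "candle", "anchor", "violin"}  # hypothetical hits
    print("Points earned:", score_recognition(study, recognized))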
Date Created
2019

Investigating the Relationship Between Visual Confirmation Bias and the Low-Prevalence Effect in Visual Search

Description
Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate the visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under conditions of equal, low, and high cued-target frequency. Across experiments, I found that: (1) People are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005). (2) However, they are still biased to seek cue-colored stimuli, even when such targets are rare. (3) Regardless of target prevalence, people employ strategies when search is made sufficiently burdensome with distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), while the visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
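The cued-target frequency manipulation can be pictured with a short Python sketch. It assumes a simple two-color design and varies only the proportion of target-present trials on which the target carries the cued color; the colors, trial counts, and rates are illustrative assumptions, not the experiments' actual parameters.

import random

def cued_target_trials(n_trials, cued_target_rate, colors=("red", "green")):
    """Generate target-present trials; `cued_target_rate` is the proportion of
    trials on which the target carries the cued color."""
    cue = colors[0]
    trials = []
    for _ in range(n_trials):
        target_color = cue if random.random() < cued_target_rate else colors[1]
        trials.append({"cue_color": cue, "target_color": target_color,
                       "target_matches_cue": target_color == cue})
    return trials

if __name__ == "__main__":
    for condition, rate in (("low", 0.2), ("equal", 0.5), ("high", 0.8)):
        trials = cued_target_trials(300, rate)
        observed = sum(t["target_matches_cue"] for t in trials) / len(trials)
        print(f"{condition} cued-target frequency: observed proportion ~ {observed:.2f}")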
Date Created
2018

Perturbations in The Arrow of Time: Computational and Procedural Dissociations of Timing and Non-Timing Processes

Description
Timing performance is sensitive to fluctuations in time and motivation; thus, interval timing and motivation are either inseparable or conflated processes. A behavioral systems model (e.g., Timberlake, 2000) of timing performance (Chapter 1) suggests that timing performance in externally-initiated (EI) procedures conflates behavioral modes differentially sensitive to motivation, but that response-initiated (RI) procedures potentially dissociate these behavioral modes. That is, timing performance in RI procedures is expected not to conflate these behavioral modes. According to the discriminative RI hypothesis, as initiating-responses become progressively more discriminable from target responses, initiating-responses increasingly dissociate interval timing and motivation. Rats were trained in timing procedures in which a switch from a Short to a Long interval indexes timing performance (a latency-to-switch, LTS), and were then challenged with pre-feeding and extinction probes. In experiments 1 (Chapter 2) and 2 (Chapter 3), the discriminability of initiating-responses was varied as a function of time, location, and form for rats trained in a switch-timing procedure. In experiment 3 (Chapter 4), the generalizability of the discriminative RI hypothesis was evaluated in rats trained in a temporal bisection procedure. In experiment 3, but not in experiments 1 and 2, RI enhanced temporal control of LTSs relative to EI. In experiments 1 and 2, the robustness of LTS medians to pre-feeding, but not to extinction, increased with the discriminability of initiating-responses from target responses. In experiment 3, the mean LTS was robust to pre-feeding in both EI and RI. In all three experiments, pre-feeding increased LTS variability in both EI and RI. These results provide moderate support for the discriminative RI hypothesis, indicating that initiating-responses selectively and partially dissociate interval timing and motivation processes. Implications for the study of cognition and motivation processes are discussed (Chapter 5).
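A minimal Python sketch with simulated numbers illustrates the two distributional properties at issue: the median LTS and its variability, the aspects the pre-feeding and extinction probes were said to affect. The values below are hypothetical (assumed to be in seconds) and do not reproduce the experiments' data or analysis code.

import random
import statistics

def summarize_lts(latencies):
    """Median (central tendency) and interquartile range (variability) of LTS values."""
    q1, _, q3 = statistics.quantiles(latencies, n=4)
    return {"median": round(statistics.median(latencies), 2), "iqr": round(q3 - q1, 2)}

if __name__ == "__main__":
    random.seed(1)
    baseline = [random.gauss(8.0, 1.5) for _ in range(60)]     # seconds, hypothetical
    pre_feeding = [random.gauss(8.2, 2.5) for _ in range(60)]  # similar median, wider spread
    print("baseline:   ", summarize_lts(baseline))
    print("pre-feeding:", summarize_lts(pre_feeding))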
Date Created
2018

Isolating Neural Reward-Related Responses via Pupillometry

Description
Recent research has shown that reward-related stimuli capture attention in an automatic and involuntary manner, a phenomenon termed reward salience (Le Pelley, Pearson, Griffiths, & Beesley, 2015). Although patterns of oculomotor behavior have been examined in recent experiments, questions surrounding a potential neural signal of reward remain. Consequently, this study used pupillometry to investigate how reward-related stimuli affect pupil size and attention. Across three experiments, response time, accuracy, and pupil size were measured as participants searched for targets among distractors. Participants were informed that singleton distractors indicated the magnitude of a potential gain/loss available in a trial. Two visual search conditions were included to manipulate ongoing cognitive demands and isolate reward-related pupillary responses. Although the optimal strategy was to perform quickly and accurately, participants were slower and less accurate in high magnitude trials. The data suggest that attention is automatically captured by potential loss, even when this runs counter to current task goals. Regarding a pupillary response, patterns of pupil size were inconsistent with our predictions across the visual search conditions. We hypothesized that if pupil dilation reflected a reward-related reaction, pupil size would vary as a function of both the presence of a reward and its magnitude. Moreover, we predicted that this pattern would be more apparent in the easier search condition (i.e., cooperation visual search), because the signal of available reward was still present but the ongoing attentional demands were significantly reduced in comparison to the more difficult search condition (i.e., conflict visual search). In contrast to our predictions, pupil size was more closely related to ongoing cognitive demands, as opposed to affective factors, in cooperation visual search. Surprisingly, pupil size in response to signals of available reward was better explained by affective, motivational, and emotional influences than by ongoing cognitive demands in conflict visual search. The current research suggests that, similar to recent findings involving LC-NE activity (Aston-Jones & Cohen, 2005; Bouret & Richmond, 2009), pupillometry may be used to assess more specific areas of cognition, such as motivation and the perception of reward. However, additional research is needed to better understand this unexpected pattern of pupil size.
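As a generic illustration of how trial-level pupil measures are commonly derived (not the study's actual preprocessing pipeline), the Python sketch below baseline-corrects a single trial's pupil trace by subtracting the mean of a pre-stimulus window before averaging the evoked response; the sample counts and millimeter values are hypothetical.

from statistics import mean

def baseline_corrected_pupil(trace, baseline_samples=50):
    """Subtract the mean of the first `baseline_samples` samples (pre-stimulus
    baseline) from the rest of the trial's pupil trace."""
    baseline = mean(trace[:baseline_samples])
    return [sample - baseline for sample in trace[baseline_samples:]]

if __name__ == "__main__":
    # Hypothetical trial: a flat 3.00 mm baseline followed by a slow dilation.
    trial = [3.00] * 50 + [3.00 + 0.002 * i for i in range(200)]
    evoked = baseline_corrected_pupil(trial)
    print("mean evoked dilation (mm):", round(mean(evoked), 3))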
Date Created
2017

Eye movements and the label feedback effect: speaking modulates visual search, but probably not visual perception

Description
The label-feedback hypothesis (Lupyan, 2007) proposes that language can modulate low- and high-level visual processing, such as “priming” a visual object. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter reaction times (RTs) and higher accuracy. However, a design limitation made their results challenging to assess. This study evaluated whether self-directed speech influences target locating (i.e., attentional guidance) or target identification after location (i.e., decision time), testing whether the label feedback effect reflects changes in visual attention or some other mechanism (e.g., template maintenance in working memory). Across three experiments, search RTs and eye movements were analyzed from four within-subject conditions, in which people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names. Speaking target names weakly facilitates visual search, but speaking different names strongly inhibits search. The most parsimonious account is that language affects target maintenance during search, rather than visual perception.
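The guidance/decision distinction can be expressed with a small Python sketch. Assuming each trial provides timestamps for search onset, the first fixation on the target, and the manual response (the field names below are hypothetical), the RT splits into an attentional-guidance component and a decision component; this is an illustration of the logic, not the dissertation's analysis code.

def decompose_rt(trial):
    """Split one trial's RT into guidance (search onset to first target fixation)
    and decision (first target fixation to manual response) components, in ms."""
    guidance = trial["first_target_fixation"] - trial["onset"]
    decision = trial["response"] - trial["first_target_fixation"]
    return {"guidance_ms": guidance, "decision_ms": decision, "rt_ms": guidance + decision}

if __name__ == "__main__":
    # Hypothetical timestamps (ms from trial onset).
    print(decompose_rt({"onset": 0, "first_target_fixation": 740, "response": 1120}))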
Date Created
2016