Wednesday, November 28, 2007
Seminar Announcement: Tatiana Pasternak
Prof. Tatiana Pasternak
Department of Neurobiology and Anatomy
University of Rochester Medical Center
Remembering Visual Motion: Cortical Mechanisms
If you are interested in meeting with Prof. Pasternak during her visit, please contact Jayne at jayne.lee@uci.edu.
Date: Monday, December 3rd
Time: 4:00 pm
Place: SSPA 2112
Abstract:
The work in my lab is aimed at examining the circuitry subserving behavioral tasks that require processing and retaining sensory information. The focus is on the link between cortical areas traditionally associated with the processing of visual motion and regions identified with cognitive control of visually guided behaviors. In this talk I will focus on two cortical regions: motion-processing area MT and the prefrontal cortex (PFC), which is strongly associated with cognitive control. I will present evidence that neurons in both areas actively participate in tasks requiring discrimination, retention, and retrieval/comparison of visual motion, tasks that are likely to place demands on both sets of neurons. We recorded the activity of neurons in MT and in PFC while monkeys compared the direction or speed of two random-dot stimuli, sample and test, separated by a memory delay. Many PFC neurons showed robust direction-selective (DS) and speed-selective responses to behaviorally relevant motion, most likely originating in MT. Although there were reliable DS signals in both areas during the memory delay, these signals were largely transient, suggesting that individual neurons in neither area carry sustained memory-related signals. Such signals, however, appear to be represented at the population level, with 10-30% of neurons representing remembered motion throughout the memory delay. In both areas, responses to the test reflect access to the preceding sample direction or speed. This activity arose earlier in MT, suggesting a role in the comparison to the remembered direction. Only in PFC was this effect predictive of the forthcoming decision. These results illustrate the unique contributions of MT and PFC neurons to the task: PFC neurons faithfully reflect task-related information about visual motion and represent decisions that may be based, in part, on MT's comparison between the remembered sample and the test.
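The direction-selective responses mentioned in the abstract are typically quantified with a selectivity index computed from a neuron's mean firing rates to motion in its preferred and opposite directions. The snippet below is a minimal sketch of that kind of calculation on made-up spike rates; it is not the analysis code from the study, and all numbers are purely illustrative.

```python
import numpy as np

def direction_selectivity_index(pref_rates, null_rates):
    """Common DS index: (preferred - null) / (preferred + null)."""
    pref = np.mean(pref_rates)
    null = np.mean(null_rates)
    return (pref - null) / (pref + null)

# Hypothetical trial-by-trial firing rates (spikes/s) for one neuron
rng = np.random.default_rng(0)
pref_rates = rng.normal(40, 5, size=20)  # responses to the preferred direction
null_rates = rng.normal(12, 4, size=20)  # responses to the opposite direction

print(f"DS index: {direction_selectivity_index(pref_rates, null_rates):.2f}")
```

A value near 1 indicates strong direction selectivity; a value near 0 indicates none.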
Relevant papers:
Zaksas D, Pasternak T (2006) Directional Signals in the Prefrontal Cortex and in Area MT during a Working Memory for Visual Motion Task. Journal of Neuroscience 26:11726-11742.
Pasternak T, Greenlee M (2005) Working Memory in Primate Sensory Systems. Nature Reviews Neuroscience 6:97-107.
Friday, November 16, 2007
News Flash: Perceived animacy does not predict neural activity on the posterior STS
"Visual Perception and Neural Correlates of Novel 'Biological Motion' "
John Pyles, Javier Garcia, Don Hoffman & Emily Grossman
Published in Vision Research, September 2007 issue
The human superior temporal sulcus (STS) has been implicated in a wide range of abilities, including visual perception, auditory scene analysis, multisensory integration, attention, and the understanding of social events. To put it mildly, this is a large and functionally complex region of the brain.
Perceived animacy and the perception of biological motion (such as in point-light animations) have both been identified as recruiting neural activity on the posterior extent of the STS. Virtually all studies in both of these areas have used human actions as their stimuli, which confounds visual analysis of the actor with the animacy those actions portray (an impression that is virtually guaranteed by such sequences).
John Pyles led this study, which measured neural activity using a novel stimulus set. These 'Creatures' are artificially evolved animations with unusual body structures and gait styles, yet they are readily perceived as animate beings when viewed in locomotion. Using these stimuli, John showed that the posterior STS is not activated as strongly by the Creatures as it is by human actions, suggesting that animacy alone does not predict neural activity in this region.
In contrast to the STS, a number of ventral temporal brain areas responded quite strongly to these novel 'Creatures'. John is now investigating how these brain structures support the recognition of these stimuli.
For more information, see the paper here.
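For readers unfamiliar with how region-of-interest comparisons like this are typically summarized, here is a minimal sketch: average a response measure (for example, percent signal change) within the posterior STS for each condition and each subject, then compare conditions with a paired test. The numbers below are invented and scipy's paired t-test is just one reasonable choice; this is not the analysis from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject pSTS responses (percent signal change)
human_actions = np.array([0.42, 0.38, 0.51, 0.45, 0.40, 0.47, 0.39, 0.44])
creatures     = np.array([0.28, 0.25, 0.33, 0.30, 0.27, 0.31, 0.24, 0.29])

# Paired comparison across subjects
t, p = stats.ttest_rel(human_actions, creatures)
print(f"human actions: {human_actions.mean():.2f}, creatures: {creatures.mean():.2f}")
print(f"paired t = {t:.2f}, p = {p:.4f}")
```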
Thursday, November 15, 2007
Talk Announcement: Ed Awh
Dr. Ed Awh
Department of Psychology
University of Oregon, Institute of Neuroscience
Complexity, categories, and capacity in visual working memory
Monday, November 19, 2007
4:00pm
SSPA 2112
Abstract
Several paradigms have converged on a capacity limit of about 3-4 items in visual working memory. This limit exhibits robust correlations with a broad range of intelligence measures, motivating an interest in the basic determinants of memory capacity. For example, performance in the change detection paradigm, a widely used measure of capacity, declines as stimulus complexity increases. We have found, however, that increased complexity is typically associated with increased similarity between the potential sample and test items, raising the possibility that change detection with complex objects was limited by the resolution rather than by the number of items represented in working memory. Indeed, when resolution-based errors were minimized by reducing sample/test similarity, capacity estimates for the most complex objects were equivalent to those for the simplest objects (r = .84). This conclusion was also supported via measurements of the CDA (contralateral delay activity) waveform, an event-related potential waveform that provides an online measure of the number of items maintained during the delay period. CDA amplitude was equivalent for simple and complex objects. By contrast, a separate measure of neural activity during the comparison stage of the task was sensitive to object complexity, though uncorrelated with CDA amplitude. Thus, visual working memory represents a fixed number of objects, regardless of complexity. Importantly, analyses of individual differences suggest that limits in the number and resolution of representations in working memory represent distinct aspects of memory ability. Finally, I will present further evidence that this number/resolution dichotomy is useful for understanding how perceptual expertise improves memory performance, the relationship between memory ability and measures of fluid intelligence, and the factors that influence mnemonic resolution in multi-item displays.
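For readers who have not run change-detection experiments: capacity estimates of the kind quoted above are commonly computed with Cowan's K, which corrects the hit rate by the false-alarm rate and scales by set size. The snippet below is a generic illustration with made-up performance numbers, not the authors' analysis code.

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K for single-probe change detection: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical performance at set size 4
print(cowans_k(hit_rate=0.85, false_alarm_rate=0.10, set_size=4))  # 3.0 items
print(cowans_k(hit_rate=0.78, false_alarm_rate=0.12, set_size=4))  # ~2.6 items
```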
UCI Talks at Psychonomics
48th Annual Meeting of the Psychonomic Society
Hyatt Regency Hotel, Long Beach California
Thursday, Nov. 15th - Sunday, Nov. 18th
From George Sperling:
This Friday morning from 8:00a - 10:00a you can hear six papers on attention for free at the 48th Annual Meeting of the Psychonomic Society at the Hyatt Regency Hotel in Long Beach, CA, including one by Sperling, Scofield, & Hsu at 9:20a. Lots more interesting papers to listen to during the next two days. See http://www.psychonomic.org/meet.htm
Talk Announcement: Lisa Jefferies
Dr. Lisa Jefferies
University of British Columbia
Friday, Nov. 16th 10-11am
SSL 337
Title: Temporal dynamics of attentional control: Assessing the rate of "zooming"
The vast amount of visual information in the world necessitates a selective mechanism that limits processing to objects or locations of interest. Visual attention fulfils this selective function, and may be allocated with varying degrees of success over space and time. We propose a qualitative model that accounts for the modulation of the spatial extent of the focus of attention across time, and test that model in a series of experiments. Specifically, the attentional blink (AB) and Lag-1 sparing were employed to test the spatiotemporal modulations of attention. When two targets are presented within a stream of distractor items, identification of the second target is impaired when it follows 100-500 ms after the first target, a phenomenon known as the attentional blink (Raymond, Shapiro & Arnell, 1992). Paradoxically, the second target is sometimes identified quite accurately when it immediately follows the first target (Lag-1 sparing; Potter et al., 1998). Lag-1 sparing always occurs when the two targets appear in the same spatial location (Visser, Bischof, & Di Lollo, 1999), but occurs with spatially separated targets only when the second target falls within the focus of attention (Jefferies et al., 2007). As such, the incidence and magnitude of Lag-1 sparing with spatially separated targets can be used to index changes in the extent of the focus of attention as a function of time. In the current research, we found a progressive, linear transition from Lag-1 sparing to an AB deficit as the stimulus onset asynchrony (SOA) between the targets was increased. This strongly suggests that the spatial extent of the focus of attention varies linearly over time and that the expanding and shrinking of the focus of attention may be analog in nature. Additional experiments manipulating factors such as the spatial separation between the streams and the brightness of the targets provide further tests of the model.
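The attentional-blink measures referred to above are usually summarized as second-target accuracy given a correct first-target report (T2|T1), computed separately for each lag or SOA. The short sketch below shows that bookkeeping on invented trial data; it is not the lab's code and the numbers are meaningless placeholders.

```python
from collections import defaultdict

# Hypothetical trials: (lag, T1_correct, T2_correct)
trials = [
    (1, True, True), (1, True, True), (1, True, False), (1, False, True),
    (3, True, False), (3, True, False), (3, True, True), (3, True, False),
    (7, True, True), (7, True, True), (7, True, False), (7, True, True),
]

# lag -> [number of T2-correct trials, number of T1-correct trials]
counts = defaultdict(lambda: [0, 0])
for lag, t1_correct, t2_correct in trials:
    if t1_correct:               # condition on a correct T1 report
        counts[lag][1] += 1
        counts[lag][0] += int(t2_correct)

for lag in sorted(counts):
    t2_hits, t1_total = counts[lag]
    print(f"Lag {lag}: T2|T1 accuracy = {t2_hits / t1_total:.2f}")
```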
Tuesday, November 6, 2007
Talk Announcement: Chi-Hung Juan
Dr. Chi-Hung Juan
Institute of Cognitive Neuroscience
National Central University, Taiwan
"Probing temporal and causal involvements of frontal eye fields in visual selection and saccade preparation with microstimulation and TMS."
Friday, Nov. 9th
12-1pm
SSPB 2214
The premotor theory of attention suggests that target processing and the generation of a saccade to the target are interdependent. Temporally precise microstimulation and transcranial magnetic stimulation (TMS) were delivered over the monkey and human frontal eye fields, the area most frequently linked with the premotor theory in the context of eye movements, while subjects performed a visually instructed pro-/anti-saccade task. Visual analysis and saccade preparation were clearly separated in time, as indicated by the distinct time points at which microstimulation and TMS delivery produced elevated saccadic deviations and latencies. These results show that visual analysis and saccade preparation can be dissociated temporally in the brain, notwithstanding any potential overlap of the anatomical areas involved.
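One way to see how such timing dissociations are read out is to group trials by when the stimulation arrived relative to target onset and compare saccade latencies across those bins; epochs whose perturbation elevates latency (or deviates the saccade) implicate the corresponding process. The sketch below illustrates that bookkeeping with made-up numbers; it does not reflect the actual stimulation parameters or analyses in the study.

```python
import numpy as np

# Hypothetical trials: stimulation onset relative to target onset (ms) and saccade latency (ms)
stim_onsets = np.array([50, 50, 50, 100, 100, 100, 150, 150, 150])
latencies   = np.array([236, 242, 239, 271, 268, 275, 240, 237, 243])

# Mean latency per stimulation-onset bin; an elevated bin marks the perturbed epoch
for onset in np.unique(stim_onsets):
    mean_latency = latencies[stim_onsets == onset].mean()
    print(f"stimulation at {int(onset)} ms: mean latency = {mean_latency:.0f} ms")
```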