Wednesday, December 19, 2007

Prof Little Helper

Ever need a little pick-me-up? I certainly do, and in fact, I’m sitting here drinking a cup of coffee as I write. In the wake of the steroids scandal in baseball and the increasing prevalence of drugs in the classroom (e.g. for ADD), an important debate is brewing on the role of pharmaceuticals in our daily life. A recent short article in Nature by Barbara Sahakian and Sharon Morein-Zamir outlines several issues related to the increasing use of stimulants to improve memory and alertness. The most striking point is that when we talk about taking a pill to achieve a cognitive benefit, it seems somehow different from drinking a cup of espresso. Interesting…

At any rate, this article is worth checking out because it calls attention to the blurry line between ‘cheating’ and taking a reasonable step to ensure maximum cognitive capacity. Clearly defining this line is critical as the next generation of school children comes of age in an era of widespread standardized testing: who can afford to fall a little behind or go at their own pace anymore?

Tuesday, December 18, 2007

CCNS faculty blog article on Scientific American

Scientific American runs an interesting blog called Mind Matters. It contains news and comments on neuroscience and psychology, with many entries written by researchers in the field. The most recent entry, on mirror neurons, was written by CCNS director Greg Hickok. Click here to check it out!

http://science-community.sciam.com/thread.jspa?threadID=300005636

Friday, December 14, 2007

Congratulations, Kevin!

Congratulations to Dr. Kevin Smith for successfully defending his thesis, "Is there an auditory 'where' stream?" Great work!

Thursday, December 6, 2007

Interesting upcoming meeting on vision and memory

COGNITIVE NEUROSCIENCE OF VISUAL KNOWLEDGE: WHERE VISION MEETS MEMORY

Sponsored by the American Psychological Association, Tufts University, and the Charles River Association for Memory
Dates: Thurs, May 29 - Sat, May 31, 2008
Location: Tufts University in Medford, MA

March 31, 2008: Deadline for early registration

How can people interact appropriately with and understand the world they see around them? Research suggests that prior knowledge about the world influences visual perception at both conscious and non-conscious levels. Emerging research on the neural basis of visual knowledge has begun to synthesize ideas from the fields of vision and of learning and memory.
A group of twelve speakers has been carefully selected from the fields of Cognitive Neuroscience, Cognitive Psychology, Neurobiology, and Computational Modeling to discuss vision and memory, two important fields of Psychology that have proceeded largely in parallel. The goal of the conference is to enable interactions among cognitive psychologists, cognitive neuroscientists, and computational modelers who study the neural basis of vision and memory in humans and animals and who develop theories of visual knowledge through modeling. This conference will serve not only to facilitate the cross-pollination of ideas among scientists in each field but also to promote the emergence of a new field of visual knowledge that incorporates key ideas from these established research domains. For more information about this conference, and to register, please go to http://ase.tufts.edu/psychology/conference/

Wednesday, December 5, 2007

Modularity of perception

Well over 100 years ago, scientists realized that damage to specific brain regions resulted in specific behavioral deficits. For example, damage to left frontal cortex is often associated with impaired language abilities. Observations of this kind led to the hypothesis that each chunk of cortex performs a specific task (often referred to as the ‘modularity’ hypothesis). In contrast, others suggested that different regions of the cortex are not specialized at all – that all regions participate in all aspects of cognition. Time has taught us that both of these extreme views are probably incorrect. We would quickly run out of space in our head if we dedicated a chunk of cortex to each task that we needed to perform. On the other hand, given the knowledge that damage to certain brain regions leads to very specific behavioral deficits, we must acknowledge that some specialization occurs.

How do we reconcile these two points of view? Recently, we investigated the issue of modularity using functional magnetic resonance imaging (or ‘fMRI’), a method that allows us to indirectly measure neural activity in humans (see article linked below). We focused on visual information processing, since we know quite a bit about the parts of the brain that are responsible for sight. Light comes into the eye, where it is converted into a series of electrical impulses by the retina (a process called ‘transduction’). These electrical impulses are then passed from neuron to neuron until they reach a region of cortex at the very back of the head that is referred to as V1. In V1, neurons respond to simple features in the environment, such as the orientation of edges and different colors. Neurons in V1 then pass along information to other visual areas for further processing. There are actually more than 30 visual areas that are involved in the process of analyzing visual inputs, and each one seems to contribute some unique bit of information to support perception. For example, area V4 – which is a few steps up the hierarchy from V1 – registers information about simple shapes, area MT registers the direction of moving objects, and some later regions register the identity of objects (such as faces).

On the surface, this functional specialization seems to support the ‘modular’ account of brain organization; however, no single visual area can support perception without working in concert with other areas. To give an extreme example, suppose the visual system has a module that only processes information about color. Now, suppose someone suffered damage to their eyes and could no longer transduce light coming into their retina. Obviously, this person wouldn’t be able to perceive colors, even though the color module was perfectly intact. Thus, functionally specialized brain regions cannot operate in isolation; some regions convert light into neural activity, some supply information about edge orientations, some about color, some about motion, and so on. Eventually this information is combined to create a coherent perceptual representation of the surrounding environment.

Even though no single area in isolation can give rise to perception, clearly not all areas are created equal. For example, in our study we examined brain activity in area MT while people watched videos of moving objects. Obviously, the ability of MT neurons to respond to motion depends on input provided by the eyes and by earlier visual areas. However, our experiment found that decisions about the perceived direction of motion are based primarily on the activity of MT neurons, even though activity in other areas is necessary to achieve the final overall percept. According to this account, modularity arises primarily when we need to make a judgment about some attribute of our environment. If we need to know about motion, we query the activity of neurons in MT; if we need to know about color, we might query the activity of neurons in V4; and so on. This viewpoint suggests that most cognitive operations rely on neural activity in a series of distinct cortical areas; however, the ultimate output of the process may be largely mediated by a single specialized area of the brain. One important future challenge will be to determine how more complex cognitive operations (beyond judging the direction of a moving object) are carried out and represented in cortex, and whether the same organizational principles apply.
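To make the "query the specialist" idea a bit more concrete, here is a minimal toy sketch in Python (my own illustration, not code or data from the study): every stage processes the incoming frames, but the decision about a given attribute is read out from the stage standing in for the specialized area (a stand-in "V4" for shape, a stand-in "MT" for motion). The stage functions and numbers are invented placeholders, not models of real neural responses.

```python
# Toy "pipeline plus readout" sketch of the idea described above.
# All stages run on every input; only the readout changes with the question asked.
import numpy as np

def retina(image):
    """Transduction stand-in: light intensities -> a numeric signal."""
    return np.asarray(image, dtype=float)

def v1(signal):
    """Edge-energy stand-in: local differences along each row."""
    return np.abs(np.diff(signal, axis=-1))

def v4(edge_map):
    """Crude 'shape' summary: total contour energy."""
    return {"shape_energy": float(edge_map.sum())}

def mt(frame_t0, frame_t1):
    """Crude 'motion' summary: how much the image changed between frames."""
    return {"motion_energy": float(np.abs(frame_t1 - frame_t0).sum())}

def decide(attribute, image_t0, image_t1):
    """The whole pipeline runs, but the decision queries the specialized stage."""
    r0, r1 = retina(image_t0), retina(image_t1)
    readouts = {
        "shape": v4(v1(r0)),   # shape judgments query the 'V4' stand-in
        "motion": mt(r0, r1),  # motion judgments query the 'MT' stand-in
    }
    return readouts[attribute]

if __name__ == "__main__":
    frame_a = [[0, 0, 1, 1], [0, 0, 1, 1]]
    frame_b = [[0, 1, 1, 0], [0, 1, 1, 0]]
    print(decide("motion", frame_a, frame_b))
    print(decide("shape", frame_a, frame_b))
```

The point of the sketch is simply that every stage contributes on every trial; what looks "modular" is the readout used to answer a particular question.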

Paper

Monday, December 3, 2007

Left-Brain/Right-Brain: Wrong-Minded

There’s no shortage of left-brain/right-brain propaganda in pop culture. Browse the shelves at your local bookstore and you’ll find titles like Daniel Pink’s A Whole New Mind: Why Right-Brainers Will Rule the Future; peruse the booming educational toy market and you’ll find products like Brainy Baby’s “Left Brain” and “Right Brain” DVDs; or just open your ears to the ramblings of a motivational speaker or the chitchat of the office break room, and you’re likely to hear some reference to left- and right-brain tendencies. In fact, a quick Google search turns up tens of thousands of documents touting the concept’s usefulness for everything from improving elementary school education and business management strategies to understanding biblical symbolism, or why men are, and I quote, “beer-guzzling, TV-glued, [and] sex-driven.” You can even test your own left-brain/right-brain tendencies with a multitude of online personality tests. But don’t waste your clicks: if you tend to focus on artistic/holistic/spatial aspects of things, you will be labeled “right-brained,” whereas if you lean towards logical/detail-oriented/sequential features, you are “left-brained.” Given its pervasiveness, you may be surprised to learn that most cognitive neuroscientists – scientists who study the relation between mind and brain – cringe when they hear popular references to left- versus right-brain function. To us, hearing someone say, “let’s learn to think with our right hemispheres…” is about as stimulating as fingernails on a chalkboard.

But, you say, isn’t it true that the brain is divided into two hemispheres? Yes it is. And isn’t it the case that the two sides are not identical in function? Yes, of course. Then what’s the problem with all this left-brain/right-brain stuff? Well, let me illustrate by example. Suppose I told you I could read your personality strengths and weaknesses – your self-esteem, cautiousness, wit, secretiveness, destructiveness, and so on – simply by measuring the bumps and indentations on your skull. Perhaps you’d be interested in having me examine your fiancée (or wish I had before you tied the knot), but more likely you’d think I was blowing smoke. And you’d be right. What I’ve described, in fact, is the 19th century doctrine of phrenology, which held that different brain areas controlled different personality traits, which could be more or less developed. A well-developed trait would command more neural bulk and therefore press on the skull to produce a measurable bump on the head; vice versa for under-developed traits. As ridiculous as it sounds today, phrenology was all the rage in 19th century popular culture, even making its way into political, management, and yes, marriage decisions.

What’s interesting, though, is that while the application of phrenology was seriously misguided, the underlying science was quite legitimate, even accurate in important respects. Indeed, just replace self-esteem, cautiousness, and wit with motor control, speech, and memory, and suddenly phrenology doesn’t seem so ridiculous (well, except for that bump-on-the-head thing, but that’s not the core of the theory). In fact, that is precisely what many scientists of the time did with the idea: they ran with the fundamental concept and ditched the personality-trait lunacy.

So what does phrenology have to do with the current left-brain/right-brain mentality? In short, everything. Just like phrenology, the left-right craze is based on a fundamental scientific observation, namely that the two hemispheres are not identical in function, and just like phrenology, the concept has been seriously overblown and misapplied. The fact is, with few exceptions, just about any function or ability you can imagine involves a host of coordinated brain circuits in both hemispheres. The two sides may make somewhat different contributions to these abilities, but these differences generally pale in comparison to differences in function we see between networks within the hemispheres, such as those networks that support visual recognition versus those that enable language comprehension.

The parallels between popular left-brain/right-brain dichotomies and phrenology run even deeper, though, as both concepts are based on a more fundamental misconception about brain organization, namely that complex functions are carried out by circumscribed islands of brain tissue. Look at just about any map of brain function and you will find tidy parcellations, like cuts of beef, with labels such as "language," "memory," "vision," and "thought." But this drastically oversimplifies the picture. For example, there is no "language area." Instead, our ability to use language is supported by a coordinated and widely distributed network of circuits spanning many regions in both hemispheres. These circuits may be individually specialized in function, to be sure, but it is the integrated action of the network that gives rise to our capacity for language. Furthermore, some of these circuits are not slaves to a linguistic taskmaster, but participate in other abilities as well. The same holds true of other functions.

So to say that the left brain does one thing and the right brain does another is a throwback to phrenology (and a clumsy one at that!) that fails to recognize the more dynamic, interactive, network-based organization of brain function. So get with the network. Left-brain/right-brain is so 19th century!

Wednesday, November 28, 2007

Seminar Announcement: Tatiana Pasternak



Prof. Tatiana Pasternak
Department of Neurobiology and Anatomy
University of Rochester Medical Center

Remembering Visual Motion: Cortical Mechanisms

If you are interested in meeting with Prof. Pasternak during her visit, please contact Jayne at jayne.lee@uci.edu.

Date: Monday, December 3rd
Time: 4:00 pm
Place: SSPA 2112

Abstract:

The work in my lab is aimed at examining the circuitry subserving behavioral tasks that require processing and retaining sensory information. The focus is on the link between cortical areas traditionally associated with the processing of visual motion and regions identified with the cognitive control of visually guided behaviors. In this talk I will focus on two cortical regions: motion-processing area MT and prefrontal cortex (PFC), a region strongly associated with cognitive control. I will present evidence that neurons in both areas actively participate in tasks requiring the discrimination, retention, and retrieval/comparison of visual motion, tasks that are thus likely to place demands on both sets of neurons. We recorded the activity of neurons in MT and in PFC while monkeys compared the direction or speed of two random-dot stimuli, sample and test, separated by a memory delay. Many PFC neurons showed robust direction-selective and speed-selective responses to behaviorally relevant motion, most likely originating in MT. Although there were reliable direction-selective signals in both areas during the memory delay, these signals were largely transient, suggesting that individual neurons in neither area carry sustained memory-related signals. Such signals, however, appear to be represented at the population level, with 10-30% of neurons representing remembered motion throughout the memory delay. In both areas, responses to the test stimulus reflected access to the preceding sample direction or speed. This activity arose earlier in MT, suggesting a role in the comparison with the remembered direction. Only in PFC was this effect predictive of the forthcoming decision. These results illustrate the unique contributions of MT and PFC neurons to the task: PFC neurons faithfully reflect task-related information about visual motion and represent decisions that may be based, in part, on MT's comparison between the remembered sample and the test.

Relevant papers:

Zaksas D, Pasternak T (2006) Directional Signals in the Prefrontal Cortex and in Area MT during a Working Memory for Visual Motion Task. Journal of Neuroscience 26:11726-11742.

Pasternak T, Greenlee M (2005) Working Memory in Primate Sensory Systems. Nature Reviews Neuroscience 6:97-107.

Friday, November 16, 2007

News Flash: Perceived animacy does not predict neural activity on the posterior STS

"Visual Perception and Neural Correlates of Novel 'Biological Motion' "
John Pyles, Javier Garcia, Don Hoffman & Emily Grossman
Published in Vision Research, September 2007 issue

The human superior temporal sulcus (STS) has been identified as involved in a number of abilities, including visual perception, auditory scene analysis, multisensory integration, attention, and the understanding of social events. To put it mildly, this is a relatively large and complex region of the brain.

Perceived animacy and the perception of biological motion (such as in point-light animations) have both been identified as recruiting neural activity on the posterior extent of the STS. Virtually all studies in both of these areas have used human actions as their stimuli, which confounds visual analysis of the actor with portrayed animacy (something these sequences virtually guarantee).

John Pyles led this study, which measured neural activity using a novel stimulus set. These 'Creatures' are artificially evolved animations that have unusual body structures and gait styles but are nonetheless readily perceived as animate beings when viewed in locomotion. Using these stimuli, John showed that the posterior STS is not activated as strongly by the Creatures as it is by human actions, suggesting that animacy alone does not predict neural activity in this region.

In contrast to the STS, a number of ventral temporal brain areas responded quite strongly to these novel 'Creatures'. John is now investigating how these brain structures support the recognition of these stimuli.

For more information, see the paper here.

Thursday, November 15, 2007

Talk Announcement: Ed Awh

Dr. Ed Awh
Department of Psychology
University of Oregon, Institute of Neuroscience

Complexity, categories, and capacity in visual working memory

Monday, November 19, 2007
4:00pm
SSPA 2112

Abstract
Several paradigms have converged on a capacity limit of about 3-4 items in visual working memory. This limit exhibits robust correlations with a broad range of intelligence measures, motivating an interest in the basic determinants of memory capacity. For example, performance in the change detection paradigm – a widely used measure of capacity – declines as stimulus complexity increases. We have found, however, that increased complexity is typically associated with increased similarity between the potential sample and test items, raising the possibility that change detection with complex objects was limited by the resolution rather than by the number of items represented in working memory. Indeed, when resolution-based errors were minimized by reducing sample/test similarity, capacity estimates for the most complex objects were equivalent to those for the simplest objects (r = .84). This conclusion was also supported by measurements of the contralateral delay activity (CDA), an event-related potential waveform that provides an online measure of the number of items maintained during the delay period. CDA amplitude was equivalent for simple and complex objects. By contrast, a separate measure of neural activity during the comparison stage of the task was sensitive to object complexity, though uncorrelated with CDA amplitude. Thus, visual working memory represents a fixed number of objects, regardless of complexity. Importantly, analyses of individual differences suggest that limits in the number and resolution of representations in working memory represent distinct aspects of memory ability. Finally, I will present further evidence that this number/resolution dichotomy is useful for understanding how perceptual expertise improves memory performance, the relationship between memory ability and measures of fluid intelligence, and the factors that influence mnemonic resolution in multi-item displays.
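For readers less familiar with the change detection paradigm, capacity estimates in this literature are commonly computed with Cowan's K, which combines hit and false-alarm rates at a given set size. Whether that exact estimator underlies the numbers quoted above is my assumption, and the rates in the sketch below are invented for illustration.

```python
# Cowan's K: a standard capacity estimate for whole-display change detection.
# K = set_size * (hit_rate - false_alarm_rate). Example numbers are made up.

def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Estimated number of items held in visual working memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical observer tested with 8-item displays: 60% hits, 15% false alarms.
print(cowan_k(set_size=8, hit_rate=0.60, false_alarm_rate=0.15))  # ~3.6 items
```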

UCI Talks at Psychonomics

48th Annual Meeting of the Psychonomic Society
Hyatt Regency Hotel, Long Beach, California
Thursday, Nov. 15th - Sunday, Nov. 18th

From George Sperling:
This Friday morning from 8:00a - 10:00a you can hear six papers on attention for free at the 48th Annual Meeting of the Psychonomic Society at the Hyatt Regency Hotel in Long Beach, CA, including one by Sperling, Scofield, & Hsu at 9:20a. Lots more interesting papers to listen to during the next two days. See http://www.psychonomic.org/meet.htm

Talk Announcement: Lisa Jefferies

Dr. Lisa Jefferies
University of British Columbia
Friday, Nov. 16th 10-11am
SSL 337

Title: Temporal dynamics of attentional control: Assessing the rate of "zooming"

The vast amount of visual information in the world necessitates a selective mechanism that limits processing to objects or locations of interest. Visual attention fulfils this selective function, and may be allocated with varying degrees of success over space and time. We propose a qualitative model that accounts for the modulation of the spatial extent of the focus of attention across time, and test that model in a series of experiments. Specifically, the Attentional Blink (AB) and Lag-1 sparing were employed to test the spatiotemporal modulations of attention. When two targets are presented within a stream of distractor items, identification of the second target is impaired when it follows 100-500 ms after the first target, a phenomenon known as the attentional blink (Raymond, Shapiro & Arnell, 1992). Paradoxically, the second target is sometimes identified quite accurately when it immediately follows the first target (Lag-1 sparing; Potter et al., 1998). Lag-1 sparing always occurs when the two targets appear in the same spatial location (Visser, Bischof, & Di Lollo, 1999), but occurs with spatially separated targets only when the second target falls within the focus of attention (Jefferies et al., 2007). As such, the incidence and magnitude of Lag-1 sparing with spatially separated targets can be used to index changes in the extent of the focus of attention as a function of time. In the current research, we found a progressive, linear transition from Lag-1 sparing to AB deficit as the stimulus-onset-asynchrony (SOA) between the targets was increased. This strongly suggests that the spatial extent of the focus of attention varies linearly over time and that the expanding and shrinking of the focus of attention may be analog in nature. Additional experiments in which factors such as the spatial separation between the streams and the brightness of the targets were manipulated were also conducted, and these experiments provide further tests of the model.

Tuesday, November 6, 2007

Talk Announcement: Chi-Hung Juan

Dr. Chi-Hung Juan
Institute of Cognitive Neuroscience
National Central University, Taiwan

"Probing temporal and causal involvements of frontal eye fields in visual selection and saccade preparation with microstimulation and TMS."
Friday, Nov. 9th
12-1pm
SSPB 2214

The premotor theory of attention suggests that target processing and the generation of a saccade to the target are interdependent. Temporally precise microstimulation and transcranial magnetic stimulation (TMS) were delivered over the monkey and human frontal eye fields, the area most frequently associated with the premotor theory and with eye movements, while subjects performed a visually instructed pro-/anti-saccade task. Visual analysis and saccade preparation were clearly separated in time, as indicated by the distinct time points at which microstimulation and TMS delivery elevated saccadic deviations and latencies. These results show that visual analysis and saccade preparation can be dissociated temporally in the brain, notwithstanding any potential overlap in the anatomical areas involved.

Tuesday, October 30, 2007

Talk Announcement: Christian Habeck

Dr. Christian Habeck
Columbia University

"Multivariate approaches to neuroimaging analysis 101"
Friday, November 2, 2007
2:00-3:00pm
SSPA 2112

Abstract
As the clinical and cognitive neurosciences mature, multivariate analysis techniques for neuroimaging data have received increasing attention since they possess some attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques: (1) multivariate approaches evaluate the correlation/covariance of activation across brain regions, rather than proceeding on a voxel-wise basis; thus, their results can be more easily interpreted as a signature of neural networks. (2) Multivariate techniques also lend themselves better to the prospective application of results obtained from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, but with potentially greater statistical power and better reproducibility checks. Despite these attractive features, the barrier to entry for multivariate approaches has been high, preventing more widespread application in the community. We have therefore proposed a series of studies comparing multivariate approaches with one another and with traditional univariate approaches in didactic reports and comprehensive review papers, using simulated as well as real-world data sets.

In my presentation, I will give a simple mathematical overview of univariate and multivariate approaches. Then I'll present two examples of multivariate analysis applied to: (1) an fMRI data set from a study using a delayed-response task, and (2) a clinical data set from a study comparing healthy elderly participants with early Alzheimer's disease patients, using resting scans of regional Cerebral Blood Flow. The presentation will close with remaining challenges and directions for the future.
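As a rough, self-contained illustration of the univariate-versus-multivariate contrast described in the abstract (a sketch under my own simplifying assumptions, using synthetic data rather than anything from Dr. Habeck's work): a mass-univariate analysis tests each voxel separately, whereas a simple multivariate analysis extracts a covariance pattern spanning voxels, here via a principal component, and assigns each subject a single pattern-expression score.

```python
# Minimal sketch: univariate voxel-wise t-tests vs. a PCA covariance pattern.
# Synthetic data only; not the actual analysis pipeline from the talk.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 500

# Fake activation maps: a weak distributed "network" pattern plus noise.
pattern = rng.normal(size=n_voxels)
expression = rng.normal(loc=1.0, size=(n_subjects, 1))  # per-subject loading
data = expression * pattern + rng.normal(size=(n_subjects, n_voxels))

# Univariate: one t-test per voxel against zero (mass-univariate approach).
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=0)
print("voxels passing p < .001 (uncorrected):", int((p_vals < 1e-3).sum()))

# Multivariate: first principal component as a covariance pattern; each
# subject gets one expression score for the whole pattern.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
covariance_pattern = vt[0]                      # voxel weights
scores = centered @ covariance_pattern          # subject expression scores
print("recovered pattern vs. ground truth (|r|):",
      round(abs(np.corrcoef(covariance_pattern, pattern)[0, 1]), 2))
```

The pattern weights and subject scores are the kind of output that can be applied prospectively to a new dataset, which is one of the advantages the abstract highlights.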

Wednesday, October 24, 2007

2007-2008 CCNS Event Schedule

UC Irvine, Center for Cognitive Neuroscience
Event Calendar 2007-2008

Please note: The events and times on this calendar are subject to change throughout the year.

Wednesday, October 31, Richard Wise, University College London
1-2pm SSPA 2112

Friday, November 2 Chris Habeck, Columbia University Medical Center
2-3pm SSPA 2112

Friday, November 9 Chi-Hung Juan, National Central University, Taiwan
12-1pm, location TBA

Monday, November 19 Ed Awh, University of Oregon
4-5pm SSPA 2112

Friday, November 30 Quarterly Meeting
2-3pm SSPA 2112

Monday, December 3 Tatiana Pasternak, University of Rochester Medical Center
4-5pm SSPA 2112

Friday, January 25 Lunch Seminar
12-1pm, speaker and location TBA

Friday, February 8 Quarterly Meeting
2-3pm, location TBA

Friday, March 14 David Eagleman, Baylor College of Medicine
12-1pm Herklotz

Friday, April 18 Quarterly Meeting
2-3pm, location TBA

Friday, May 30 Lunch Seminar
12-1pm Speaker and location TBA

CCNS welcomes two new faculty members: Brewer and Krichmar

CCNS welcomes new members, Alyssa Brewer and Jeffrey Krichmar!


Alyssa Brewer
Assistant Professor, Cognitive Sciences


Brewer is a visual neuroscientist who focuses on the use of brain imaging and behavioral research to gain a better understanding of the organization and functions of the brain related to eyesight. Her research has been widely published in multiple peer-reviewed journals.

Brewer attended Stanford University, where she received a B.A. in comparative literature, a B.S. in biological science, a Ph.D. in neuroscience, and most recently, an M.D. from the School of Medicine at Stanford. She has spent the last two years working as a postdoctoral research associate at Stanford while finishing her M.D.

Jeffrey Krichmar
Assistant Professor, Cognitive Sciences


Krichmar is a computational neuroscientist whose research interests include biologically plausible models of learning and memory, the effect of neural architecture on neural function, and testing theories of the nervous system with brain-based devices that interact with the environment. Specifically, his work includes the building of robots with simulated nervous systems in order to test theoretical models of brain functions.

He received a B.S. in computer science from the University of Massachusetts at Amherst, an M.S. in computer science from George Washington University, and a Ph.D. in computational sciences and informatics from George Mason University, after which he spent 15 years as a software engineer on projects ranging from the PATRIOT Missile System at the Raytheon Corporation to Air Traffic Control for the Federal Systems Division of IBM. He later became an assistant professor at the Krasnow Institute for Advanced Study at George Mason University. Most recently, he was a senior research fellow in Theoretical Neurobiology at the Neurosciences Institute in San Diego.