Archive for the ‘Neuroimaging’ Category

The brain’s network switching stations for adaptive behavior

Friday, August 16th, 2013

I’m excited to announce that my latest scientific publication – “Multi-task connectivity reveals flexible hubs for adaptive task control” – was just published in Nature Neuroscience. The paper reports on a project I (along with my co-authors) have been working on for over a year. The goal was to use network science to better understand how human intelligence happens in the brain – specifically, our ability to rapidly adapt to new circumstances, as when learning to perform a task for the first time (e.g., how to use new technology).

The project built on our previous finding (from last year) showing that the amount of connectivity of a well-connected “hub” brain region in prefrontal cortex is linked to human intelligence. That study suggested (indirectly) that there may be hub regions that are flexible – capable of dynamically updating which brain regions they communicate with depending on the current goal.

Typical methods were not capable of more directly testing this hypothesis, however, so we took the latest functional connectivity approaches and pushed the limit, going well beyond the previous paper and what others have done in this area. The key innovation was to look at how functional connectivity changes across dozens of distinct task states (specifically, 64 tasks per participant). This allowed us to look for flexible hubs in the fronto-parietal brain network.

We found that this network contained regions that updated their global pattern of functional connectivity (i.e., inter-regional correlations) depending on which task was being performed.

In other words, the fronto-parietal network changed its brain-wide functional connectivity more than any other major brain network, and this updating appeared to code which task was being performed.
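For readers curious how such an analysis can be set up computationally, here is a minimal sketch (a toy illustration with made-up dimensions, not the actual pipeline from the paper): estimate an inter-regional correlation matrix separately for each task state, then ask how much each region's whole-brain connectivity pattern varies across those states.

```python
import numpy as np

def task_connectivity(timeseries_by_task):
    """Connectivity (correlation) matrix for each task state.

    timeseries_by_task: list of arrays, each shaped (timepoints, regions).
    Returns an array shaped (tasks, regions, regions).
    """
    return np.stack([np.corrcoef(ts.T) for ts in timeseries_by_task])

def connectivity_variability(conn_by_task):
    """Toy 'flexibility' score: how much each region's whole-brain
    connectivity pattern changes across task states."""
    return conn_by_task.std(axis=0).mean(axis=1)

# Toy example: 64 task states, 10 regions, 200 timepoints each (random data).
rng = np.random.default_rng(0)
data = [rng.standard_normal((200, 10)) for _ in range(64)]
conn = task_connectivity(data)                  # shape (64, 10, 10)
print(connectivity_variability(conn).round(3))  # one score per region
```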

What’s the significance?

These results suggest a potential mechanism for adaptive cognitive abilities in humans:
Prefrontal and parietal cortices form a network with extensive connections projecting to other functionally specialized networks throughout the brain. Incoming instructions activate component representations – coded as neuronal ensembles with unique connectivity patterns – that produce a unique global connectivity pattern throughout the brain. Since these component representations are interchangeable, it’s possible to implement combinations of instructions never seen before, allowing for rapid learning of new tasks from instructions.

Important points not mentioned or not emphasized in the journal article:

This study was highly hypothesis-driven, as it tested some predictions of our recent compositional theory of prefrontal cortex function (extended to include parietal cortex as well). That theory was first proposed earlier this year in Cole, Laurent, & Stocco (2013).

Also, as described in our online supplemental FAQ for the paper, we identified ‘adaptive task control’ flexible hubs, but there may be other kinds of flexible hubs in the brain. For instance, there may be flexible hubs for stable task control (maintaining task information via connectivity patterns over extended periods of time, only updating when necessary).

See our online supplemental FAQ for more important points that were not mentioned in the journal article. Additional information is also available from a press release from Washington University.

–MWCole

The evolutionary importance of rapid instructed task learning (RITL)

Sunday, January 23rd, 2011

We are rarely alone when learning something for the first time. We are social creatures, and whether it’s a new technology or an ancient tradition, we typically benefit from instruction when learning new tasks. This form of learning–in which a task is rapidly (within seconds) learned from instruction–can be referred to as rapid instructed task learning (RITL; pronounced “rittle”). Despite the fundamental role this kind of learning plays in our lives, it has been largely ignored by researchers until recently.

My Ph.D. dissertation investigated the evolutionary and neuroscientific basis of RITL.

RITL almost certainly played a tremendous role in shaping human evolution. The selective advantages of RITL for our species are clear: having RITL abilities allows us to partake in a giant web of knowledge shared with anyone willing to instruct us. We might have received instructions to avoid a dangerous animal we have never seen before (e.g., a large cat with a big mane), or instructions on how to make a spear and kill a lion with it. The possible scenarios in which RITL would have helped increase our chances of survival are virtually endless.

There are two basic forms of RITL. (more…)

Finding the most important brain regions

Tuesday, June 29th, 2010

When you type a search into Google it figures out the most important websites based in part on how many links each has from other websites. Taking up precious website space with a link is costly, making each additional link to a page a good indicator of importance.

We thought the same logic might apply to brain regions. Making a new brain connection (and keeping it) is metabolically and developmentally costly, suggesting that regions with many connections must be providing important enough functions to make those connections worth the sacrifice.

We developed two new metrics for quantifying the most connected—and therefore likely the most important—brain regions in a recently published study (Cole et al. (2010). Identifying the brain’s most globally connected regions, NeuroImage 49(4): 3132-3148).
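To make the idea concrete, here is a toy sketch of this kind of global-connectivity ranking (my own illustration, not the exact metrics from the paper): one variant averages all of a region's connection strengths, weak links included, while the other counts only connections above a strength threshold.

```python
import numpy as np

def global_connectivity(conn, threshold=None):
    """Rank regions by how connected they are to the rest of the brain.

    conn: (regions, regions) correlation matrix.
    threshold: if None, average all connection strengths (weak links count);
               otherwise count only connections stronger than the threshold.
    """
    conn = conn.copy()
    np.fill_diagonal(conn, 0)                        # ignore self-connections
    if threshold is None:
        return np.abs(conn).mean(axis=1)             # weighted global connectivity
    return (np.abs(conn) > threshold).sum(axis=1)    # thresholded degree

# Toy example with random "BOLD" data for 20 regions.
rng = np.random.default_rng(1)
bold = rng.standard_normal((300, 20))                # (timepoints, regions)
conn = np.corrcoef(bold.T)
print(global_connectivity(conn))                     # weak connections included
print(global_connectivity(conn, threshold=0.3))      # strong connections only
```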

We found that regions belonging to two large-scale brain networks were among the top 5% of globally connected regions using both metrics (see figure above). The cognitive control network (CCN) is involved in attention, working memory, decision-making and other important high-level cognitive processes (see Cole & Schneider, 2007). In contrast, the default-mode network (DMN) is typically anti-correlated with the CCN and is involved in mind-wandering, long-term memory retrieval, and self-reflection.

Needless to say, these networks have highly important roles! Without them we would have no sense of self-control (via the CCN) or even a sense of self to begin with (via the DMN).

However, there are other important functions (such as arousal, sleep regulation, breathing, etc.) that are not reflected here, most of which involve subcortical regions. These regions are known to project widely throughout the brain, so why aren’t they showing up?

It turns out that these subcortical regions only show up for one of the two metrics we used. This metric—unlike the other one—includes low-strength connections. Subcortical regions tend to be small and project weak connections all over the brain, such that only the metric that includes weak connections could pick them up.

I recently found out that this article received the 2010 NeuroImage Editor’s Choice Award (Methods section). I was somewhat surprised by this, since I thought there wasn’t much interest in the study. When I looked up the most popular articles at NeuroImage, however, I found out it was the 7th most downloaded article from January to May 2010. Hopefully this interest will lead to some innovative follow-ups to try to understand what makes these brain regions so special!

-MWCole

Cingulate Cortex and the Evolution of Human Uniqueness

Thursday, November 12th, 2009

Figuring out how the brain decides between two options is difficult. This is especially true for the human brain, whose activity is typically accessible only via the small and occasionally distorted window provided by new imaging technologies (such as functional MRI (fMRI)).

In contrast, monkey brain activity can typically be measured more accurately, since the skull can be opened and brain activity recorded directly.

Despite this, if you were to look just at the human research, you would consider it a fact that the anterior cingulate cortex (ACC) increases its activity during response conflict. The thought is that this brain region detects that you are having trouble making decisions, and signals other brain regions to pay more attention.

If you were to only look at research with monkeys, however, you would think otherwise. No research with macaque monkeys (the ‘non-human primate’ typically used in neuroscience research) has found conflict activity in ACC.

My most recent publication looks at two possible explanations for this discrepancy: 1) Differences in methods used to study these two species, and 2) Fundamental evolutionary differences between the species.

(more…)

A Meta-Meta-Analysis of Brain Functions

Friday, October 17th, 2008

Thousands of brain imaging studies are published each year. A subset of these studies are replications, or slight variations, of previous studies. Attempting to come to a solid conclusion based on the complex brain activity patterns reported by all these replications can be daunting. Meta-analysis is one tool that has been used to make sense of it all.

Meta-analyses take locations of brain activity in published scientific papers and pool them together to see if there is any consistency.

This is typically done using a standardized brain that all the studies fit their data to (e.g., Talairach). Activation coordinates are then placed on a template brain as dots. When the dots tend to clump together, the author can claim that some consistency is present across studies. See the first figure for an example of this kind of result.

More sophisticated ways of doing this have emerged, however. One of these advanced methods is called activation likelihood estimation (ALE). This method was developed by Peter Turkeltaub et al. (in conjunction with Jason Chein and Julie Fiez) in 2002 and extended by Laird et al. in 2005.

ALE computes the probability of each part of the brain being active across studies. This is much more powerful than simple point-plotting because it takes much of the guess-work out of deciding if a result is consistent across studies or not.
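Here is a toy sketch of that idea (the grid size, smoothing width, and the way study maps are combined are simplifying assumptions of mine; the published ALE method models each reported focus as a 3D Gaussian in standardized space and handles the statistics far more rigorously): each study's reported coordinates are smoothed into an activation-probability map, and the maps are combined as a probabilistic union.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ale_map(studies, shape=(40, 48, 40), sigma=2.0):
    """Toy activation likelihood estimation.

    studies: one list of (x, y, z) voxel coordinates per study.
    Returns the probability that each voxel is active in at least one
    study, assuming independence across studies.
    """
    prob_not_active = np.ones(shape)
    for coords in studies:
        focus_map = np.zeros(shape)
        for x, y, z in coords:
            focus_map[x, y, z] = 1.0
        study_prob = gaussian_filter(focus_map, sigma)  # spatial uncertainty
        study_prob /= study_prob.max()                  # scale to 0-1
        prob_not_active *= 1.0 - study_prob
    return 1.0 - prob_not_active

# Two toy studies reporting nearby activation peaks.
ale = ale_map([[(20, 24, 20)], [(21, 25, 20), (10, 10, 10)]])
print(float(ale.max()))  # highest cross-study activation likelihood
```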

(more…)

Measuring Innate Functional Brain Connectivity

Saturday, March 29th, 2008

Functional magnetic resonance imaging (fMRI), a method for safely measuring brain activity, has been around for about 15 years. Over the last 10 of those years a revolutionary, if mysterious, method has been developed using this technology. This method, resting state functional connectivity (rs-fcMRI), has recently gained popularity for its putative ability to measure how brain regions interact innately (outside of any particular task context).

Being able to measure innate functional brain connectivity would allow us to know whether a set of regions active during a particular task is, in fact, generally well connected enough to be considered a network. We could then predict what brain regions are likely to be active together in the future. This could, in turn, motivate us to look deeper at the nature of each brain region and how it contributes to the neuronal networks underlying our behavior.

Rs-fcMRI uses correlations of very slow fluctuations in fMRI signals (< 0.1 Hz) when participants are at rest to determine how regions are connected. The origin of these slow fluctuations has been unclear.
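In practice, an rs-fcMRI analysis along these lines can be sketched as follows (the sampling rate, frequency band, and filter choice below are illustrative assumptions, not a specific published protocol): band-pass filter each region's resting-state BOLD time series to keep the slow fluctuations, then correlate the filtered signals between regions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def resting_state_connectivity(bold, tr=2.0, low=0.01, high=0.1):
    """Correlate slow BOLD fluctuations between regions.

    bold: (timepoints, regions) resting-state time series.
    tr: repetition time in seconds (sampling interval).
    low, high: band-pass edges in Hz (keep roughly 0.01-0.1 Hz).
    """
    nyquist = 0.5 / tr
    b, a = butter(2, [low / nyquist, high / nyquist], btype="band")
    filtered = filtfilt(b, a, bold, axis=0)   # zero-phase band-pass filter
    return np.corrcoef(filtered.T)            # regions x regions correlations

# Toy example: 240 volumes (8 minutes at TR = 2 s), 15 regions of random data.
rng = np.random.default_rng(2)
conn = resting_state_connectivity(rng.standard_normal((240, 15)))
print(conn.shape)  # (15, 15)
```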

Some have argued that the thoughts and daydreams of participants “at rest” may explain the strong correlations typically found between brain regions. Recently, Vincent et al. (2007) sought to address this possibility using fMRI with anesthetized monkeys.

The idea is that if unconscious monkey brains show low-frequency correlated activity across known brain networks, then such findings in humans at conscious rest are likely not due to spurious thoughts, but something more innate. (more…)

The Cognitive Control Network

Sunday, October 7th, 2007

I recently published my first primary-author research study (Cole & Schneider, 2007).

The study used functional MRI to discover a network of brain regions responsible for conscious will (i.e., cognitive control). It also revealed the network’s specialized parts, which each uniquely contribute to creating the emergent property of conscious will.

I believe this research contributes substantially to our understanding of how we control our own thoughts and actions based on current goals. Much remains a mystery, but this study clearly shows the existence of a functionally integrated yet specialized network for cognitive control.

What is cognitive control? It is the set of brain processes necessary for goal-directed thought and action. Remembering a phone number before dialing requires cognitive control. Also, anything outside routine requires cognitive control (because it’s novel and/or conflicting with what you normally do). This includes, among other things, voluntarily shifting attention and making decisions.

What brain regions are involved? A mountain of evidence is accumulating that a common set of brain regions are involved in cognitive control. We looked for these regions specifically, and verified that they were active during our experiment [see top figure]. The brain regions are spread across the cortex, from the front to the back to either side. However, it’s not the whole brain: there are distinct parts that are involved in cognitive control and not other behavioral demands. (more…)

Magnetoencephalography

Monday, August 20th, 2007

In the dark confines behind our eyes lies flesh full of mysterious patterns, constituting our hopes, desires, knowledge, and everything else fundamental to who we are. Since at least the time of Hippocrates we have wondered about the nature of this flesh and its functions. Finally, after thousands of years of wondering we are now able to observe the mysterious patterns of the living brain, with the help of neuroimaging.

First, electroencephalography (EEG) showed us that these brain patterns have some relation in time to our behaviors. EEG showed us when things happen in the brain. More recent technologies such as functional magnetic resonance imaging (fMRI) then showed us where things happen in the brain.

It has been suggested that true insights into these brain patterns will arise when we can understand the patterns’ complex spatio-temporal nature. Thus, only with sufficient spatial and temporal resolution will we be able to decipher the mechanisms behind the brain patterns, and as a result the mechanisms behind ourselves.

Magnetoencephalography (MEG) may help to provide such insight. This method uses superconducting sensors to detect subtle changes in the magnetic fields surrounding the head. These changes reflect the patterns of neural activity as they occur in the brain. Unlike fMRI (and similar methods), MEG can measure neural activity at a very high temporal resolution (>1 kHz). In this respect it is similar to EEG. However, unlike EEG, MEG patterns are not distorted by the skull and scalp, thus providing an unprecedented level of spatio-temporal resolution for observing the neural activity underlying our selves.

Though MEG has been around for several decades, new advances in the technology are providing unprecedented abilities to observe brain activity. Of course, the method is not perfect by any means. As always, it is a method complementary to others, and should be used in conjunction with other noninvasive (and, where appropriate, invasive) neuroimaging methods.

MEG relies on something called a superconducting quantum interference device (SQUID). Many of these SQUIDs are built into a helmet, which is cooled with liquid helium and placed around the head. Extremely small magnetic fields created by neural activity can then be detected with these SQUIDs and recorded to a computer for later analysis.

I recently got back from a trip to Finland, where I learned a great deal about MEG. I’m planning to use the method to observe the flow of information among brain regions during cognitive control tasks involving decision making, learning, and memory. I’m sure news of my work in this area will eventually make it onto this website.

-MC

History’s Top Brain Computation Insights: Hippocampus binds features

Monday, June 18th, 2007

Hippocampus is involved in feature binding for novel stimuli (McClelland, McNaughton, & O'Reilly – 1995, Knight – 1996, Hasselmo – 2001, Ranganath & D'Esposito – 2001)

McClelland et al. demonstrated that, based on its role in episodic memory encoding, the hippocampus can rapidly learn arbitrary associations.

This was in contrast to neocortex, which they showed learns slowly in order to develop better generalizations (knowledge not tied to a single episode). This theory was able to explain why patient H.M. knew (for example) about JFK's assassination even though he lost his hippocampus in the 1950s.

Robert Knight provided evidence for a special place for novelty in hippocampal function by showing a different electrical response to novel stimuli in patients with hippocampal damage.

These two findings together suggested that the hippocampus may be important for binding the features of novel stimuli, even over short periods.

This was finally verified by Hasselmo et al. and Ranganath & D'Esposito in 2001. They used functional MRI to show that a portion of the hippocampal formation is more active during working memory delays when novel stimuli are used.

This suggests that hippocampus is not just important for long-term memory. Instead, it is important for short-term memory and perhaps novel perceptual binding in general.

Some recent evidence suggests that hippocampus may be important for imagining the future, possibly because binding of novel features is necessary to create a world that does not yet exist (for review see Schacter et al. 2007).

-MC

Grand Challenges of Neuroscience: Day 2

Wednesday, May 2nd, 2007

Topic 2: Conflict and Cooperation

Generally, cognitive neuroscience aims to explain how mental processes such as believing, knowing, and inferring arise in the brain and affect behavior.  Two behaviors that have important effects on the survival of humans are cooperation and conflict. 

According to the NSF committee convened last year, conflict and cooperation is an important focus area for future cognitive neuroscience work.  Although research in this area has typically been the domain of psychologists, it seems that the time is ripe to apply findings from neuroscience to ground psychological theories in the underlying biology.

Neuroscience has produced a large amount of information about the brain regions that are relevant to social interactions. For example, the amygdala has been shown to be involved in strong emotional responses. The "mirror" neuron system in the frontal lobe lets us put ourselves in someone else's shoes by representing others' actions as though they were our own. Finally, the superior temporal gyrus and orbitofrontal cortex, normally involved in language and reward respectively, have also been shown to be involved in social behaviors.

Experiments?

The committee has left it up to us to come up with a way to study these phenomena! How can we study conflict and cooperation from a cognitive neuroscience perspective?

At least two general approaches come to mind. The first is fMRI studies in which social interactions are simulated (or carried out remotely) over a computer link to the experiment participant.  A range of studies of this sort have recently begun to appear investigating trust and decision-making in social contexts.

The second general approach that comes to mind is to use neurocomputational simulations of simple acting organisms with common or differing goals. Over the past few years, researchers have been carrying out studies with multiple interacting "agents" that "learn" through the method of Reinforcement Learning.

Reinforcement Learning is a family of artificial intelligence algorithms that allow "agents" to develop behaviors through trial and error in an attempt to meet some goal that provides reward in the form of positive numbers. Each agent is defined as a small program with state (e.g., location, sensory input) and a memory or "value function" that keeps track of how much numerical reward it expects to obtain by choosing a possible action.
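For concreteness, here is a minimal sketch of such an agent (a tabular Q-learning variant; the states, actions, and reward values are made-up placeholders rather than any particular published simulation):

```python
import random
from collections import defaultdict

class Agent:
    """Tiny tabular Q-learning agent: observe a state, pick an action, learn from reward."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # the "value function": (state, action) -> expected reward
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Mostly exploit what has worked so far, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Nudge the estimate toward the reward plus the value of the best next action.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Toy usage: the agent learns that "share" pays off in the "meet" state.
agent = Agent(actions=["share", "hoard"])
for _ in range(100):
    action = agent.choose("meet")
    reward = 1.0 if action == "share" else 0.0
    agent.learn("meet", action, reward, "meet")
print(agent.choose("meet"))  # usually "share" after training
```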

Although normally thought to be of interest only to computer scientists, Reinforcement Learning has recently attracted the attention of cognitive neuroscientists because of emerging evidence that something like it might be used in the brain.

By providing these agents with a goal that can only be achieved through some measure of cooperation, or only under some pressure, issues of conflict and cooperation can be studied in a perfectly controlled computer simulation environment.

-PL 

History’s Top Brain Computation Insights: Day 26

Friday, April 27th, 2007

26) Some complex object categories, such as faces, have dedicated areas of cortex for processing them, but are also represented in a distributed fashion (Kanwisher – 1997, Haxby – 2001)

Early in her career Nancy Kanwisher used functional MRI (fMRI) to seek modules for perceptual and semantic processing. She was fortunate enough to discover what she termed the fusiform face area: an area of extrastriate cortex specialized for face perception.

This finding was immediately controversial. It was soon shown that other object categories also activate this area. Being the adept scientist that she is, Kanwisher showed that the area was nonetheless more active for faces than any other major object category.

Then came a slew of arguments claiming that the face area was in fact an 'expertise area'. This hypothesis holds that any visual category for which the viewer has sufficient expertise should activate the fusiform face area.

This argument is based on findings in cognitive psychology showing that many aspects of face perception once thought to be unique are in fact due to expertise (Diamond & Carey, 1986). Thus, a car can show many of the same perceptual effects as faces for a car expert. The jury is still out on this issue, but it appears that there is in fact a small area in the right fusiform gyrus dedicated to face perception (see Kanwisher's evidence).

James Haxby entered the fray in 2001, showing that even after taking out the face area from his fMRI data he could predict the presence of faces based on distributed and overlapping activity patterns across visual cortex. Thus it was shown that face perception, like visual perception of other kinds of objects, is distributed across visual cortex.

Once again, Kanwisher stepped up to the plate. (more…)

History’s Top Brain Computation Insights: Day 24

Wednesday, April 25th, 2007

24) Cognitive control processes are distributed within a network of distinct regions (Goldman-Rakic – 1988, Posner – 1990, Wager & Smith – 2004, Cole & Schneider – 2007)

Researchers investigating eye movements and attention recorded from different parts of the primate brain and found several regions showing very similar neural activity. Goldman-Rakic proposed the existence of a specialized network for the control of attention.

This cortical system consists of the lateral frontal cortex (fronto-polar, dorsolateral, frontal eye fields), medial frontal cortex (anterior cingulate, pre-SMA, supplementary eye fields), and posterior parietal cortex. Subcortically, dorsomedial thalamus and superior colliculus are involved, among others.

Many computational modelers emphasize the emergence of attention from the local organization of sensory cortex (e.g., local competition). However, when a shift in attention is task-driven (i.e., top-down) then it appears that a specialized system for attentional control drives activity in sensory cortex. Many properties of attention likely arise from the organization of sensory cortex, but empirical data indicate that this is not sufficient.

With the advent of neuroimaging in humans (PET and fMRI), Posner et al. found regions very similar to those reported by Goldman-Rakic. They found that some regions are related more to orienting to stimuli, while others are related more to cognitive control (i.e., controlled processing).

After many fMRI studies of cognitive control were published, Wager et al. performed a meta-analysis looking at most of this research. They found a set of cortical regions active in nearly all cognitive control tasks.

My own work with Schneider (in press) indicates that these regions form an innate network, which is better connected than the rest of cortex on average. We used resting state correlations of fMRI BOLD activity to determine this. This cognitive control network is involved in controlled processing in that it has greater activity early in practice relative to late in practice, and has greater activity for conflicting responses (e.g., the Stroop task).

Though these regions have similar responses, they are not redundant. Our study showed that lateral prefrontal cortex is involved in maintaining relevant task information, while medial prefrontal cortex is involved in preparing and making response decisions. In most cases these two cognitive demands are invoked at the same time; only by separating them in time were we able to show specialization within the cognitive control network. We expect that other regional specializations will be found with more work.

I'll be covering my latest study in more detail once it is published (it has been accepted for publication at NeuroImage and should be published soon). The above figure is from that publication. It lists the six regions within the human cognitive control network. These regions include dorsolateral prefrontal cortex (DLPFC), inferior frontal junction (IFJ), dorsal pre-motor cortex (dPMC), anterior cingulate / pre-supplementary motor area (ACC/pSMA), anterior insula cortex (AIC), and posterior parietal cortex (PPC).

A general computational insight arising from this work (starting with Goldman-Rakic) is that cortex is composed of specialized regions that form specialized networks. This new paradigm for viewing brain function weds the old warring concepts of localized specialization and distributed function.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

Human Versus Non-Human Neuroscience

Saturday, March 24th, 2007

Most neuroscientists don't use human subjects, and many tend to forget this important point: 
All neuroscience with non-human subjects is theoretical.

If the brain of a mouse is understood in exquisite detail, it is only relevant (outside veterinary medicine) in so far as it is relevant to human brains.

Similarly, if a computational model can illustrate an algorithm for storing knowledge in distributed units, it is only as relevant as it is similar to how humans store knowledge.

It follows from this point that there is a certain amount of uncertainty involved in any non-human research. An experiment can be brilliantly executed, but does it apply to humans?

If we circumvent this uncertainty problem by looking directly at humans, another issue arises: only non-invasive techniques can be used with humans, and those techniques tend to involve the most uncertainty.

For instance, fMRI is a non-invasive technique that can be used to measure brain processes in humans. However, it measures blood oxygenation levels, which are only indirectly related to neural activity. Thus, unlike with animal models, measures of neuronal activity are surrounded by an extra layer of uncertainty in humans.

So, if you're a neuroscientist you have to "choose your poison": Either deal with the uncertainty of relevance to humans, or deal with the uncertainty of the processes underlying the measurable signals in humans.
(more…)

Predicting Intentions: Implications for Free Will

Thursday, March 8th, 2007

News about a neuroimaging group's attempts to predict intentions hit the wire a few days ago. The major theme was how mindreading might be used for unethical purposes.

What about its more profound implications?

If your intentions can be predicted before you've even made a conscious decision, then your will must be determined by brain processes beyond your control. There cannot be complete freedom of will if I can predict your decisions before you do!

Dr. Haynes, the researcher behind this work, spoke at Carnegie-Mellon University last October. He explained that he could use functional MRI to determine what participants were going to decide several seconds before that decision was consciously made. This was a free choice task, in which the instruction was to press a button whenever the participant wanted. In a separate experiment the group could predict if a participant was going to add or subtract two numbers.

In a way, this is not very surprising. In order to make a conscious decision we must be motivated by either external or internal factors. Otherwise our decisions would just be random, or boringly consistent. Decisions in a free choice task are likely driven by a motivation to move (a basic instinct likely originating in the globus pallidus) and to keep responses spaced within a certain time window.

Would we have a coherent will if it couldn't be predicted by brain activity? It seems unlikely, since the conscious will must use information from some source in order to make a reasoned decision. Otherwise we would be incoherent, random beings with no reasoning behind our actions.

In other words, we must be fated to some extent in order to make informed and motivated decisions.

-MC 

Eliminating Common Misconceptions About fMRI

Friday, February 23rd, 2007

Most researchers in neuroscience use animal models.

Though most neuroscientists are interested in understanding the human brain, they can use more invasive techniques with animal brains. In exchange for these invasive abilities they must assume that other animals are similar enough to humans that they can actually learn something about humans in the process.

Functional magnetic resonance imaging (fMRI) is a non-invasive technique for measuring changes in local blood flow (which are significantly correlated with changes in neural activity) in the brain. fMRI measures what is called the blood oxygen level-dependent (BOLD) signal. Because it is non-invasive it can be used with human subjects.

Researchers like ourselves recognize the value of animal research, especially when the behavior being investigated is similar between the studied species and humans.

However, there is at least one fundamental cognitive difference between humans and all other animals, and likely many more given the dominant position of our species.
For researchers like ourselves it is much more interesting to learn something about the human brain (the item of interest) than about, say, the rat brain.

Why do some neuroscientists think that using fMRI to study the neural basis of cognition in humans is of little value?

Many have heard that there are issues with fMRI as a technique. There are (like any technique), but not as many as most believe.

Here are some common misconceptions about fMRI:
(more…)