Archive for the ‘Our Publications’ Category

The immune system of the mind is in frontoparietal cortex

Friday, July 25th, 2014

The frontoparietal control system is to the mind what the immune system is to the body. It may oversimplify the situation, but we’re finding it’s a useful metaphor nonetheless. Indeed, we’ve just published a new theory paper explaining that there is already an avalanche of evidence supporting this metaphor. Even though much work remains to fully establish the theory, we’re very excited about it, as it appears able to explain much that is mysterious about mental illness. Most exciting is that the theory may unify the understanding and treatment of all mental illnesses simultaneously.

Of course, mental illnesses are very complex, and problems in most parts of the brain can contribute. However, recent findings suggest a particular brain network may be especially important for predicting and curing mental disease: the frontoparietal control network (see yellow areas in figure). We and others have found that this network is not only highly connected to many other parts of the brain (i.e., its regions are hubs), but it also shifts that connectivity dynamically to specify the current task at hand. This means that any particular goal you are focusing on – such as solving a puzzle, finding food, or cheering yourself up when you feel sad – will involve dynamic interactions with this network (to maintain and implement that goal).

Applying this basic understanding of goal-directed thoughts and actions to mental illness, we realized that deficits in this brain network may be critical for explaining many of the cognitive symptoms – such as the inability to concentrate on the task at hand – experienced across a wide variety of mental diseases. Further, we realized that many of the emotional symptoms of mental disease are indirectly influenced by this network, since emotional regulation (e.g., reducing phobia-related anxiety) involves brain interactions with this network. This suggests this network may regulate symptoms and promote mental health generally, much like the body’s immune system regulates pathogens to promote physical health.

Another way to think of this is in terms of an interaction between a regulator, like a thermostat, and the thing being regulated, like the temperature in a room. Similar to the regulation of temperature, the frontoparietal system maintains a goal state for distributed brain activity patterns (like setting the target temperature on a thermostat) and searches for activity patterns that will shift dysfunctional brain activity toward that goal.
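
To make the analogy concrete, here is a minimal sketch of a generic feedback regulator in Python. It is purely an illustration of the regulator concept, not a model of actual neural dynamics; the variable names (goal_state, gain) and the simple error-correcting update rule are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of the regulator analogy (not a model of real neural dynamics).
# A "goal" pattern is maintained, the current activity pattern is compared to it,
# and corrective input proportional to the mismatch nudges activity toward the goal.

rng = np.random.default_rng(0)
goal_state = rng.normal(size=10)          # hypothetical target activity pattern
activity = rng.normal(size=10)            # hypothetical dysfunctional activity
gain = 0.2                                # strength of top-down regulation

for step in range(50):
    error = goal_state - activity         # mismatch monitored by the regulator
    activity += gain * error              # corrective (top-down) influence
    activity += rng.normal(scale=0.05, size=10)  # ongoing noise/perturbations

print(np.linalg.norm(goal_state - activity))  # small residual mismatch remains
```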

As covered in our theory paper, it is well established that the frontoparietal system has all the key properties of a regulator: it maintains goal information, it has access to many other parts of the brain, and it affects distal parts of the brain according to the maintained goal. Further, there is evidence that processes such as emotional regulation during cognitive behavioral therapy increase activity in the frontoparietal system, suggesting this brain system works harder when cognitive strategies are used to facilitate mental health.

Perhaps the most exciting prediction of this theory is that enhancing the frontoparietal system may reduce all symptoms of all mental diseases using a single treatment. This is because the frontoparietal system is domain general, meaning it directs goal-directed processes across all aspects of the mind and therefore all possible mental symptoms. In practice there will certainly be exceptions to this, yet simultaneous progress on reducing even just 50% of symptoms would be a tremendous advance.

How might we enhance the frontoparietal system? Perhaps using drugs that differentially influence this system (e.g., dopamine agonists) or direct stimulation of the system (e.g., using transcranial magnetic or current stimulation). Since the frontoparietal system can be reconfigured using verbal instructions, however, building on carefully structured talk therapies may be an especially targeted and effective approach. In particular, the frontoparietal system is known to implement rapid instructed task learning (RITL) – the brain’s ability to carry out novel behaviors based on instructions. Ultimately, this theory suggests the proper combination of frontoparietal system enhancement through direct influence (drugs and/or stimulation), talk therapy, and symptom-specific interventions will allow us to make major progress toward curing a wide variety of mental diseases.

MWCole

The brain’s network switching stations for adaptive behavior

Friday, August 16th, 2013

I’m excited to announce that my latest scientific publication – “Multi-task connectivity reveals flexible hubs for adaptive task control” – was just published in Nature Neuroscience. The paper reports on a project I (along with my co-authors) have been working on for over a year. The goal was to use network science to better understand how human intelligence happens in the brain – specifically, our ability to rapidly adapt to new circumstances, as when learning to perform a task for the first time (e.g., how to use new technology).

The project built on our previous finding (from last year) showing that the amount of connectivity of a well-connected “hub” brain region in prefrontal cortex is linked to human intelligence. That study suggested (indirectly) that there may be hub regions that are flexible – capable of dynamically updating what brain regions they communicate with depending on the current goal.

Typical methods were not capable of more directly testing this hypothesis, however, so we took the latest functional connectivity approaches and pushed the limit, going well beyond the previous paper and what others have done in this area. The key innovation was to look at how functional connectivity changes across dozens of distinct task states (specifically, 64 tasks per participant). This allowed us to look for flexible hubs in the fronto-parietal brain network.

We found that this network contained regions that updated their global pattern of functional connectivity (i.e., inter-regional correlations) depending on which task was being performed.

In other words, the fronto-parietal network changed its brain-wide functional connectivity more than any other major brain network, and this updating appeared to code which task was being performed.
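
For readers who want a sense of how “connectivity updating” can be quantified, here is a minimal sketch in Python. It uses placeholder random data and assumed array shapes rather than the published pipeline; the idea is simply to compute one functional connectivity matrix per task state and then measure how much each region’s whole-brain connectivity pattern varies across tasks.

```python
import numpy as np

# Minimal sketch (not the published pipeline): quantify how much each region's
# whole-brain functional connectivity pattern changes across task states.
# `task_timeseries` is a hypothetical array: (n_tasks, n_timepoints, n_regions).

rng = np.random.default_rng(1)
n_tasks, n_timepoints, n_regions = 64, 100, 50
task_timeseries = rng.normal(size=(n_tasks, n_timepoints, n_regions))  # placeholder data

# One functional connectivity (correlation) matrix per task state
fc_per_task = np.array([np.corrcoef(ts.T) for ts in task_timeseries])  # (n_tasks, n_regions, n_regions)

# "Connectivity updating": variability of each region's whole-brain FC pattern across tasks
fc_variability = fc_per_task.std(axis=0).mean(axis=1)  # one value per region

flexible_hub_candidates = np.argsort(fc_variability)[::-1][:5]  # most variable regions
print(flexible_hub_candidates)
```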

What’s the significance?

These results suggest a potential mechanism for adaptive cognitive abilities in humans:
Prefrontal and parietal cortices form a network with extensive connections projecting to other functionally specialized networks throughout the brain. Incoming instructions activate component representations – coded as neuronal ensembles with unique connectivity patterns – that together produce a unique global connectivity pattern throughout the brain. Since these component representations are interchangeable, it’s possible to implement combinations of instructions never seen before, allowing for rapid learning of new tasks from instructions.
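
To illustrate the compositional idea, here is a small Python sketch. It is an illustration of the proposed mechanism rather than the published model; the rule names and the simple averaging scheme are assumptions made for the example. The point is that a novel task’s configuration can be assembled from the connectivity “fingerprints” of familiar instructed components.

```python
import numpy as np

# Minimal sketch of the compositional idea (an illustration, not the published model).
# Each instructed rule component is assumed to have its own connectivity "fingerprint";
# a novel task's global connectivity pattern is composed by combining the fingerprints
# of its component rules, so never-before-seen rule combinations can still be configured.

rng = np.random.default_rng(2)
n_regions = 50
rule_components = {                       # hypothetical component representations
    "attend_color": rng.normal(size=n_regions),
    "same_judgment": rng.normal(size=n_regions),
    "respond_left_hand": rng.normal(size=n_regions),
}

def compose_task(*rules):
    """Combine component connectivity fingerprints into one task configuration."""
    return np.mean([rule_components[r] for r in rules], axis=0)

# A novel task assembled from familiar components, as in instructed task learning
novel_task_pattern = compose_task("attend_color", "same_judgment", "respond_left_hand")
print(novel_task_pattern.shape)
```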

Important points not mentioned or not emphasized in the journal article:

This study was highly hypothesis-driven, as it tested some predictions of our recent compositional theory of prefrontal cortex function (extended to include parietal cortex as well). That theory was first proposed earlier this year in Cole, Laurent, & Stocco (2013).

Also, as described in our online supplemental FAQ for the paper, we identified ‘adaptive task control’ flexible hubs, but there may be other kinds of flexible hubs in the brain. For instance, there may be flexible hubs for stable task control (maintaining task information via connectivity patterns over extended periods of time, only updating when necessary).

See our online supplemental FAQ for more important points that were not mentioned in the journal article. Additional information is also available from a press release from Washington University.

–MWCole

Having more global brain connectivity with some regions enhances intelligence

Friday, July 6th, 2012

A new study – titled “Global Connectivity of Prefrontal Cortex Predicts Cognitive Control and Intelligence” – was published just last week. In it, my co-authors and I describe our research showing that connectivity with a particular part of the prefrontal cortex can predict how intelligent someone is.

We measured intelligence using “fluid intelligence” tests, which measure your ability to solve novel visual puzzles. It turns out that scores on these tests correlate with important life outcomes like academic and job success. So, finding a neuroscientific factor underlying fluid intelligence might have some fairly important implications.

It’s actually relatively unclear exactly what fluid intelligence tests measure (what helps you solve novel puzzles, exactly?), so we also measured a more basic “cognitive control” ability thought to be related to fluid intelligence – working memory. This measures your ability to maintain and manipulate information in mind in a goal-directed manner.

Overall (i.e., global) brain connectivity with a part of left lateral prefrontal cortex (see figure above) could predict both fluid intelligence and cognitive control abilities.
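
For concreteness, here is a minimal sketch of how global brain connectivity can be computed and related to intelligence scores. It uses placeholder random data and assumed variable names (including a hypothetical region index standing in for left lateral prefrontal cortex), not our actual analysis pipeline.

```python
import numpy as np

# Minimal sketch: a region's "global brain connectivity" is its average functional
# connectivity with every other region, which is then related to individual
# differences in fluid intelligence scores across subjects.

rng = np.random.default_rng(3)
n_subjects, n_timepoints, n_regions = 30, 200, 100
region_of_interest = 42                             # hypothetical lateral PFC region index
fluid_intelligence = rng.normal(size=n_subjects)    # placeholder test scores

gbc = np.empty(n_subjects)
for s in range(n_subjects):
    timeseries = rng.normal(size=(n_timepoints, n_regions))  # placeholder fMRI data
    fc = np.corrcoef(timeseries.T)                            # region-by-region correlations
    fc[np.diag_indices_from(fc)] = np.nan                     # ignore self-connections
    gbc[s] = np.nanmean(fc[region_of_interest])               # mean connectivity with all other regions

# Simple across-subject association between GBC and fluid intelligence
print(np.corrcoef(gbc, fluid_intelligence)[0, 1])
```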

What does this mean? One possibility is that this prefrontal region is a “flexible hub” that uses its extensive brain-wide connectivity to monitor and influence other brain regions in a goal-directed manner. This may sound a bit like some kind of “homunculus” (little man) that single-handedly implements all brain functions, but in fact we’re suggesting it’s more like a feedback control system of the kind often used in engineering, that it only helps implement cognitive control (which supports fluid intelligence), and that it doesn’t do this alone.

Indeed, we found other independent factors that were important for predicting intelligence, suggesting there are several fundamental neural factors underlying intelligence. The global connectivity of this prefrontal region accounted for 10% of the variability in fluid intelligence, while activity in this region accounted (independently) for 5% of the variability, and overall gray matter volume accounted (again independently) for an additional 6.7% of the variance. Together, these three factors accounted for 26% of the variance in fluid intelligence across individuals.

There are several important questions that this study raises. For instance, does this region change its connectivity depending on the task being performed, as the “flexible hub” hypothesis would suggest? Are there other regions whose global (or local) connectivity contributes substantially to intelligence and cognitive control abilities? Finally, what other factors are there in the brain that might be able to predict fluid intelligence across individuals?

-MC

The evolutionary importance of rapid instructed task learning (RITL)

Sunday, January 23rd, 2011

We are rarely alone when learning something for the first time. We are social creatures, and whether it’s a new technology or an ancient tradition, we typically benefit from instruction when learning new tasks. This form of learning – in which a task is rapidly (within seconds) learned from instruction – can be referred to as rapid instructed task learning (RITL; pronounced “rittle”). Despite the fundamental role this kind of learning plays in our lives, it has been largely ignored by researchers until recently.

My Ph.D. dissertation investigated the evolutionary and neuroscientific basis of RITL.

RITL almost certainly played a tremendous role in shaping human evolution. The selective advantages of RITL for our species are clear: having RITL abilities allows us to partake in a giant web of knowledge shared with anyone willing to instruct us. We might have received instructions to avoid a dangerous animal we have never seen before (e.g., a large cat with a big mane), or instructions on how to make a spear and kill a lion with it. The possible scenarios in which RITL would have helped increase our chances of survival are virtually endless.

There are two basic forms of RITL. (more…)

Finding the most important brain regions

Tuesday, June 29th, 2010

When you type a search into Google it figures out the most important websites based in part on how many links each has from other websites. Taking up precious website space with a link is costly, making each additional link to a page a good indicator of importance.

We thought the same logic might apply to brain regions. Making a new brain connection (and keeping it) is metabolically and developmentally costly, suggesting that regions with many connections must be providing important enough functions to make those connections worth the sacrifice.

We developed two new metrics for quantifying the most connected—and therefore likely the most important—brain regions in a recently published study (Cole et al. (2010). Identifying the brain’s most globally connected regions, NeuroImage 49(4): 3132-3148).

We found that regions of two large-scale brain networks were among the top 5% of globally connected regions using both metrics (see figure above). The cognitive control network (CCN) is involved in attention, working memory, decision-making, and other important high-level cognitive processes (see Cole & Schneider, 2007). In contrast, the default-mode network (DMN) is typically anti-correlated with the CCN and is involved in mind-wandering, long-term memory retrieval, and self-reflection.

Needless to say, these networks have highly important roles! Without them we would have no sense of self-control (via the CCN) or even a sense of self to begin with (via the DMN).

However, there are other important functions (such as arousal, sleep regulation, breathing, etc.) that are not reflected here, most of which involve subcortical regions. These regions are known to project widely throughout the brain, so why aren’t they showing up?

It turns out that these subcortical regions only show up for one of the two metrics we used. This metric—unlike the other one—includes low-strength connections. Subcortical regions tend to be small and project weak connections all over the brain, such that only the metric including weak connections could pick them up.
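
To make the distinction concrete, here is a minimal Python sketch of the two kinds of connectivity metric. The data are random placeholders and the 0.3 cutoff is an assumed threshold for illustration; the published metrics were computed on real resting-state fMRI data.

```python
import numpy as np

# Minimal sketch: rank regions by global connectivity in two ways — one summing all
# connection strengths (weak connections included), and one counting only connections
# above a strength threshold (weak connections excluded).

rng = np.random.default_rng(4)
n_regions = 100
timeseries = rng.normal(size=(500, n_regions))        # placeholder resting-state data
fc = np.corrcoef(timeseries.T)
np.fill_diagonal(fc, 0)                               # ignore self-connections

weighted_degree = np.abs(fc).sum(axis=1)              # includes low-strength connections
thresholded_degree = (np.abs(fc) > 0.3).sum(axis=1)   # assumed threshold; strong connections only

top5_percent = int(np.ceil(n_regions * 0.05))
print(np.argsort(weighted_degree)[::-1][:top5_percent])      # hubs by the inclusive metric
print(np.argsort(thresholded_degree)[::-1][:top5_percent])   # hubs by the thresholded metric
```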

I recently found out that this article received the 2010 NeuroImage Editor’s Choice Award (Methods section). I was somewhat surprised by this, since I thought there wasn’t much interest in the study. When I looked up the most popular articles at NeuroImage, however, I found out it was the 7th most downloaded article from January to May 2010. Hopefully this interest will lead to some innovative follow-ups to try to understand what makes these brain regions so special!

-MWCole

The Cognitive Control Network

Sunday, October 7th, 2007

I recently published my first primary-author research study (Cole & Schneider, 2007).

The study used functional MRI to discover a network of brain regions responsible for conscious will (i.e., cognitive control). It also revealed the network’s specialized parts, which each uniquely contribute to creating the emergent property of conscious will.

I believe this research contributes substantially to our understanding of how we control our own thoughts and actions based on current goals. Much remains a mystery, but this study clearly shows the existence of a functionally integrated yet specialized network for cognitive control.

What is cognitive control? It is the set of brain processes necessary for goal-directed thought and action. Remembering a phone number before dialing requires cognitive control. Also, anything outside routine requires cognitive control (because it’s novel and/or conflicting with what you normally do). This includes, among other things, voluntarily shifting attention and making decisions.

What brain regions are involved? A mountain of evidence is accumulating that a common set of brain regions is involved in cognitive control. We looked for these regions specifically, and verified that they were active during our experiment [see top figure]. The brain regions are spread across the cortex, from front to back and on both sides. However, it’s not the whole brain: there are distinct parts that are involved in cognitive control and not other behavioral demands. (more…)

History’s Top Brain Computation Insights: Day 24

Wednesday, April 25th, 2007

Cognitive control network (Cole & Schneider, 2007)

24) Cognitive control processes are distributed within a network of distinct regions (Goldman-Rakic – 1988, Posner – 1990, Wager & Smith – 2004, Cole & Schneider – 2007)

Researchers investigating eye movements and attention recorded from different parts of the primate brain and found several regions showing very similar neural activity. Goldman-Rakic proposed the existence of a specialized network for the control of attention.

This cortical system consists of the lateral frontal cortex (fronto-polar, dorsolateral, frontal eye fields), medial frontal cortex (anterior cingulate, pre-SMA, supplementary eye fields), and posterior parietal cortex. Subcortically, dorsomedial thalamus and superior colliculus are involved, among others.

Many computational modelers emphasize the emergence of attention from the local organization of sensory cortex (e.g., local competition). However, when a shift in attention is task-driven (i.e., top-down) then it appears that a specialized system for attentional control drives activity in sensory cortex. Many properties of attention likely arise from the organization of sensory cortex, but empirical data indicate that this is not sufficient.

With the advent of neuroimaging in humans (PET and fMRI), Posner and colleagues found regions very similar to those reported by Goldman-Rakic. They found that some regions are related more to orienting to stimuli, while others are related more to cognitive control (i.e., controlled processing).

After many fMRI studies of cognitive control were published, Wager et al. performed a meta-analysis looking at most of this research. They found a set of cortical regions active in nearly all cognitive control tasks.

My own work with Schneider (in press) indicates that these regions form an innate network, which is better connected than the rest of cortex on average. We used resting state correlations of fMRI BOLD activity to determine this. This cognitive control network is involved in controlled processing in that it has greater activity early in practice relative to late in practice, and has greater activity for conflicting responses (e.g., the Stroop task).
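
As a rough illustration of this kind of resting-state analysis, here is a minimal Python sketch. It uses random placeholder data and an assumed set of region indices for the candidate network; the point is simply to compare average correlations within the candidate network against the average across all regions.

```python
import numpy as np

# Minimal sketch (placeholder data, assumed region labels): compare average resting-state
# correlations among a candidate network's regions to the average across all regions,
# i.e., is the candidate network better connected than cortex on average?

rng = np.random.default_rng(5)
n_regions, n_timepoints = 60, 300
network_regions = np.arange(6)                                # hypothetical control-network regions
rest_timeseries = rng.normal(size=(n_timepoints, n_regions))  # placeholder BOLD data

fc = np.corrcoef(rest_timeseries.T)
np.fill_diagonal(fc, np.nan)                                  # ignore self-connections

within_network = np.nanmean(fc[np.ix_(network_regions, network_regions)])
whole_cortex = np.nanmean(fc)
print(within_network, whole_cortex)
```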

Though these regions have similar responses, they are not redundant. Our study showed that lateral prefrontal cortex is involved in maintaining relevant task information, while medial prefrontal cortex is involved in preparing and making response decisions. In most cases these two cognitive demands are invoked at the same time; only by separating them in time were we able to show specialization within the cognitive control network. We expect that other regional specializations will be found with more work.

I'll be covering my latest study in more detail once it is published (it has been accepted for publication at NeuroImage and should be published soon). The above figure is from that publication. It lists the six regions within the human cognitive control network. These regions include dorsolateral prefrontal cortex (DLPFC), inferior frontal junction (IFJ), dorsal pre-motor cortex (dPMC), anterior cingulate / pre-supplementary motor area (ACC/pSMA), anterior insula cortex (AIC), and posterior parietal cortex (PPC).

A general computational insight arising from this work (starting with Goldman-Rakic) is that cortex is composed of specialized regions that form specialized networks. This new paradigm for viewing brain function weds the old warring concepts of localized specialization and distributed function.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

Neural Network “Learning Rules”

Thursday, March 15th, 2007

Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process. In this post, I'll introduce some notions of how neural networks can learn. Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability.

Let's begin with a theoretical question of general interest for the study of cognition: how can a neural system learn sequences, such as the actions required to reach a goal?

Consider a neuromodeler who hypothesizes that a particular kind of neural network can learn sequences. He might start his modeling study by "training" the network on a sequence. To do this, he stimulates (activates) some of its neurons in a particular order, representing objects on the way to the goal. 

After the network has been trained through multiple exposures to the sequence, the modeler can then test his hypothesis by stimulating only the neurons from the beginning of the sequence and observing whether the neurons in the rest of the sequence activate in order to finish it.

Successful learning in any neural network depends on how the connections between the neurons are allowed to change in response to activity. The manner of change is what the majority of researchers call a "learning rule". However, we will call it a "synaptic modification rule", because although the network learned the sequence, it is not clear that the *connections* between the neurons in the network "learned" anything in particular.

The particular synaptic modification rule selected is an important ingredient in neuromodeling because it may constrain the kinds of information the neural network can learn.

There are many categories of mathematical synaptic modification rules used to describe how synaptic strengths should be changed in a neural network. Some of these categories include: backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian.

  • Backpropagation of error states that connection strengths should change throughout the entire network in order to minimize the difference between the actual activity and the "desired" activity at the "output" layer of the network.
  • Correlative Hebbian states that any two interconnected neurons that are active at the same time should strengthen their connections, so that if one of the neurons is activated again in the future the other is more likely to become activated too.
  • Temporally-asymmetric Hebbian is described in more detail in the example below, but essentially emphasizes the importance of causality: if a neuron reliably fires before another, its connection to the other neuron should be strengthened. Otherwise, it should be weakened.
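
Here is a minimal Python sketch of what the correlative Hebbian and temporally-asymmetric Hebbian updates might look like for a single connection between two binary model neurons. The learning rate and the function signatures are assumptions for illustration, not equations from any particular published model.

```python
import numpy as np

# Minimal sketch (assumed parameter values) contrasting two of the rules above.
# `pre` and `post` are activity values (0 or 1) of two connected model neurons.

learning_rate = 0.1

def correlative_hebbian(weight, pre, post):
    # Strengthen the connection whenever both neurons are active at the same time.
    return weight + learning_rate * pre * post

def temporally_asymmetric_hebbian(weight, pre_before, post_now, post_before, pre_now):
    # Strengthen if the presynaptic neuron fired just before the postsynaptic neuron;
    # weaken if the order is reversed (a causality-sensitive rule).
    return weight + learning_rate * (pre_before * post_now - post_before * pre_now)

w = 0.5
w = correlative_hebbian(w, pre=1, post=1)                     # coincident activity -> stronger
w = temporally_asymmetric_hebbian(w, pre_before=1, post_now=1,
                                  post_before=0, pre_now=0)   # pre leads post -> stronger
print(w)
```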

Why are there so many different rules?  Some synaptic modification rules are selected because they are mathematically convenient.  Others are selected because they are close to currently known biological reality.  Most of the informative neuromodeling is somewhere in between.

An Example

Let's look at an example of a learning rule used in a neural network model that I have worked with: imagine you have a network of interconnected neurons that can either be active or inactive. If a neuron is active, its value is 1; otherwise its value is 0. (The use of 1 and 0 to represent simulated neuronal activity is only one of many ways to do so; this approach goes by the name "McCulloch-Pitts".)
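
Putting the pieces together, here is a minimal sketch of the sequence-learning scenario described earlier: a tiny network of binary (McCulloch-Pitts-style) neurons trained with a temporally-asymmetric Hebbian rule and then cued with the first item of the sequence. The network size, learning rate, and threshold are assumptions for the example, not the parameters of the model referred to in this post.

```python
import numpy as np

# Minimal sketch (assumed parameters): train a small McCulloch-Pitts-style network on a
# sequence with a temporally-asymmetric Hebbian rule, then cue it with the first item
# and check whether it replays the rest of the sequence.

n_neurons, learning_rate, threshold = 5, 0.5, 0.4
weights = np.zeros((n_neurons, n_neurons))
sequence = [0, 1, 2, 3, 4]                      # each sequence item activates one neuron

# Training: activate neurons in order; strengthen connections from earlier to later items
for _ in range(10):
    for earlier, later in zip(sequence[:-1], sequence[1:]):
        weights[earlier, later] += learning_rate          # pre fired before post -> strengthen
        weights[later, earlier] -= learning_rate * 0.5    # reverse order -> weaken

# Testing: stimulate only the first neuron and let activity propagate
activity = np.zeros(n_neurons)
activity[sequence[0]] = 1.0
replayed = [sequence[0]]
for _ in range(n_neurons - 1):
    activity = (weights.T @ activity > threshold).astype(float)  # binary (0/1) update
    active = np.flatnonzero(activity)
    if active.size == 0:
        break
    replayed.append(int(active[0]))

print(replayed)   # ideally recovers the trained sequence: [0, 1, 2, 3, 4]
```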

(more…)

The neural basis of preparation for willful action

Wednesday, February 7th, 2007

My latest scientific publication is entitled Selection and maintenance of stimulus–response rules during preparation and performance of a spatial choice-reaction task (authors: Schumacher, Cole, and D'Esposito). It is a study using functional MRI with humans to investigate how we prepare for and execute willful action.

In this post I'll attempt to translate the article's findings for both the layperson and the uninitiated scientist.

What is willful action?

Willful action is a set of processes in direct contrast to automatic, habitual processes. We can say with certainty that you are using willful action minimally when you are resting, brushing your teeth, driving to work for the thousandth time, or performing any task that is effortless.

Willful action is necessary during two types of situations.

First, when you are attempting to do something for the first time (i.e., when you've had little practice at it), these processes are necessary for accurate performance. Think of the immense amount of effort required while learning to drive. At first willful action is necessary, but later this need subsides.

Second, when two potential actions conflict in the brain, willful action is necessary to overcome the incorrect action. The conflicting action may originate in an inappropriate desire, a habitual action that is no longer appropriate, or a natural tendency to perform one action rather than another.

In this latest publication we have used this last case (conflict due to a natural tendency to respond in a certain way) to investigate willful action.
(more…)