Archive for the ‘General Neuroscience’ Category

The brain’s network switching stations for adaptive behavior

Friday, August 16th, 2013

I’m excited to announce that my latest scientific publication – “Multi-task connectivity reveals flexible hubs for adaptive task control” – was just published in Nature Neuroscience. The paper reports on a project I (along with my co-authors) have been working on for over a year. The goal was to use network science to better understand how human intelligence happens in the brain – specifically, our ability to rapidly adapt to new circumstances, as when learning to perform a task for the first time (e.g., how to use new technology).

The project built on our previous finding (from last year) showing that the amount of connectivity of a well-connected “hub” brain region in prefrontal cortex is linked to human intelligence. That study suggested (indirectly) that there may be hub regions that are flexible – capable of dynamically updating what brain regions they communicate with depending on the current goal.

Typical methods were not capable of more directly testing this hypothesis, however, so we took the latest functional connectivity approaches and pushed the limit, going well beyond the previous paper and what others have done in this area. The key innovation was to look at how functional connectivity changes across dozens of distinct task states (specifically, 64 tasks per participant). This allowed us to look for flexible hubs in the fronto-parietal brain network.

We found that this network contained regions that updated their global pattern of functional connectivity (i.e., inter-regional correlations) depending on which task was being performed.

In other words, the fronto-parietal network changed its brain-wide functional connectivity more than any other major brain network, and this updating appeared to code which task was being performed.
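
The basic logic can be sketched in a few lines. This is only a toy illustration, not the actual analysis pipeline from the paper: the region and task counts are placeholders, the "data" are random, and the variability measure is just one simple way to quantify connectivity updating across task states:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_tasks, n_timepoints = 20, 64, 100
hub = 0  # index of a candidate flexible-hub region (arbitrary here)

def fc_pattern(ts, region):
    """One region's whole-brain functional connectivity: the Pearson
    correlation of its time series with every other region's."""
    corr = np.corrcoef(ts.T)                # regions x regions matrix
    return np.delete(corr[region], region)  # drop the self-correlation

# One synthetic time series (time x regions) per task state.
patterns = np.array([
    fc_pattern(rng.standard_normal((n_timepoints, n_regions)), hub)
    for _ in range(n_tasks)
])                                          # tasks x (regions - 1)

# How much the region's global connectivity pattern changes across tasks:
# a "flexible hub" should show high variability on a measure like this.
variability = patterns.std(axis=0).mean()
```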

What’s the significance?

These results suggest a potential mechanism for adaptive cognitive abilities in humans:
Prefrontal and parietal cortices form a network with extensive connections projecting to other functionally specialized networks throughout the brain. Incoming instructions activate component representations – coded as neuronal ensembles with unique connectivity patterns – that produce a unique global connectivity pattern throughout the brain. Since these component representations are interchangeable it’s possible to implement combinations of instructions never seen before, allowing for rapid learning of new tasks from instructions.

Important points not mentioned or not emphasized in the journal article:

This study was highly hypothesis-driven, as it tested some predictions of our recent compositional theory of prefrontal cortex function (extended to include parietal cortex as well). That theory was first proposed earlier this year in Cole, Laurent, & Stocco (2013).

Also, as described in our online supplemental FAQ for the paper, we identified ‘adaptive task control’ flexible hubs, but there may be other kinds of flexible hubs in the brain. For instance, there may be flexible hubs for stable task control (maintaining task information via connectivity patterns over extended periods of time, only updating when necessary).

See our online supplemental FAQ for more important points that were not mentioned in the journal article. Additional information is also available from a press release from Washington University.

–MWCole

Having more global brain connectivity with some regions enhances intelligence

Friday, July 6th, 2012

A new study – titled “Global Connectivity of Prefrontal Cortex Predicts Cognitive Control and Intelligence” – was published just last week. In it, my co-authors and I describe our research showing that connectivity with a particular part of the prefrontal cortex can predict how intelligent someone is.

We measured intelligence using “fluid intelligence” tests, which measure your ability to solve novel visual puzzles. It turns out that scores on these tests correlate with important life outcomes like academic and job success. So, finding a neuroscientific factor underlying fluid intelligence might have some fairly important implications.

It turns out that it’s relatively unclear what fluid intelligence tests actually measure (what, exactly, helps you solve novel puzzles?), so we also measured a more basic “cognitive control” ability thought to be related to fluid intelligence – working memory, the ability to maintain and manipulate information in mind in a goal-directed manner.

Overall (i.e., global) brain connectivity with a part of left lateral prefrontal cortex (see figure above) could predict both fluid intelligence and cognitive control abilities.

What does this mean? One possibility is that this prefrontal region is a “flexible hub” that uses its extensive brain-wide connectivity to monitor and influence other brain regions in a goal-directed manner. This may sound a bit like a “homunculus” (little man) that single-handedly implements all brain functions. In fact, we’re suggesting it acts more like the feedback control systems often used in engineering: it only helps implement cognitive control (which supports fluid intelligence), and it doesn’t do this alone.

Indeed, we found other independent factors that were important for predicting intelligence, suggesting there are several fundamental neural factors underlying intelligence. The global connectivity of this prefrontal region could account for 10% of the variability in fluid intelligence, while activity in this region accounts (independently) for 5% of the variability, and overall gray matter volume accounts (again independently) for an additional 6.7% of the variance. Together, these three factors accounted for 26% of the variance in fluid intelligence across individuals.
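
As a sketch of how such independent variance contributions can be computed, here is a hierarchical-regression toy example on simulated data. The predictor names and effect sizes are made up for illustration; only the logic (a predictor's unique contribution = full-model R² minus the R² of the model without it) reflects the general approach:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # simulated participants

# Three uncorrelated simulated predictors, loosely analogous to global
# connectivity, regional activity, and gray matter volume (all made up).
gbc = rng.standard_normal(n)
activity = rng.standard_normal(n)
volume = rng.standard_normal(n)
intelligence = 0.5 * gbc + 0.35 * activity + 0.4 * volume + rng.standard_normal(n)

def r_squared(y, predictors):
    """Proportion of variance in y explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = r_squared(intelligence, [gbc, activity, volume])
without_gbc = r_squared(intelligence, [activity, volume])
unique_gbc = full - without_gbc  # variance uniquely attributable to gbc
```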

There are several important questions that this study raises. For instance, does this region change its connectivity depending on the task being performed, as the “flexible hub” hypothesis would suggest? Are there other regions whose global (or local) connectivity contributes substantially to intelligence and cognitive control abilities? Finally, what other factors are there in the brain that might be able to predict fluid intelligence across individuals?

-MC

The evolutionary importance of rapid instructed task learning (RITL)

Sunday, January 23rd, 2011

We are rarely alone when learning something for the first time. We are social creatures, and whether it’s a new technology or an ancient tradition, we typically benefit from instruction when learning new tasks. This form of learning–in which a task is rapidly (within seconds) learned from instruction–can be referred to as rapid instructed task learning (RITL; pronounced “rittle”). Despite the fundamental role this kind of learning plays in our lives, it has been largely ignored by researchers until recently.

My Ph.D. dissertation investigated the evolutionary and neuroscientific basis of RITL.

RITL almost certainly played a tremendous role in shaping human evolution. The selective advantages of RITL for our species are clear: having RITL abilities allows us to partake in a giant web of knowledge shared with anyone willing to instruct us. We might have received instructions to avoid a dangerous animal we have never seen before (e.g., a large cat with a big mane), or instructions on how to make a spear and kill a lion with it. The possible scenarios in which RITL would have helped increase our chances of survival are virtually endless.

There are two basic forms of RITL. (more…)

Finding the most important brain regions

Tuesday, June 29th, 2010

When you type a search into Google it figures out the most important websites based in part on how many links each has from other websites. Taking up precious website space with a link is costly, making each additional link to a page a good indicator of importance.

We thought the same logic might apply to brain regions. Making a new brain connection (and keeping it) is metabolically and developmentally costly, suggesting that regions with many connections must be providing important enough functions to make those connections worth the sacrifice.

We developed two new metrics for quantifying the most connected—and therefore likely the most important—brain regions in a recently published study (Cole et al. (2010). Identifying the brain’s most globally connected regions, NeuroImage 49(4): 3132-3148).

We found that two large-scale brain networks were among the top 5% of globally connected regions using both metrics (see figure above). The cognitive control network (CCN) is involved in attention, working memory, decision-making and other important high-level cognitive processes (see Cole & Schneider, 2007). In contrast, the default-mode network (DMN) is typically anti-correlated with the CCN and is involved in mind-wandering, long-term memory retrieval, and self-reflection.

Needless to say, these networks have highly important roles! Without them we would have no sense of self-control (via the CCN) or even a sense of self to begin with (via the DMN).

However, there are other important functions (such as arousal, sleep regulation, breathing, etc.) that are not reflected here, most of which involve subcortical regions. These regions are known to project widely throughout the brain, so why aren’t they showing up?

It turns out that these subcortical regions show up for only one of the two metrics we used. This metric—unlike the other one—includes low-strength connections. Subcortical regions tend to be small and to project weak connections all over the brain, such that only the metric including weak connections could identify them.
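
The contrast between the two kinds of metric can be sketched with a toy calculation. This is only a schematic of the general idea (mean connection strength versus thresholded degree), using synthetic data and an arbitrary threshold, not the exact metrics from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 50, 200

# Synthetic resting-state time series (time x regions).
ts = rng.standard_normal((n_timepoints, n_regions))
corr = np.corrcoef(ts.T)   # regions x regions connectivity matrix
np.fill_diagonal(corr, 0)  # ignore self-connections

# Metric A: mean connection strength, counting weak and strong alike.
global_strength = np.abs(corr).mean(axis=1)

# Metric B: count of connections above a strength threshold, which discards
# the weak, diffuse connections that small subcortical regions project.
threshold = 0.1  # illustrative cutoff
degree = (np.abs(corr) > threshold).sum(axis=1)

# Top 5% most-connected regions under each metric.
n_top = max(1, n_regions // 20)
top_strength = np.argsort(global_strength)[-n_top:]
top_degree = np.argsort(degree)[-n_top:]
```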

I recently found out that this article received the 2010 NeuroImage Editor’s Choice Award (Methods section). I was somewhat surprised by this, since I thought there wasn’t much interest in the study. When I looked up the most popular articles at NeuroImage, however, I found out it was the 7th most downloaded article from January to May 2010. Hopefully this interest will lead to some innovative follow-ups to try to understand what makes these brain regions so special!

-MWCole

Cingulate Cortex and the Evolution of Human Uniqueness

Thursday, November 12th, 2009

Figuring out how the brain decides between two options is difficult. This is especially true for the human brain, whose activity is typically accessible only via the small and occasionally distorted window provided by new imaging technologies (such as functional MRI (fMRI)).

In contrast, recordings from monkey brains are typically more accurate, since the skull can be opened and brain activity recorded directly.

Despite this, if you were to look just at the human research, you would consider it a fact that the anterior cingulate cortex (ACC) increases its activity during response conflict. The thought is that this brain region detects that you are having trouble making decisions, and signals other brain regions to pay more attention.

If you were to only look at research with monkeys, however, you would think otherwise. No research with macaque monkeys (the ‘non-human primate’ typically used in neuroscience research) has found conflict activity in ACC.

My most recent publication looks at two possible explanations for this discrepancy: 1) Differences in methods used to study these two species, and 2) Fundamental evolutionary differences between the species.

(more…)

A Meta-Meta-Analysis of Brain Functions

Friday, October 17th, 2008

Thousands of brain imaging studies are published each year. A subset of these studies are replications, or slight variations, of previous studies. Attempting to come to a solid conclusion based on the complex brain activity patterns reported by all these replications can be daunting. Meta-analysis is one tool that has been used to make sense of it all.

Meta-analyses take locations of brain activity in published scientific papers and pool them together to see if there is any consistency.

This is typically done using a standardized brain that all the studies fit their data to (e.g., Talairach). Activation coordinates are then placed on a template brain as dots. When dots clump together, the author can claim that some consistency is present across studies. See the first figure for an example of this kind of result.

More sophisticated ways of doing this have emerged, however. One of these advanced methods is called activation likelihood estimation (ALE). This method was developed by Peter Turkeltaub et al. (in conjunction with Jason Chein and Julie Fiez) in 2002 and extended by Laird et al. in 2005.

ALE computes the probability of each part of the brain being active across studies. This is much more powerful than simple point-plotting because it takes much of the guess-work out of deciding if a result is consistent across studies or not.
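
A drastically simplified version of the idea can be sketched as follows. Real ALE uses empirically derived, study-specific smoothing kernels and permutation-based significance testing; this toy version just unions Gaussian blobs within each study and combines studies under independence, with made-up coordinates and kernel width:

```python
import numpy as np

# Reported activation foci from hypothetical studies, in voxel coordinates.
foci_by_study = [
    [(10, 12, 8), (11, 13, 8)],
    [(10, 11, 9)],
    [(30, 5, 20)],
]

shape = (40, 40, 40)
sigma = 2.0  # smoothing kernel width in voxels (illustrative)
grid = np.indices(shape).reshape(3, -1).T  # all voxel coordinates

def activation_map(foci):
    """Probability that each voxel is active in one study:
    the union of Gaussian blobs centred on that study's foci."""
    p = np.zeros(len(grid))
    for focus in foci:
        d2 = ((grid - np.array(focus)) ** 2).sum(axis=1)
        p = 1 - (1 - p) * (1 - np.exp(-d2 / (2 * sigma ** 2)))
    return p

# ALE value: probability that a voxel is active in at least one study,
# combining the per-study maps under an independence assumption.
ale = np.ones(len(grid))
for foci in foci_by_study:
    ale *= 1 - activation_map(foci)
ale = (1 - ale).reshape(shape)
```

Voxels near the clustered foci of the first two studies end up with high ALE values, while isolated foci contribute much less.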

(more…)

Grand Challenges of Neuroscience: Day 6

Monday, July 21st, 2008

Topic 6: Causal Understanding


Causal understanding is an important part of human cognition.  How do we understand that a particular event or force has caused another event?  How do we realize that inserting coins into a soda machine results in a cool beverage appearing below?  And ultimately, how do we understand people’s reactions to events?

The NSF workshop panel on the Grand Challenges of Mind and Brain highlighted the question of ‘causal understanding’ as their 6th research topic.   (This was the final topic in their report.)

In addition to studying causal understanding, it is probably just as important to study causal misunderstanding: that is, why do individuals infer the wrong causes for events, or the wrong effects from causes? Studying the errors we make in causal inference and understanding may help us discover the underlying neural mechanisms.

It probably isn’t too difficult to imagine that progress on causal understanding, and improvements in our ability to be correct about causation, will be important for the well-being of humanity.  But what kinds of experiments and methods could be used to study the human brain mechanisms of causal understanding?

(more…)

Joaquin Fuster on Cortical Dynamics

Saturday, April 5th, 2008

I recently watched this talk (below) by Joaquin Fuster. His theories provide a good integration of cortical functions and distributed processing in working and long-term memory. He also has some cool videos of likely network interactions across cortex (in real time) in his talk.

Here is a diagram of Dr. Fuster’s view of cortical hierarchies:

Joaquin Fuster’s talk:

Link to Joaquin Fuster’s talk [Google Video]

Here is an excerpt from Dr. Fuster’s amazing biography:
(more…)

Combining Simple Recurrent Networks and Eye-Movements to study Language Processing

Saturday, April 5th, 2008

BBS image of GLENMORE model

Modern technologies allow eye movements to be used as a tool for studying language processing during tasks such as natural reading. Saccadic eye movements during reading turn out to be highly sensitive to a number of linguistic variables. A number of computational models of eye movement control have been developed to explain how these variables affect eye movements. Although these models have focused on relatively low-level cognitive, perceptual and motor variables, there has been a concerted effort in the past few years (spurred by psycholinguists) to extend these computational models to syntactic processing.

During a modeling symposium at ECEM2007 (the 14th European Conference on Eye Movements), Dr. Ronan Reilly presented a first attempt to take syntax into account in his eye-movement control model (GLENMORE; Reilly & Radach, Cognitive Systems Research, 2006). (more…)

Magnetoencephalography

Monday, August 20th, 2007

MEG sensors

In the dark confines behind our eyes lies flesh full of mysterious patterns, constituting our hopes, desires, knowledge, and everything else fundamental to who we are. Since at least the time of Hippocrates we have wondered about the nature of this flesh and its functions. Finally, after thousands of years of wondering we are now able to observe the mysterious patterns of the living brain, with the help of neuroimaging.

First, electroencephalography (EEG) showed us that these brain patterns have some relation in time to our behaviors. EEG showed us when things happen in the brain. More recent technologies such as functional magnetic resonance imaging (fMRI) then showed us where things happen in the brain.

It has been suggested that true insights into these brain patterns will arise when we can understand the patterns’ complex spatio-temporal nature. Thus, only with sufficient spatial and temporal resolution will we be able to decipher the mechanisms behind the brain patterns, and as a result the mechanisms behind ourselves.

Magnetoencephalography (MEG) may help to provide such insight. This method uses superconducting sensors to detect subtle changes in the magnetic fields surrounding the head. These changes reflect the patterns of neural activity as they occur in the brain. Unlike fMRI (and similar methods), MEG can measure neural activity at a very high temporal resolution (>1 kHz). In this respect it is similar to EEG. However, unlike EEG, MEG patterns are not distorted by the skull and scalp, thus providing an unprecedented level of spatio-temporal resolution for observing the neural activity underlying our selves.

Though MEG has been around for several decades, new advances in the technology are providing unprecedented abilities to observe brain activity. Of course, the method is not perfect by any means. As always, it is complementary to other methods, and should be used in conjunction with other noninvasive (and, where appropriate, invasive) neuroimaging methods.

MEG relies on something called a superconducting quantum interference device (SQUID). Many of these SQUIDs are built into a helmet, which is cooled with liquid helium and placed around the head. Extremely small magnetic fields created by neural activity can then be detected with these SQUIDs and recorded to a computer for later analysis.

I recently got back from a trip to Finland, where I learned  a great deal about MEG. I’m planning to use the method to observe the flow of information among brain regions during cognitive control tasks involving decision making, learning, and memory. I’m sure news of my work in this area will eventually make it onto this website.

-MC

Two Universes, Same Structure

Tuesday, June 5th, 2007

Image of galaxies

This image is not of a neuron.

This image is of the other universe; the one outside our heads.

It depicts the “evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years”, as computed by the Millennium Simulation. (Click the image above for a better view.)

The next image, of a neuron, is included for comparison.

Image of a neuron

It is tempting to wax philosophical on this structure equivalence. How is it that both the external and internal universes can have such similar structure, and at such vastly different physical scales?

If we choose to go philosophical, we may as well ponder something even more fundamental: Why is it that all complex systems seem to have a similar underlying network-like structure?

(more…)

Grand Challenges of Neuroscience: Day 2

Wednesday, May 2nd, 2007

Topic 2: Conflict and Cooperation

Generally, cognitive neuroscience aims to explain how mental processes such as believing, knowing, and inferring arise in the brain and affect behavior.  Two behaviors that have important effects on the survival of humans are cooperation and conflict. 

According to the NSF committee convened last year, conflict and cooperation is an important focus area for future cognitive neuroscience work.  Although research in this area has typically been the domain of psychologists, it seems that the time is ripe to apply findings from neuroscience to ground psychological theories in the underlying biology.

Neuroscience has produced a large amount of information about the brain regions that are relevant to social interactions.  For example, the amygdala has been shown to be involved in strong emotional responses.  The "mirror" neuron system in the frontal lobe allows us to put ourselves in someone else's shoes by allowing us to understand their actions as though they were our own.  Finally, the superior temporal gyrus and orbitofrontal cortex, normally involved in language and reward respectively, have also been shown to be involved in social behaviors.

Experiments?

The committee has left it up to us to come up with a way to study these phenomena! How can we study conflict and cooperation from a cognitive neuroscience perspective?

At least two general approaches come to mind. The first is fMRI studies in which social interactions are simulated (or carried out remotely) over a computer link to the experiment participant.  A range of studies of this sort have recently begun to appear investigating trust and decision-making in social contexts.

The second general approach that comes to mind is that of  using neurocomputational simulations of simple acting organisms with common or differing goals.  Over the past few years, researchers have been carrying out studies with multiple interacting "agents" that "learn" through the method of Reinforcement Learning. 

Reinforcement Learning is an artificial intelligence algorithm which allows "agents" to develop behaviors through trial-and-error in an attempt to meet some goal which provides reward in the form of positive numbers.  Each agent is defined as a small program with state (e.g., location, sensory input) and a memory or "value function" which can keep track  of how much numerical reward it expects to obtain by choosing a possible action.

Although normally thought to be of interest only to computer scientists, Reinforcement Learning has recently attracted the attention of cognitive neuroscientists because of emerging evidence that something like it might be used in the brain.

By providing these agents with a goal that can only be achieved through some measure of cooperation, or under some pressure, issues of conflict and cooperation can be studied in a perfectly controlled computer simulation environment.
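
As a minimal sketch of this second approach, here are two stateless Reinforcement Learning "agents" whose shared goal pays off only when both cooperate. The action set, reward structure, learning rate, and exploration rate are all arbitrary choices for illustration, not a model from any particular study:

```python
import random

random.seed(0)
ACTIONS = ("cooperate", "defect")
alpha, epsilon = 0.1, 0.1  # learning rate and exploration rate

# One value function per agent: the expected reward for each action.
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]

def choose(values):
    """Epsilon-greedy action selection from a value table."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(values, key=values.get)

for step in range(5000):
    acts = [choose(q[i]) for i in range(2)]
    # The shared goal pays off only if both agents cooperate.
    reward = 1.0 if acts == ["cooperate", "cooperate"] else 0.0
    for i in range(2):
        # Trial-and-error update toward the obtained reward.
        q[i][acts[i]] += alpha * (reward - q[i][acts[i]])
```

In this toy environment both agents come to value cooperation over defection, since only coordinated action is ever rewarded.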

-PL 

Grand Challenges of Neuroscience: Day 1

Monday, April 30th, 2007

Following up on MC's posts about the significant insights in the history of neuroscience, I'll now take Neurevolution for a short jaunt into neuroscience's potential future.

In light of recent advances in technologies and methodologies applicable to neuroscience research, the National Science Foundation last summer released a document on the "Grand Challenges of Neuroscience".  These grand challenges were identified by a committee of leading members of the cognitive neuroscience community.

The document, available at http://www.nsf.gov/sbe/grand_chall.pdf, describes six domains of research the committee deemed to be important for progress in understanding the relationship between mind and brain.

Over the next few posts, I will discuss each of the research domains and explain in layperson's terms why these questions are interesting and worth pursuing.  I'll also describe potential experimental approaches to address these questions in a cognitive neuroscience framework.

Topic 1:  "Adaptive Plasticity"

One research topic brought up by the committee was that of adaptive plasticity.  In this context, plasticity refers to the idea that the connections in the brain, and the behavior governed by the brain, can be changed through experience and learning.  

Learning allows us to adapt to new circumstances and environments.  Arguably, understanding how we learn and how to improve learning could be one of the greatest contributions of neuroscience.

Although it is widely believed that memory is based on the synaptic changes that occur during long-term potentiation and long-term depression (see our earlier post), this has not been conclusively shown!

What has been shown is that drugs that prevent synaptic changes also prevent learning.  However, that finding only demonstrates a correlation between synaptic change and memory formation,  not causation. (For example, it is possible that those drugs are interfering with some other process that truly underlies memory.)

The overarching question the committee raises is: What are the rules and principles of neural plasticity that implement [the] diverse forms of memory?

This question aims to quantify the exact relationships between changes at the neuronal level and at the level of behavior.  For instance, do rapid changes at the synapse reflect rapid learning?  And, how do the physical limitations on the changes at the neuronal level relate to cognitive limitations at the behavioral level?

Experiments?
My personal opinion is that the answers to these questions will be obtained through new experiments that either implant new memories or alter existing ones (e.g., through electrical stimulation protocols). 

There is every indication that experimenters will soon be able to select and stimulate particular cells in an awake, behaving animal to alter the strength of the connection between those cells.  The experimenters can then test the behavior of the animals to see if their memory for the association that might be represented by that connection has been altered.

-PL 

History’s Top Insights Into Brain Computation

Sunday, April 29th, 2007

This post is the culmination of a month-long chronicling of the major brain computation insights of all time.

Some important insights were certainly left out, so feel free to add comments with your favorites.

Below you will find all 26 insights listed with links to their entries. At the end is the summary of the insights in two (lengthy) sentences.

1) The brain computes the mind (Hippocrates- 460-379 B.C.)

2)  Brain signals are electrical (Galvani – 1791, Rolando – 1809)

3)  Functions are distributed in the brain (Flourens – 1824, Lashley – 1929)

4) Functions can be localized in the brain (Bouillaud – 1825, Broca – 1861, Fritsch & Hitzig – 1870)

5) Neurons are fundamental units of brain computation (Ramon y Cajal – 1889)

6) Neural networks consist of excitatory and inhibitory neurons connected by synapses (Sherrington – 1906)

7) Brain signals are chemical (Dale – 1914, Loewi – 1921)

8) Reward-based reinforcement learning can explain much of behavior (Skinner – 1938, Thorndike – 1911, Pavlov – 1905)

9) Convergence and divergence between layers of neural units can perform abstract computations (Pitts & McCulloch – 1947)

10) The Hebbian learning rule: 'Neurons that fire together wire together' [plus corollaries] (Hebb, 1949)

11) Action potentials, the electrical events underlying brain communication, are governed by ion concentrations and voltage differences mediated by ion channels (Hodgkin & Huxley – 1952)

12) Hippocampus is necessary for episodic memory formation (Milner – 1953)

13) Larger cortical space is correlated with greater representational resolution; memories are stored in cortex (Penfield – 1957)

14) Neocortex is composed of columnar functional units (Mountcastle – 1957, Hubel & Wiesel – 1962)

15) Consciousness depends on cortical communication; the cortical hemispheres are functionally specialized (Sperry & Gazzaniga – 1969)

16) Critical periods of cortical development via competition (Hubel & Wiesel – 1970)

17) Reverberatory activity in lateral prefrontal cortex maintains memories and attention over short periods (Fuster – 1971, Jacobsen – 1936, Goldman-Rakic – 2000)

18) Behavior exists on a continuum between controlled and automatic processing (Schneider & Shiffrin – 1977)

19) Neural networks can self-organize via competition (Grossberg – 1978, Kohonen – 1981)

20) Spike-timing dependent plasticity: Getting the brain from correlation to causation (Levy – 1983, Sakmann – 1994, Bi & Poo – 1998, Dan – 2002)

21) Parallel and distributed processing across many neuron-like units can lead to complex behaviors (Rumelhart & McClelland – 1986, O'Reilly – 1996)

22) Recurrent connectivity in neural networks can elicit learning and reproduction of temporal sequences (Jordan – 1986, Elman – 1990, Schneider – 1991)

23) Motor cortex is organized by movement direction (Schwartz  & Georgopoulos – 1986, Schwartz – 2001)

24) Cognitive control processes are distributed within a network of distinct regions (Goldman-Rakic – 1988, Posner – 1990, Wager & Smith – 2004, Dosenbach et al. – 2006, Cole & Schneider – 2007)

25) The dopamine system implements a reward prediction error algorithm (Schultz – 1996, Sutton – 1988)

26) Some complex object categories, such as faces, have dedicated areas of cortex for processing them, but are also represented in a distributed fashion (Kanwisher – 1997, Haxby – 2001)

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized and/or overlapping distributed networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, dopamine signals for reinforcement learning, and recurrent connectivity for sequential learning.

-MC 

History’s Top Brain Computation Insights: Day 11

Thursday, April 12th, 2007

Neuron showing sodium and potasium concentration changes11) Action potentials, the electrical events underlying brain communication, are governed by ion concentrations and voltage differences mediated by ion channels (Hodgkin & Huxley – 1952)

Hodgkin & Huxley developed the voltage clamp, which allows ion concentrations in a neuron to be measured with the voltage constant. Using this device, they demonstrated changes in ion permeability at different voltages. Their mathematical model of neuron function, based on the squid giant axon, postulated the existence of ion channels governing the action potential (the basic electrical signal of neurons). Their model has been verified, and is amazingly consistent across brain areas and species.

You can explore the Hodgkin & Huxley model by downloading Dave Touretzky's HHsim, a computational model implementing the Hodgkin & Huxley equations.
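
For the curious, the model itself can be sketched in a few dozen lines. This is a minimal forward-Euler integration using the standard modern parameterization (resting potential near -65 mV) and a constant injected current; HHsim implements the model far more completely:

```python
import numpy as np

# Hodgkin-Huxley squid-axon parameters (modern -65 mV resting convention).
C = 1.0                                 # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials, mV

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration of the HH equations under constant current."""
    steps = int(t_max / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
    trace = np.empty(steps)
    for i in range(steps):
        # Voltage-dependent gating rate constants (1/ms).
        a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        b_m = 4.0 * np.exp(-(V + 65) / 18)
        a_h = 0.07 * np.exp(-(V + 65) / 20)
        b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
        a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        b_n = 0.125 * np.exp(-(V + 65) / 80)
        # Ionic currents through the three channel types.
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        # Euler updates for the gating variables and membrane voltage.
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        trace[i] = V
    return trace

v = simulate()
```

With this level of injected current the simulated membrane fires repetitive action potentials, each overshooting 0 mV, just as Hodgkin & Huxley's equations predict for the squid giant axon.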

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC