Frontoparietal cortex: The immune system of the mind

The frontoparietal control system is to the mind what the immune system is to the body. It may oversimplify the situation, but we’re finding it’s a useful metaphor nonetheless. Indeed, we’ve just published a new theory paper explaining that there is already an avalanche of evidence supporting this metaphor. Even though much work is left …

The evolutionary importance of rapid instructed task learning (RITL)

We are rarely alone when learning something for the first time. We are social creatures, and whether it's a new technology or an ancient tradition, we typically benefit from instruction when learning new tasks. This form of learning, in which a task is rapidly (within seconds) learned from instruction, can be referred to as rapid instructed task …

Grand Challenges of Neuroscience: Day 6

Topic 6: Causal Understanding Causal understanding is an important part of human cognition. How do we understand that a particular event or force has caused another event? How do we realize that inserting coins into a soda machine results in a cool beverage appearing below? And ultimately, how do we understand people's reactions to events? The …

A Brief Introduction to Reinforcement Learning

Computational models that are implemented, i.e., written out as equations or software, are an increasingly important tool for the cognitive neuroscientist.  This is because implemented models are, effectively, hypotheses that have been worked out to the point where they make quantitative predictions about behavior and/or neural activity. In earlier posts, we outlined two computational models …
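As a concrete illustration of an implemented model that makes quantitative predictions, here is a minimal sketch of a temporal-difference (TD) value update, one of the core reinforcement-learning rules. The states, reward, and parameter values are hypothetical, chosen only to show the update in action; they are not taken from the post.

```python
# Minimal sketch of a TD(0) update: move V(state) toward
# reward + gamma * V(next_state). All values here are illustrative.

def td_update(value, state, next_state, reward, alpha=0.5, gamma=0.9):
    """Apply one temporal-difference update to the value table in place."""
    target = reward + gamma * value[next_state]
    value[state] += alpha * (target - value[state])
    return value

# A two-state chain: arriving at B from A yields reward 1.0.
V = {"A": 0.0, "B": 0.0}
V = td_update(V, "A", "B", reward=1.0)
print(V["A"])  # 0.5: V(A) moved halfway toward its target of 1.0
```

Because the rule is written out this precisely, it predicts an exact number for each trial, which is what makes implemented models testable hypotheses.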

Two Universes, Same Structure

This image is not of a neuron. This image is of the other universe; the one outside our heads. It depicts the “evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years”, as computed by the Millennium Simulation. (Click the image above for a better view.) The next image, …

Grand Challenges of Neuroscience: Day 3

Topic 3: Spatial Knowledge Animal studies have shown that the hippocampus contains special cells called “place cells”.  These place cells are interesting because their activity seems to indicate not what the animal sees, but rather where the animal is in space as it runs around in a box or in a maze. (See the four …

History’s Top Brain Computation Insights: Day 23

23) Motor cortex is organized by movement direction (Schwartz & Georgopoulos – 1986, Schwartz – 2001) Penfield had shown that motor cortex is organized in a somatotopic map. However, it was unclear how individual neurons are organized. What does each neuron's activity represent? The final location of a movement, or the direction of that movement? …
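The directional coding described here can be sketched with a toy population-vector decoder in the style of this work: each neuron has a preferred direction and cosine tuning, and the movement direction is read out by summing preferred directions weighted by firing rate. The neuron count, baseline, and gain below are hypothetical.

```python
import math

# Toy population-vector decoder: four cosine-tuned neurons with preferred
# directions spaced around the circle. All parameter values are illustrative.
preferred = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

def rates(movement_angle, baseline=10.0, gain=5.0):
    """Cosine tuning: rate peaks when movement matches the preferred direction."""
    return [baseline + gain * math.cos(movement_angle - p) for p in preferred]

def population_vector(r, baseline=10.0):
    """Each neuron 'votes' with its preferred direction, weighted by rate."""
    x = sum((ri - baseline) * math.cos(p) for ri, p in zip(r, preferred))
    y = sum((ri - baseline) * math.sin(p) for ri, p in zip(r, preferred))
    return math.atan2(y, x)

decoded = population_vector(rates(math.pi / 4))
print(round(decoded, 3))  # 0.785, i.e. pi/4: the movement direction is recovered
```

The point of the insight survives even in this toy: no single neuron encodes the direction, but the population as a whole does.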

History’s Top Brain Computation Insights: Day 22

22) Recurrent connectivity in neural networks can elicit learning and reproduction of temporal sequences (Jordan – 1986, Elman – 1990, Schneider – 1991) Powerful learning algorithms such as Hebbian learning, self-organizing maps, and backpropagation of error illustrated how categorization and stimulus-response mapping might be learned in the brain. However, it remained unclear how sequences and …
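The mechanism behind this insight can be sketched in a few lines: in an Elman-style simple recurrent network, hidden activity is copied into "context" units at each time step, so the next hidden state depends on sequence history. The weights below are fixed, illustrative values; a real SRN learns them (e.g., by backpropagation).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One input, one hidden, and one output unit; hand-picked illustrative weights.
W_IN, W_CTX, W_OUT = 2.0, 1.5, 3.0

def srn_run(sequence):
    """Forward pass of a minimal Elman network over an input sequence."""
    context, outputs = 0.0, []
    for x in sequence:
        hidden = sigmoid(W_IN * x + W_CTX * context)
        outputs.append(sigmoid(W_OUT * hidden))
        context = hidden  # copy hidden -> context for the next time step
    return outputs

# The same input (1) produces different outputs at different positions,
# because the context units carry the history of the sequence.
out = srn_run([1, 1])
print(out[1] > out[0])  # True
```

That position-dependence is exactly what makes recurrent connectivity suitable for learning and reproducing temporal sequences.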

History’s Top Brain Computation Insights: Day 21

21) Parallel and distributed processing across many neuron-like units can lead to complex behaviors (Rumelhart & McClelland – 1986, O'Reilly – 1996) Pitts & McCulloch provided amazing insight into how brain computations take place. However, their two-layer perceptrons were limited. For instance, they could not implement the logic gate XOR (i.e., 'one but not both'). An …
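The XOR limitation mentioned above is easy to demonstrate the other way around: while no single linear threshold unit can compute XOR, a network with one hidden layer can. The sketch below uses hand-set weights (an OR unit and an AND unit feeding an output unit) rather than learned ones, purely to show that a hidden layer suffices.

```python
def step(x):
    """Linear threshold unit: fires (1) if its net input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h_or = step(a + b - 0.5)              # hidden unit 1: fires if a OR b
    h_and = step(a + b - 1.5)             # hidden unit 2: fires if a AND b
    return step(h_or - 2 * h_and - 0.5)   # output: OR and not AND = XOR

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Finding such hidden-layer weights automatically, rather than by hand, is what learning algorithms like backpropagation made possible.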

History’s Top Brain Computation Insights: Day 19

19) Neural networks can self-organize via competition (Grossberg – 1978, Kohonen – 1981) Hubel and Wiesel's work on the development of cortical columns (see previous post) hinted at it, but it wasn't until Grossberg and Kohonen built computational architectures explicitly exploring competition that its importance was made clear. Grossberg was the first to illustrate the …
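The competitive principle at the heart of these architectures can be sketched as winner-take-all learning: the unit whose weight vector is closest to the input "wins" and moves toward that input, while the losers stay put. The unit count, inputs, and learning rate below are hypothetical, not drawn from Grossberg's or Kohonen's papers.

```python
# Minimal winner-take-all competitive learning step, with illustrative values.

def winner(weights, x):
    """Index of the unit whose weight vector is closest to the input."""
    return min(range(len(weights)),
               key=lambda i: sum((wi - xi) ** 2
                                 for wi, xi in zip(weights[i], x)))

def competitive_step(weights, x, lr=0.5):
    """Move only the winning unit's weights toward the input."""
    w = winner(weights, x)
    weights[w] = [wi + lr * (xi - wi) for wi, xi in zip(weights[w], x)]
    return weights

weights = [[0.0, 0.0], [1.0, 1.0]]          # two units, 2-D inputs
weights = competitive_step(weights, [0.9, 1.0])
print(weights)  # the nearer unit moved to [0.95, 1.0]; the other is unchanged
```

Repeated over many inputs, this simple competition is enough for the units to self-organize into detectors for different regions of the input space.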

A Popular but Problematic Learning Rule: “Backpropagation of Error”

Backpropagation of Error (or "backprop") is the most commonly used neural network training algorithm. Although fundamentally different from the less common Hebbian-like mechanism mentioned in my last post, it similarly specifies how the weights between the units in a network should be changed in response to various patterns of activity. Since backprop is so …
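The core of backprop is just the chain rule applied backward from the error. As a minimal, hypothetical sketch (a single weight feeding a sigmoid unit, squared error), the analytic gradient can even be checked against a numerical one:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    return sigmoid(w * x)

def backprop_grad(w, x, target):
    """dE/dw for E = 0.5*(y - target)^2, via the chain rule:
    dE/dy = (y - target); dy/dz = y*(1 - y); dz/dw = x."""
    y = forward(w, x)
    return (y - target) * y * (1 - y) * x

# Illustrative values, not from the post.
w, x, t = 0.5, 1.0, 1.0
analytic = backprop_grad(w, x, t)

# Finite-difference check of the same gradient.
eps = 1e-6
numeric = (0.5 * (forward(w + eps, x) - t) ** 2
           - 0.5 * (forward(w - eps, x) - t) ** 2) / (2 * eps)
print(abs(analytic - numeric) < 1e-8)  # True: the chain rule checks out
```

In a full network the same backward pass is chained through every layer, which is exactly what makes backprop powerful, and, as the post goes on to discuss, biologically questionable.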

Human Versus Non-Human Neuroscience

Most neuroscientists don't use human subjects, and many tend to forget this important point:  All neuroscience with non-human subjects is theoretical. If the brain of a mouse is understood in exquisite detail, it is only relevant (outside veterinary medicine) in so far as it is relevant to human brains. Similarly, if a computational model can …

Neural Network “Learning Rules”

Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process. In this post, I'll introduce some notions of how neural networks can learn. Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability. Let's begin with a …
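The simplest learning rule of the kind this post introduces is Hebbian learning: a weight grows when its pre- and post-synaptic units are active together ("cells that fire together wire together"). The sketch below uses illustrative values to show a single Hebbian weight update.

```python
# One Hebbian update: delta_w[i][j] = lr * pre[j] * post[i].
# The network size and values are illustrative.

def hebbian_step(w, pre, post, lr=0.1):
    """Strengthen each connection in proportion to pre * post co-activity."""
    return [[wij + lr * pre_j * post_i for wij, pre_j in zip(row, pre)]
            for row, post_i in zip(w, post)]

w = [[0.0, 0.0]]  # one post-synaptic unit with two input connections
w = hebbian_step(w, pre=[1.0, 0.0], post=[1.0])
print(w)  # [[0.1, 0.0]]: only the co-active connection strengthened
```

Even this one-liner of a rule captures the key property of unsupervised learning rules: the network changes itself based purely on its own activity, with no external teacher.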

Predicting Intentions: Implications for Free Will

News about a neuroimaging group's attempts to predict intentions hit the wire a few days ago. The major theme was how mindreading might be used for unethical purposes. What about its more profound implications? If your intentions can be predicted before you've even made a conscious decision, then your will must be determined by brain …

Computational models of cognition in neural systems: WHY?

In my most recent post I gave an overview of the "simple recurrent network" (SRN), but I'd like to take a step back and talk about neuromodeling in general.  In particular I'd like to talk about why neuromodeling is going to be instrumental in bringing about the cognitive revolution in neuroscience. A principal goal of …