Archive for February, 2007

Eliminating Common Misconceptions About fMRI

Friday, February 23rd, 2007

Most researchers in neuroscience use animal models.

Though most neuroscientists are interested in understanding the human brain, they can use more invasive techniques with animal brains. In exchange for this invasive access, they must assume that other animals are similar enough to humans that they can actually learn something about humans in the process.

Functional magnetic resonance imaging (fMRI) is a non-invasive technique for measuring changes in local blood flow (which are significantly correlated with changes in neural activity) in the brain. fMRI measures what is called the blood oxygen level-dependent (BOLD) signal. Because it is non-invasive it can be used with human subjects.
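
To make the measurement concrete: standard fMRI analyses model the BOLD response to a brief burst of neural activity as a slow, delayed hemodynamic response. Below is a minimal sketch of that idea (my own illustration, using the widely used double-gamma hemodynamic response function with textbook default parameters, not values from any particular study):

    import numpy as np
    from scipy.stats import gamma

    # Sketch (not from this post): the canonical "double-gamma" HRF used by
    # common fMRI packages, with a peak near 5 s and an undershoot near 15 s.
    dt = 0.5                                      # seconds per sample
    t = np.arange(0, 30, dt)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
    hrf /= hrf.max()

    # The predicted BOLD signal is the neural event train convolved with the HRF.
    events = np.zeros(120)                        # 60 s of "neural activity"
    events[[10, 60, 100]] = 1                     # three brief neural events
    bold = np.convolve(events, hrf)[:len(events)]
    print(f"BOLD peaks ~{dt * np.argmax(hrf):.1f} s after each event")

The sluggishness of this response is one reason fMRI has excellent spatial resolution but limited temporal resolution.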

Researchers like ourselves recognize the value of animal research, especially when the behavior being investigated is similar between the studied species and humans.

However, there is at least one fundamental cognitive difference between humans and all other animals, and likely many more given the dominant position of our species. For researchers like ourselves, then, it is much more interesting to learn something about the human brain (the item of interest) than about, say, the rat brain.

Why do some neuroscientists think that using fMRI to study the neural basis of cognition in humans is of little value?

Many have heard that there are issues with fMRI as a technique. There are (as with any technique), but not as many as most believe.

Here are some common misconceptions about fMRI:
(more…)

Demystifying the Brain

Thursday, February 15th, 2007

Most neuroscience writing touts statements like 'the human brain is the most complex object in the universe'. This serves only to paint the brain as a mysterious, seemingly unknowable structure.

This is somehow comforting to some, but not to me. I want to understand this thing!

Here are some facts to demystify the brain (most points courtesy of Dave Touretzky and Christopher Cherniak):

  • The brain is very complex, but not infinitely so

  • A few spoonfuls of yogurt contain 10^11 lactobacillus bacteria: ten times the number of cortical neurons in a human brain
  • There are roughly 10^13 synapses in cortex. Assume each stores one bit of information: that’s 1.25 terabytes (see the quick check below this list).
    • For comparison, the Library of Congress (80 million volumes, averaging 300 typed pages each) contains about 48 terabytes of data.

  • Volume of the human brain: about 1.4 liters.
  • Number of neurons in a human brain: 10^12. Number of neurons in a rat brain: 10^10.
  • Number of neurons in human spinal cord: 10^9 [source]
  • Average loss of neocortical neurons = 1 per second; 85,000 per day; ~31 million (31×10^6) per year [source]
  • No neuron fires faster than 1 kHz
  • The total energy consumption of the brain is about 25 watts [source]
  • Average number of glial cells in brain = 10-50 times the number of neurons [source]
  • "For all our neurocomputational sophistication and processing power, we can barely attend to more than one object at a time, and we can hardly perform two tasks at once" [source]

What to take from all this? Simply that the brain is a real, physical object with real, physical limitations. As such, we really do have a chance to understand it.

Of course, if the brain is so limited, how can we expect our brains to be up to the task of understanding it?

We can, and we will. How can I be so sure? Because (as illustrated above) we have already built, and thus understand, computational devices that rival and in some cases surpass (e.g., in memory capacity) the computational powers of the brain.

This shows that we have what it takes to understand ourselves in the deepest way possible: by learning the neural mechanisms that make us who and what we are.

-MC 

Computational models of cognition in neural systems: WHY?

Monday, February 12th, 2007

[Image: cajal1.png]
In my most recent post I gave an overview of the "simple recurrent network" (SRN), but I'd like to take a step back and talk about neuromodeling in general.  In particular I'd like to talk about why neuromodeling is going to be instrumental in bringing about the cognitive revolution in neuroscience.
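
For readers who missed that post, here is a minimal sketch of the SRN idea (my own illustration, with arbitrary layer sizes and untrained random weights): the hidden layer's activity at each time step is fed back in as "context" at the next step, which is what makes the network sensitive to sequences.

    import numpy as np

    # Illustrative Elman-style SRN; sizes and weights are arbitrary, untrained.
    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 4, 8, 3

    W_in = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden
    W_ctx = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden
    W_out = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

    def step(x, context):
        """One time step: combine the current input with the previous hidden state."""
        hidden = np.tanh(W_in @ x + W_ctx @ context)
        return W_out @ hidden, hidden   # hidden becomes the next step's context

    context = np.zeros(n_hid)
    for t, x in enumerate(rng.normal(size=(5, n_in))):  # a 5-step input sequence
        y, context = step(x, context)
        print(f"t={t}: output {np.round(y, 3)}")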

A principal goal of cognitive neuroscience should be to explain how cognitive phenomena arise from the underlying neural systems.  How do the neurons and their connections result in interesting patterns of thought?  Or, to take a step up, how might columns, or nuclei, interact to produce problem-solving skills, thought, or consciousness?

If a cognitive neuroscientist believes they know how a neural system gives rise to a behavior, they should be able to construct a model to demonstrate how this is the case.

That, in brief, is the answer.

But what makes a good model?  I'll partially answer this question below, but in future posts I'll bring up specific examples of models, some good, some poor.

First, any "model" is a simplification of the reality.  If the model is too simple, it won't be interesting.  If it's too realistic, it will be too complex to understand.  Thus, a good model is at that sweet spot where it's as simple as possible but no simpler.
Second, a model whose ingredients spell out the result you're looking for won't be interesting.  Instead, the results should emerge from the combination of the model's resources, constraints and experience.

Third, a model with too many "free" parameters is less likely to be interesting.  So an important requirement is that the "constraints" should be realistic, mimicking the constraints of the real system that is being modeled.

A common question I have gotten is:  "Isn't a model just a way to fit inputs to outputs?  Couldn't it just be replaced with a curve fitter or a regression?"  Well, perhaps the answer should be yes IF you consider a human being to be just a curve-fitting device.  A human obtains inputs and generates outputs.  So if you wish to say that a model is just a curve fitter, I will say that a human is, too.
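
To make the contrast concrete, here is what "just a curve fitter" looks like: a plain least-squares regression on made-up data (my illustration, not from any study discussed here). It recovers the input-output mapping perfectly well, yet its fitted weights claim nothing about mechanism.

    import numpy as np

    # Toy data; the dimensions and "true" weights are arbitrary.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 4))             # 100 observations, 4 input features
    true_w = np.array([1.0, -2.0, 0.5, 3.0])  # the mapping hidden in the data
    y = X @ true_w + rng.normal(scale=0.1, size=100)

    # Least squares recovers the input-output mapping...
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.round(w, 2))                     # ~[1.0, -2.0, 0.5, 3.0]
    # ...but nothing here explains HOW a system could produce that mapping.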

What's interesting about neural systems, whether real or simulated, is the emergence of complex function from seemingly "simple" parts.

In future posts, I'll talk more about "constraints" by giving concrete examples.  In the meantime, feel free to bring up any questions you have about the computational modeling of cognition.
-PL 

[Image by Santiago Ramon y Cajal, 1914.] 

The neural basis of preparation for willful action

Wednesday, February 7th, 2007

My latest scientific publication is entitled Selection and maintenance of stimulus–response rules during preparation and performance of a spatial choice-reaction task (authors: Schumacher, Cole, and D'Esposito). It is a study using functional MRI with humans to investigate how we prepare for and execute willful action.

In this post I'll attempt to translate the article's findings for both the layperson and the uninitiated scientist.

What is willful action?

Willful action is a set of processes in direct contrast to automatic, habitual processes. We can say with certainty that you are using willful action minimally when you are resting, brushing your teeth, driving to work for the thousandth time, or performing any task that is effortless.

Willful action is necessary in two types of situations.

First, when you are attempting to do something for the first time (i.e., when you've had little practice at it), these processes are necessary for accurate performance. Think of the immense amount of effort involved in learning to drive. At first willful action was necessary, but later this need subsided.

Second, when two potential actions conflict in the brain willful action is necessary to overcome the incorrect action. The conflicting action may originate in an inappropriate desire, a habitual action that is no longer appropriate, or a natural tendency to perform one action rather than another.

In this latest publication we have used this last case (conflict due to a natural tendency to respond in a certain way) to investigate willful action.
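
As a concrete illustration of this kind of conflict, here is a toy sketch of a spatial choice-reaction trial (a hypothetical simplification of my own, not the actual design from the paper): when the required response is on the opposite side from the stimulus, the natural same-side tendency must be overridden.

    import random

    # Hypothetical simplification; not the actual task from the paper.
    def make_trial(mapping):
        """Generate one choice-reaction trial under a given stimulus-response mapping."""
        stimulus = random.choice(["left", "right"])
        if mapping == "compatible":
            correct = stimulus    # respond on the same side: easy, habitual
        else:                     # incompatible: override the same-side tendency
            correct = "right" if stimulus == "left" else "left"
        return {"stimulus": stimulus, "correct_response": correct, "mapping": mapping}

    random.seed(0)
    for trial in [make_trial(m) for m in ("compatible", "incompatible") * 2]:
        print(trial)
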
(more…)