History’s Top Brain Computation Insights: Day 11

[Image: Neuron showing sodium and potassium concentration changes]

11) Action potentials, the electrical events underlying brain communication, are governed by ion concentrations and voltage differences mediated by ion channels (Hodgkin & Huxley – 1952)

Hodgkin & Huxley developed the voltage clamp, which allows the ionic currents flowing across a neuron's membrane to be measured while the voltage is held constant. Using this technique, they demonstrated changes in ion permeability at different voltages. Their mathematical model of neuron function, based on the squid giant axon, postulated the existence of ion channels governing the action potential (the basic electrical signal of neurons). Their model has since been verified, and it is remarkably consistent across brain areas and species.

You can explore the Hodgkin & Huxley model by downloading Dave Touretzky's HHSim, a computational model implementing the Hodgkin & Huxley equations.
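If you'd rather poke at the equations directly than download a simulator, here is a minimal sketch of the Hodgkin & Huxley model in Python. The parameter values are the standard squid-axon fits; the forward-Euler integration, time step, and injected-current protocol are my own illustrative choices, not anything from HHSim:

```python
import numpy as np

# Minimal Hodgkin-Huxley sketch: membrane potential V plus gating variables m, h, n.
C_m  = 1.0                                # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3         # max conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4       # reversal potentials (mV)

# Voltage-dependent rate functions for the gating variables (rates in 1/ms)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                        # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32       # resting-state initial conditions
trace = []
for step in range(int(T / dt)):
    I_ext = 10.0 if 5.0 <= step * dt <= 45.0 else 0.0   # injected current (uA/cm^2)
    # Ionic currents through the sodium, potassium, and leak channels
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4 * (V - E_K)
    I_L  = g_L  * (V - E_L)
    # Forward-Euler updates of the membrane potential and gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace.append(V)

print(f"peak membrane potential: {max(trace):.1f} mV")   # action potentials overshoot 0 mV
```

With the sustained current step, this should produce a train of action potentials, with the voltage swings driven by the sodium and potassium conductances just as the Hodgkin & Huxley equations describe.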

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

Human Versus Non-Human Neuroscience

Most neuroscientists don't use human subjects, and many tend to forget this important point: 
All neuroscience with non-human subjects is theoretical.

If the brain of a mouse is understood in exquisite detail, it is only relevant (outside veterinary medicine) insofar as it tells us something about human brains.

Similarly, if a computational model can illustrate an algorithm for storing knowledge in distributed units, it is only as relevant as it is similar to how humans store knowledge.

It follows from this point that there is a certain amount of uncertainty involved in any non-human research. An experiment can be brilliantly executed, but does it apply to humans?

If we circumvent this uncertainty by looking directly at humans, another issue arises: only non-invasive techniques can be used with humans, and those techniques tend to involve the most uncertainty.

For instance, fMRI is a non-invasive technique that can be used to measure brain processes in humans. However, it measures blood oxygenation levels, which are only indirectly related to neural activity. Thus, unlike with animal models, measures of neuronal activity in humans are surrounded by an extra layer of uncertainty.

So, if you're a neuroscientist you have to "choose your poison": Either deal with the uncertainty of relevance to humans, or deal with the uncertainty of the processes underlying the measurable signals in humans.

Read more

How Hangovers Work

[Image: Anatomy of a Hangover]

I thought this article at Howstuffworks was appropriate just after the all-day drinking fest that St. Patrick's Day is for many.

According to the article, a hangover from a heavy night (and/or day) of drinking is mainly due to dehydration.

The dehydration process begins with a chemical reaction in the brain, specifically in the pituitary gland. This reaction causes less vasopressin to be released from the pituitary, which in turn causes the kidneys to send water directly to the bladder (rather than reabsorbing it).

So why do hangovers cause headaches? Apparently, by morning the dehydration is severe enough that the body's organs steal water from the brain.

According to the article, this causes "the brain to decrease in size and pull on the membranes that connect the brain to the skull, resulting in pain". This cannot be good for neuronal health!

Based on this information, it seems the best way to avoid a hangover is to drink plenty of water along with all that booze. Other things (e.g., electrolytes) are lost along with the H2O, however.

Some hangover-prevention pills on the market may help to avoid losing these essential chemicals, though I'm not convinced they really work.

Each type of drink has a different kind of hangover associated with it, according to this article. Red wine and dark liquors have the worst side effects, while vodka is the least likely to cause a hangover.

If these articles are right, a good way to drink without getting a hangover is to take shots of water between shots of vodka (no one could tell the difference!), and maybe add a little orange juice (making a screwdriver) to add some electrolytes back into the mix. Anyone care to test out this theory…?

-MC

Demystifying the Brain

Most neuroscience writing touts statements like 'the human brain is the most complex object in the universe'. This serves only to paint the brain as a mysterious, seemingly unknowable structure.

This is somehow comforting to some, but it's not for me. I want to understand this thing!

Here are some facts to demystify the brain (most points courtesy of Dave Touretzky and Christopher Cherniak):

  • The brain is very complex, but not infinitely so

  • A few spoonfuls of yogurt contain 10^11 Lactobacillus bacteria: ten times the number of cortical neurons in a human brain
  • There are roughly 10^13 synapses in cortex. Assume each stores one bit of information: that's 1.25 terabytes (see the quick arithmetic check after this list).
    • Also, the Library of Congress (80 million volumes, average 300 typed pages each) contains about 48 terabytes of data.

  • Volume of the human brain: about 1.4 liters.
  • Number of neurons in a human brain: 10^12. Number of neurons in a rat brain: 10^10.
  • Number of neurons in the human spinal cord: 10^9 [source]
  • Average loss of neocortical neurons = 1 per second; 85,000 per day; ~31 million (31×10^6) per year [source]
  • No neuron fires faster than 1 kHz
  • The total energy consumption of the brain is about 25 watts [source]
  • Average number of glial cells in brain = 10-50 times the number of neurons [source]
  • "For all our neurocomputational sophistication and processing power, we can barely attend to more than one object at a time, and we can hardly perform two tasks at once" [source]

What to take from all this? Simply that the brain is a real, physical object with real, physical limitations. As such, we really do have a chance to understand it.

Of course, if the brain is so limited, how can we expect our brains to be up to the task of understanding it?

We can, and we will. How can I be so sure? Because (as illustrated above) we have already built, and thus understand, computational devices that are rivaling and in some cases surpassing (e.g., in memory capacity) the computational powers of the brain.

This shows that we have what it takes to understand ourselves in the deepest way possible: by learning the neural mechanisms that make us who and what we are.

-MC 

Computational models of cognition in neural systems: WHY?

In my most recent post I gave an overview of the "simple recurrent network" (SRN), but I'd like to take a step back and talk about neuromodeling in general.  In particular I'd like to talk about why neuromodeling is going to be instrumental in bringing about the cognitive revolution in neuroscience.

A principal goal of cognitive neuroscience should be to explain how cognitive phenomena arise from the underlying neural systems.  How do the neurons and their connections result in interesting patterns of thought?  Or to take a step up, how might columns, or nuclei, interact to result in problem solving skills, thought or consciousness?

If a cognitive neuroscientist believes they know how a neural system gives rise to a behavior, they should be able to construct a model to demonstrate how this is the case.

That, in brief, is the answer.

But what makes a good model?  I'll partially answer this question below, but in future posts I'll bring up specific examples of models, some good, some poor.

First, any "model" is a simplification of the reality.  If the model is too simple, it won't be interesting.  If it's too realistic, it will be too complex to understand.  Thus, a good model is at that sweet spot where it's as simple as possible but no simpler.
Second, a model whose ingredients spell out the result you're looking for won't be interesting.  Instead, the results should emerge from the combination of the model's resources, constraints and experience.

Third, a model with too many "free" parameters is less likely to be interesting.  So an important requirement is that the "constraints" should be realistic, mimicking the constraints of the real system that is being modeled.

A common question I have gotten is:  "Isn't a model just a way to fit inputs to outputs?  Couldn't it just be replaced with a curve fitter or a regression?"  Well, perhaps the answer should be yes IF you consider a human being to just be a curve fitting device. A human obtains inputs and generates outputs.  So if you wish to say that a model is just a curve fitter, I will say that a human is, too.

What's interesting about neural systems, whether real or simulated, is the emergence of complex function from seemingly "simple" parts.

In future posts, I'll talk more about "constraints" by giving concrete examples.  In the meantime, feel free to bring up any questions you have about the computational modeling of cognition.
-PL 

[Image by Santiago Ramon y Cajal, 1914.] 

Can a Neural Network be Free…

…from a knee-jerk reaction to its immediate input?

[Image: Simple Recurrent Network]

Although one of the first things that a Neuroscience student learns about is "reflex reactions" such as the patellar reflex (also known as the knee-jerk reflex), the cognitive neuroscientist is interested in the kind of processing that might occur between inputs and outputs in mappings that are not so direct as the knee-jerk reaction. 

An example of a system which is a step up from the knee-jerk reflex is in the reflexes of the sea slug named "Aplysia".  Unlike the patellar reflex, Aplysia's gill and siphon retraction reflexes seem to "habituate" over time — the original input-output mappings are overridden by being repeatedly stimulated.  This is a simple form of memory, but no real "processing" can be said to go on there.

Specifically, cognitive neuroscientists are interested in mappings where "processing" seems to occur before the output decision is made.  As MC pointed out earlier, the opportunity for memory (past experience) to affect those mappings is probably important for "free will". 

But how can past experience affect future mappings in interesting ways? One answer appeared in 1990, which marked the beginning of a new era of experimentation with neural network models capable of indirect input-output mappings.  In that year, Elman (inspired by Jordan's 1986 work) demonstrated the Simple Recurrent Network in his paper "Finding Structure in Time".  The concept behind this network is shown in the picture associated with this entry.

The basic idea of the Simple Recurrent Network is that as information comes in (through the input units), an on-line memory of that information is preserved and recirculated (through the "context" units).  Together, the input and context units both influence the hidden layer which can trigger responses in the output layer.  This means that the immediate output of the network is dependent not only on the current input, but also on the inputs that came before it.

The most interesting aspect of the Simple Recurrent Network, however, is that the connections among all the individual units in the network change depending on what the modeler requires the network to output.   The network learns to preserve information in the context layer loops so that it can correctly produce the desired output. For example, if the task of the network is to remember the second word in a sentence, it will amplify or maintain the second word when it comes in, while ignoring the intervening words, so that at the end of the sentence it outputs the target word.
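To make the architecture concrete, here is a minimal sketch of an Elman-style network's forward pass in Python/NumPy. The layer sizes, random weights, and toy input sequence are my own illustrative choices, and the training step (adjusting the connections so the network produces the desired output, as described above) is omitted for brevity:

```python
import numpy as np

# Minimal Simple Recurrent Network (Elman-style) forward pass.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 4

W_in  = rng.normal(0, 0.5, (n_hid, n_in))    # input units   -> hidden layer
W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))   # context units -> hidden layer
W_out = rng.normal(0, 0.5, (n_out, n_hid))   # hidden layer  -> output units

def step(x, context):
    """One time step: input and context units jointly drive the hidden layer."""
    hidden = np.tanh(W_in @ x + W_ctx @ context)
    output = W_out @ hidden
    # The context layer is simply a copy of the hidden layer, held for the next step,
    # so the next output depends on the current input AND on what came before it.
    return output, hidden

# Run a short sequence of one-hot input vectors through the network.
sequence = [np.eye(n_in)[i] for i in (0, 2, 1, 3)]
context = np.zeros(n_hid)
for x in sequence:
    y, context = step(x, context)
    print(np.round(y, 2))
```

In a full Elman (1990) simulation, the weights would be trained (e.g., with backpropagation) so that the context loop learns to hold on to exactly the information the task requires.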

Although this network cannot be said to have "free" will — especially because of the way its connections are forcefully trained — its operation can hint at the type of phenomena researchers should seek in trying to understand cognition in neural systems.

-PL