Human Versus Non-Human Neuroscience

Most neuroscientists don't use human subjects, and many tend to forget this important point: 
All neuroscience with non-human subjects is theoretical.

If the brain of a mouse is understood in exquisite detail, that understanding is only relevant (outside veterinary medicine) insofar as it applies to human brains.

Similarly, if a computational model can illustrate an algorithm for storing knowledge in distributed units, it is only as relevant as it is similar to how humans store knowledge.

It follows from this point that there is a certain amount of uncertainty involved in any non-human research. An experiment can be brilliantly executed, but does it apply to humans?

If we circumvent this uncertainty by looking directly at humans, another issue arises: only non-invasive techniques can be used with humans, and those techniques tend to involve the most uncertainty.

For instance, fMRI is a non-invasive technique that can be used to measure brain processes in humans. However, it measures blood oxygenation levels, which are only indirectly related to neural activity. Thus, unlike with animal models, measures of neuronal activity in humans are surrounded by an extra layer of uncertainty.

So, if you're a neuroscientist you have to "choose your poison": Either deal with the uncertainty of relevance to humans, or deal with the uncertainty of the processes underlying the measurable signals in humans.

Read more

How Hangovers Work

I thought this article at Howstuffworks was appropriate just after the all-day drinking fest that St. Patrick's Day is for many.

According to the article, a hangover from a heavy night (and/or day) of drinking is mainly due to dehydration.

The dehydration process begins with a chemical reaction in the brain, specifically in the pituitary gland. This reaction causes less vasopressin to be released from the pituitary gland, which in turn causes the kidneys to send water directly into the bladder (rather than reabsorbing it).

So why do hangovers cause headaches? Apparently, by morning the dehydration is severe enough that the body's organs steal water from the brain.

According to the article, this causes "the brain to decrease in size and pull on the membranes that connect the brain to the skull, resulting in pain". This cannot be good for neuronal health!

Based on this information, it seems that the best way to avoid a hangover is to drink plenty of water along with the booze. Other things (e.g. electrolytes) are lost along with the water, however.

Some hangover-prevention pills on the market may help replace these essential chemicals, though I'm not convinced that they really work.

Each type of drink has a different kind of hangover associated with it, according to this article. Red wine and dark liquors have the worst side effects, while vodka is the least likely to cause a hangover.

If these articles are right, a good way to drink without getting a hangover is to take shots of water between shots of vodka (no one could tell the difference!), and maybe add a little orange juice (making a screwdriver) to add some electrolytes back into the mix. Anyone care to test out this theory…?

-MC

Neural Network “Learning Rules”


Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process. In this post, I'll introduce some notions of how neural networks can learn. Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability.

Let's begin with a theoretical question that is of general interest to cognition: how can a neural system learn sequences, such as the actions required to reach a goal? 

Consider a neuromodeler who hypothesizes that a particular kind of neural network can learn sequences. He might start his modeling study by "training" the network on a sequence. To do this, he stimulates (activates) some of its neurons in a particular order, representing objects on the way to the goal. 

After the network has been trained through multiple exposures to the sequence, the modeler can then test his hypothesis by stimulating only the neurons from the beginning of the sequence and observing whether the neurons in the rest of the sequence activate in order, completing the sequence.
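This train-then-cue protocol can be sketched in a few lines of code. This is a minimal toy sketch of my own (the network size, learning rate, and threshold are arbitrary illustration values, not from any published model): binary neurons learn a three-item sequence with a simple asymmetric Hebbian update, then recall it from its first item alone.

```python
import numpy as np

n_neurons = 3                          # one neuron per item in the sequence
W = np.zeros((n_neurons, n_neurons))   # W[i, j]: strength of connection j -> i
sequence = [0, 1, 2]                   # neuron indices, in the order trained
lr = 0.5                               # learning rate (arbitrary choice)

# "Training": stimulate the neurons in order; strengthen the connection from
# each active neuron to the next one in the sequence.
for _ in range(10):                    # multiple exposures to the sequence
    for prev, nxt in zip(sequence, sequence[1:]):
        W[nxt, prev] += lr             # pre fires before post -> strengthen

# "Testing": activate only the first neuron and let activity propagate.
activity = np.zeros(n_neurons)
activity[sequence[0]] = 1.0
recalled = [sequence[0]]
for _ in range(len(sequence) - 1):
    activity = (W @ activity > 0.5).astype(float)  # threshold activation
    recalled.append(int(np.argmax(activity)))

print(recalled)  # the network replays the trained order: [0, 1, 2]
```

If the cue fails to reproduce the trained order, the modeler's hypothesis about that network (or that rule) is in trouble, which is exactly the logic of the test described above.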

Successful learning in any neural network depends on how the connections between the neurons are allowed to change in response to activity. Most researchers call the manner of change a "learning rule". However, we will call it a "synaptic modification rule": although the network learned the sequence, it is not clear that the *connections* between the neurons in the network "learned" anything in particular.

The particular synaptic modification rule selected is an important ingredient in neuromodeling because it may constrain the kinds of information the neural network can learn.

There are many categories of mathematical synaptic modification rules used to describe how synaptic strengths should be changed in a neural network. Some of these categories include: backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian.

  • Backpropagation of error states that connection strengths should change throughout the entire network in order to minimize the difference between the actual activity and the "desired" activity at the "output" layer of the network.
  • Correlative Hebbian states that any two interconnected neurons that are active at the same time should strengthen their connections, so that if one of the neurons is activated again in the future the other is more likely to become activated too.
  • Temporally-asymmetric Hebbian is described in more detail in the example below, but essentially emphasizes the importance of causality: if a neuron reliably fires before another, its connection to the other neuron should be strengthened. Otherwise, it should be weakened.
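The correlative Hebbian rule is simple enough to show directly. Here is a minimal sketch of my own (the activity patterns and learning rate are invented for illustration): the connection between two neurons grows in proportion to how often they are active at the same time.

```python
# Correlative Hebbian update on a single connection.
# Binary activities of two interconnected neurons over six time steps:
pre  = [1, 0, 1, 1, 0, 1]   # presynaptic neuron's activity
post = [1, 0, 1, 0, 0, 1]   # postsynaptic neuron's activity

w = 0.0          # starting connection strength
lr = 0.1         # learning rate (arbitrary choice)
for a_pre, a_post in zip(pre, post):
    w += lr * a_pre * a_post     # Hebb: "fire together, wire together"

print(round(w, 2))  # strengthened by the 3 co-active time steps: 0.3
```

Notice that the rule is symmetric in time: it does not matter which neuron fired first, only that they fired together. That is precisely what the temporally-asymmetric variant changes.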

Why are there so many different rules?  Some synaptic modification rules are selected because they are mathematically convenient.  Others are selected because they are close to currently known biological reality.  Most of the informative neuromodeling is somewhere in between.

An Example

Let's look at an example of a learning rule used in a neural network model that I have worked with: imagine you have a network of interconnected neurons that can be either active or inactive. If a neuron is active, its value is 1; otherwise its value is 0. (Using 1 and 0 to represent simulated neuronal activity is only one of many approaches; it goes by the name "McCulloch-Pitts".)
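A McCulloch-Pitts unit can be written in a couple of lines. This is a generic sketch, not the specific model from my work: the neuron fires (outputs 1) if the weighted sum of its inputs reaches a threshold, and stays silent (outputs 0) otherwise. The weights and threshold below are arbitrary illustration values.

```python
import numpy as np

def mcculloch_pitts(inputs, weights, threshold=1.0):
    """All-or-none activation: 1 if the weighted input sum reaches
    the threshold, else 0."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

# With these weights, both input neurons must be active for the unit
# to fire (it behaves like an AND gate):
weights = np.array([0.6, 0.6])
print(mcculloch_pitts(np.array([1, 1]), weights))  # -> 1
print(mcculloch_pitts(np.array([1, 0]), weights))  # -> 0
```

The all-or-none output is what makes the activity values 1 and 0, as in the network described above; graded-activation units would return a value in between instead.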

Read more

Predicting Intentions: Implications for Free Will

News about a neuroimaging group's attempts to predict intentions hit the wire a few days ago. The major theme was how mindreading might be used for unethical purposes.

What about its more profound implications?

If your intentions can be predicted before you've even made a conscious decision, then your will must be determined by brain processes beyond your control. There cannot be complete freedom of will if I can predict your decisions before you do!

Dr. Haynes, the researcher behind this work, spoke at Carnegie Mellon University last October. He explained that he could use functional MRI to determine what participants were going to decide several seconds before that decision was consciously made. This was a free choice task, in which the instruction was to press a button whenever the participant wanted. In a separate experiment, the group could predict whether a participant was going to add or subtract two numbers.

In a way, this is not very surprising. In order to make a conscious decision we must be motivated by either external or internal factors. Otherwise our decisions would just be random, or boringly consistent. Decisions in a free choice task are likely driven by a motivation to move (a basic instinct likely originating in the globus pallidus) and to keep responses spaced within a certain time window.

Would we have a coherent will if it couldn't be predicted by brain activity? It seems unlikely, since the conscious will must use information from some source in order to make a reasoned decision. Otherwise we would be incoherent, random beings with no reasoning behind our actions.

In other words, we must be fated to some extent in order to make informed and motivated decisions.

-MC