Archive for the ‘Neurophysiology’ Category

The immune system of the mind is in frontoparietal cortex

Friday, July 25th, 2014

The frontoparietal control system is to the mind what the immune system is to the body. This may oversimplify the situation, but we're finding it a useful metaphor nonetheless. Indeed, we've just published a new theory paper explaining that there is already an avalanche of evidence supporting this metaphor. Although much work remains to fully establish the theory, we're very excited about it, as it appears to explain much that is mysterious about mental illness. Most exciting is that the theory may unify understanding and treatment of all mental illnesses simultaneously.

Of course mental illnesses are very complex and problems in most parts of the brain can contribute. However, recent findings suggest a particular brain network may be especially important for predicting and curing mental disease: the frontoparietal control network (see yellow areas in figure). We and others have found that this network is not only highly connected to many other parts of the brain (i.e. its regions are hubs), but it also shifts that connectivity dynamically to specify the current task at hand. This means that any particular goal you are focusing on – such as solving a puzzle or finding food or cheering yourself up when you feel sad – will involve dynamic network interactions with this network (to maintain and implement that goal).

Applying this basic understanding of goal-directed thoughts and actions to mental illness, we realized deficits in this brain network may be critical for explaining many of the cognitive symptoms – such as the inability to concentrate on the task at hand – experienced across a wide variety of mental diseases. Further, we realized that many of the emotional symptoms of mental disease are indirectly affected by this network, since emotional regulation (e.g., reducing phobia-related anxiety) involves brain interactions with this network. This suggests this network may regulate symptoms and promote mental health generally, much like the body’s immune system regulates pathogens to promote physical health.

Another way to think of this is in terms of an interaction between a regulator, like a thermostat, and a thing to be regulated, like the temperature in a room. Similar to the regulation of temperature, the frontoparietal system sets a goal state for distributed brain activity patterns (like setting the target temperature on a thermostat) and then searches for activity patterns that will shift dysfunctional brain activity toward that goal.
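To make the regulator analogy concrete, here is a minimal sketch of the kind of negative-feedback loop we have in mind (in Python; the goal, state, and gain are made-up stand-ins for a goal representation, monitored brain activity, and top-down control, not parts of our actual theory):

```python
# A minimal negative-feedback regulator, analogous to a thermostat.
# "goal" stands in for a frontoparietal goal representation; "state"
# stands in for the distributed brain activity being regulated.
# All names and numbers are illustrative, not a brain model.

def regulate(state, goal, gain=0.3, steps=10):
    """Nudge `state` toward `goal` with simple proportional feedback."""
    for step in range(steps):
        error = goal - state   # monitor: how far is activity from the goal?
        state += gain * error  # control signal: a corrective nudge
        print(f"step {step:2d}: state = {state:.3f}")
    return state

# Example: dysfunctional activity (0.0) regulated toward a goal level (1.0)
regulate(state=0.0, goal=1.0)
```

The only point of the sketch is that a regulator needs three ingredients: a maintained goal, access to the regulated variable, and a way to influence it.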

As covered in our theory paper, it is well established that the frontoparietal system has all the key properties of a regulator: it maintains goal information, it has access to many other parts of the brain, and it affects distal parts of the brain according to the maintained goal. Further, there is evidence that things like emotional regulation during cognitive behavioral therapy increase activity in the frontoparietal system, suggesting this brain system is working harder when cognitive strategies are used to facilitate mental health.

Perhaps the most exciting prediction of this theory is that enhancing the frontoparietal system may reduce all symptoms of all mental diseases using a single treatment. This is because the frontoparietal system is domain general, meaning it directs goal-directed processes across all aspects of the mind and therefore all possible mental symptoms. In practice there will certainly be exceptions to this, yet simultaneous progress on reducing even just 50% of symptoms would be a tremendous advance.

How might we enhance the frontoparietal system? Perhaps using drugs that differentially influence this system (e.g., dopamine agonists) or direct stimulation of the system (e.g., using transcranial magnetic or current stimulation). Since the frontoparietal system can be reconfigured using verbal instructions, however, building on carefully structured talk therapies may be an especially specific and effective way. In particular, the frontoparietal system is known to implement rapid instructed task learning (RITL) – a way for the brain to implement novel behaviors based on instructions. Ultimately, this theory suggests the proper combination of frontoparietal system enhancement through direct influence (drugs and/or stimulation), talk therapy, and symptom-specific interventions will allow us to make major progress toward curing a wide variety of mental diseases.

MWCole

Cingulate Cortex and the Evolution of Human Uniqueness

Thursday, November 12th, 2009

Figuring out how the brain decides between two options is difficult. This is especially true for the human brain, whose activity is typically accessible only via the small and occasionally distorted window provided by new imaging technologies (such as functional MRI (fMRI)).

In contrast, monkey brain activity can typically be observed more accurately, since the skull can be opened and brain activity recorded directly.

Despite this, if you were to look just at the human research, you would consider it a fact that the anterior cingulate cortex (ACC) increases its activity during response conflict. The thought is that this brain region detects that you are having trouble making decisions, and signals other brain regions to pay more attention.

If you were to only look at research with monkeys, however, you would think otherwise. No research with macaque monkeys (the ‘non-human primate’ typically used in neuroscience research) has found conflict activity in ACC.

My most recent publication looks at two possible explanations for this discrepancy: 1) Differences in methods used to study these two species, and 2) Fundamental evolutionary differences between the species.


Levels of Analysis and Emergence: The Neural Basis of Memory

Friday, May 30th, 2008

A square 'emerges' from its surroundings (at least in our visual system)

Cognitive neuroscience constantly works to find the appropriate level of description (or, in the case of computational modeling, implementation) for the topic being studied.  The goal of this post is to elaborate on this point a bit and then illustrate it with an interesting recent example from neurophysiology.

As neuroscientists, we can often  choose to talk about the brain at any of a number of levels: atoms/molecules, ion channels and other proteins, cell compartments, neurons, networks, columns, modules, systems, dynamic equations, and algorithms.

However, a description at too low a level might be too detailed, causing one to lose the forest for the trees.  Alternatively, a description at too high a level might miss valuable information and is less likely to generalize to different situations.

For example, one might theorize that cars work by propelling gases from their exhaust pipes.  Although this might be consistent with all of the observed data, by looking “under the hood” one would find evidence that this model of a car’s function is incorrect.


Grand Challenges of Neuroscience: Day 4

Saturday, July 7th, 2007

After a bit of a hiatus, I'm back with the last three installments of "Grand Challenges in Neuroscience".

Topic 4: Time

Cognitive Science programs typically require students to take courses in Linguistics (as well as in the philosophy of language).  Besides the obvious application of studying how the mind creates and uses language, an important reason for taking these courses is to realize the effects of using words to describe the mental, cognitive states of the mind.

In fact, after having taken courses on language and thought, it seems that it would be an interesting coincidence if the words in any particular language did map directly onto mental states or brain areas.  (As an example, consider that the amygdala is popularly referred to as the "fear center".)

It seems more likely that mental states are translated on the fly into language, which only approximates their true nature.  In this respect, I think it's important to realize that time may be composed of several distinct subcomponents, or time may play very different roles in distinct cognitive processes.

Time. As much as it is important to have an objective measure of time, it is equally important to have an understanding of our subjective experience of time.  A number of experimental results have confirmed what has been known to humanity for some time: time flies while you're having fun, but a watched pot never boils.

Time perception strongly relates to cognition, attention, and reward.  The NSF committee proposed that understanding time is going to be integrative, involving brain regions whose function is still not understood at a "systems" level, such as the cerebellum, basal ganglia, and association cortex.

Experiments?

The NSF committee calls for the development of new paradigms for the study of time.  I agree that this is critical.  To me, one of the most important issues is the dissociation of reward from time (e.g., "time flies when you're having fun"): most tasks involving time perception in both human and non-human primates have involved rewarding the participants.

In order to get a clearer read on the neurobiology of time perception and action, we need to observe neural representations that are not colored by the anticipation of reward.

-PL 

Brain image from http://www.cs.princeton.edu/gfx/proj/sugcon/models/
Clock image from http://elginwatches.org/technical/watch_diagram.html

Grand Challenges of Neuroscience: Day 3

Sunday, May 13th, 2007

Topic 3: Spatial Knowledge

Animal studies have shown that the hippocampus contains special cells called "place cells".  These place cells are interesting because their activity seems to indicate not what the animal sees, but rather where the animal is in space as it runs around in a box or in a maze. (See the four cells in the image to the right.)

Further, when the animal goes to sleep, those cells tend to reactivate in the same order they did during wakefulness.  This apparent retracing of the paths during sleep has been termed "hippocampal replay".

More recently, studies in humans — who have deep microelectrodes implanted to help detect the origin of epileptic seizures — have shown place-responsive cells.  Place cells in these studies were found not only in the human hippocampus but also in nearby brain regions.

The computation which converts sequences of visual and other cues into a sense of "place" is a very interesting one that has not yet been fully explained.  However, there do exist neural network models of the hippocampus that, when presented with sequences, exhibit place-cell like activity in some neurons.
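To give a flavor of what "place-cell-like activity" means computationally, here is a toy sketch (in Python; not one of the published hippocampal models, and every number in it is made up) in which each cell fires as a Gaussian function of position along a track, and position can be crudely decoded from whichever cell is most active:

```python
import numpy as np

# Toy place cells: each cell's firing rate is a Gaussian bump centered
# on its "preferred" place along a 1-D track. This is an illustrative
# caricature, not one of the published hippocampal network models.

positions = np.linspace(0.0, 1.0, 100)   # locations along the track
centers = np.linspace(0.1, 0.9, 5)       # each cell's preferred place
width = 0.05                             # place field width

# rates[i, j] = firing rate of cell i with the animal at position j
rates = np.exp(-(positions[None, :] - centers[:, None]) ** 2
               / (2 * width ** 2))

# Crude decoding: the most active cell's center estimates the position
decoded = centers[np.argmax(rates, axis=0)]
idx = np.argmin(np.abs(positions - 0.25))
print("true position 0.25 -> decoded", decoded[idx])  # nearest field wins
```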

The notion of a place cell might also extend beyond physical space.  It has been speculated that similar computations occur to convert sequences of events and situations into a distinct sense of "now".  And indeed, damage to the hippocampus has been found not only to impair spatial memory but also "episodic" memory, the psychological term for memory for distinct events.

Experiments? 

How can we understand the ways in which we understand space? Understanding spatial knowledge seems more tangible than understanding the previous two topics in this series. It seems that researchers are already using some of the most effective methods to tackle the problem.

First, the use of microelectrodes throughout the brain while human participants play virtual taxi games and perform problem solving tasks promises insight into this question.  Second, computational modeling of regions (e.g., the hippocampus) containing place cells should help us understand their properties and how they emerge.  Finally, continued animal research and possibly manipulation of place cells in animals to influence decision making (e.g., in a T-maze task) may provide an understanding of how spatial knowledge is used on-line. 

-PL 

Grand Challenges of Neuroscience: Day 1

Monday, April 30th, 2007

Following up on MC's posts about the significant insights in the history of neuroscience, I'll now take Neurevolution for a short jaunt into neuroscience's potential future.

In light of recent advances in technologies and methodologies applicable to neuroscience research, the National Science Foundation last summer released a document on the "Grand Challenges of Neuroscience".  These grand challenges were identified by a committee of leading members of the cognitive neuroscience community.

The document, available at http://www.nsf.gov/sbe/grand_chall.pdf, describes six domains of research the committee deemed to be important for progress in understanding the relationship between mind and brain.

Over the next few posts, I will discuss each of the research domains and explain in layperson's terms why these questions are interesting and worth pursuing.  I'll also describe potential experimental approaches to address these questions in a cognitive neuroscience framework.

Topic 1:  "Adaptive Plasticity"

One research topic brought up by the committee was that of adaptive plasticity.  In this context, plasticity refers to the idea that the connections in the brain, and the behavior governed by the brain, can be changed through experience and learning.  

Learning allows us to adapt to new circumstances and environments.  Arguably, understanding how we learn and how to improve learning could be one of the greatest contributions of neuroscience.

Although it is widely believed that memory is based on the synaptic changes that occur during long-term potentiation and long-term depression (see our earlier post), this has not been conclusively shown!

What has been shown is that drugs that prevent synaptic changes also prevent learning.  However, that finding only demonstrates a correlation between synaptic change and memory formation,  not causation. (For example, it is possible that those drugs are interfering with some other process that truly underlies memory.)

The overarching question the committee raises is: What are the rules and principles of neural plasticity that implement [the] diverse forms of memory?

This question aims to quantify the exact relationships between changes at the neuronal level and at the level of behavior.  For instance, do rapid changes at the synapse reflect rapid learning?  And, how do the physical limitations on the changes at the neuronal level relate to cognitive limitations at the behavioral level?

Experiments?
My personal opinion is that the answers to these questions will be obtained through new experiments that either implant new memories or alter existing ones (e.g., through electrical stimulation protocols). 

There is every indication that experimenters will soon be able to select and stimulate particular cells in an awake, behaving animal to alter the strength of the connection between those cells.  The experimenters can then test the behavior of the animals to see if their memory for the association that might be represented by that connection has been altered.

-PL 

History’s Top Brain Computation Insights: Day 25

Thursday, April 26th, 2007

Dopamine signal related to reward and reward prediction (Schultz, 1999)

25) The dopamine system implements a reward prediction error algorithm (Schultz – 1996, Sutton – 1988)

It used to be that the main thing anyone "knew" about the dopamine system was that it is important for motor control.  Parkinson's disease, which visibly manifests itself as motor tremors, is caused by disruption of the dopamine system (specifically, the substantia nigra), so this was an understandable conclusion.

When Wolfram Schultz began recording from dopamine neurons in mice and monkeys, he was having trouble finding correlations with his motor task. Was he doing something wrong? Was he recording from the right cells?

Instead of toeing the line of dopamine = motor control, he set out to find out what this system really does. It turns out that it is related to reward.

Schultz observed dopamine cell bursting at the onset of unexpected reward. He also observed that this bursting shifts to a cue (e.g., a bell sound) indicating a reward is forthcoming. When the reward cue occurs but no reward follows he saw that the dopamine cells go silent (below resting firing rate).

This pattern is quite interesting computationally. The dopamine signal mimics the error signal in a form of reinforcement learning called temporal difference learning.

This form of learning was originally developed by Sutton. It is a powerful algorithm for learning to predict reward and learn from errors in attaining reward.

Temporal difference learning basically propagates reward prediction back in time as far as possible, thus facilitating the process of attaining reward in the future.
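For the computationally inclined, here is a minimal sketch of tabular temporal difference learning (in Python; the trial structure and parameters are made up for illustration). The prediction error `delta` is the quantity the dopamine signal appears to mimic, and over repeated trials the prediction propagates backward from the reward toward the earliest predictive timestep, just as the dopamine burst moves from the reward to the cue:

```python
import numpy as np

# Minimal temporal difference (TD) learning on a toy trial of 5 timesteps
# with reward at the end. V[t] is the learned reward prediction at time t;
# delta is the prediction error that dopamine bursts appear to mimic.
# Trial structure and parameters are illustrative.

T = 5
V = np.zeros(T)         # reward predictions, initially zero
alpha = 0.1             # learning rate
reward = np.zeros(T)
reward[T - 1] = 1.0     # reward arrives at the final timestep

for trial in range(200):
    for t in range(T - 1):
        # TD error: received reward plus next prediction, minus current one
        delta = reward[t + 1] + V[t + 1] - V[t]
        V[t] += alpha * delta

print(V.round(2))  # predictions have propagated backward from the reward
```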

Figure: (Top) No conditioned stimulus cue is given, so the reward is unexpected and there is a big dopamine burst. (Middle) The animal learns to predict the reward based on the cue and the dopamine burst moves to the cue. (Bottom) The reward is predicted, but since no reward occurs there is a depression in dopamine release.
Source: Figure 2 of Schultz, 1999. (News in Physiological Sciences, Vol. 14, No. 6, 249-255, December 1999)

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, dopamine signals for reinforcement learning, and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 24

Wednesday, April 25th, 2007

Cognitive control network (Cole & Schneider, 2007)

24) Cognitive control processes are distributed within a network of distinct regions (Goldman-Rakic – 1988, Posner – 1990, Wager & Smith – 2004, Cole & Schneider – 2007)

Researchers investigating eye movements and attention recorded from different parts of the primate brain and found several regions showing very similar neural activity. Goldman-Rakic proposed the existence of a specialized network for the control of attention.

This cortical system consists of the lateral frontal cortex (fronto-polar, dorsolateral, frontal eye fields), medial frontal cortex (anterior cingulate, pre-SMA, supplementary eye fields), and posterior parietal cortex. Subcortically, dorsomedial thalamus and superior colliculus are involved, among others.

Many computational modelers emphasize the emergence of attention from the local organization of sensory cortex (e.g., local competition). However, when a shift in attention is task-driven (i.e., top-down) then it appears that a specialized system for attentional control drives activity in sensory cortex. Many properties of attention likely arise from the organization of sensory cortex, but empirical data indicate that this is not sufficient.

With the advent of neuroimaging in humans (PET and fMRI), Posner et al. found very similar regions to those reported by Goldman-Rakic. They found that some regions are related more to orienting to stimuli, while others are related more to cognitive control (i.e., controlled processing).

After many fMRI studies of cognitive control were published, Wager et al. performed a meta-analysis looking at most of this research. They found a set of cortical regions active in nearly all cognitive control tasks.

My own work with Schneider (in press) indicates that these regions form an innate network, which is better connected than the rest of cortex on average. We used resting state correlations of fMRI BOLD activity to determine this. This cognitive control network is involved in controlled processing in that it has greater activity early in practice relative to late in practice, and has greater activity for conflicting responses (e.g., the Stroop task).
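As a rough illustration of the resting-state logic (in Python, with random stand-in data rather than our actual fMRI time series), one can correlate each pair of regional BOLD time courses and average each region's correlations to ask which regions are best connected:

```python
import numpy as np

# Sketch of resting-state connectivity analysis: correlate every pair of
# regional BOLD time courses, then average each region's correlations.
# Random numbers stand in for real fMRI data, so the values are near zero;
# with real data, control-network regions would show elevated connectivity.

rng = np.random.default_rng(0)
n_regions, n_timepoints = 10, 300
bold = rng.standard_normal((n_regions, n_timepoints))   # fake BOLD signals

corr = np.corrcoef(bold)                 # region-by-region correlation matrix
np.fill_diagonal(corr, np.nan)           # ignore self-correlations
connectivity = np.nanmean(corr, axis=1)  # mean connectivity of each region
print(connectivity.round(3))
```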

Though these regions have similar responses, they are not redundant. Our study showed that lateral prefrontal cortex is involved in maintaining relevant task information, while medial prefrontal cortex is involved in preparing and making response decisions. In most cases these two cognitive demands are invoked at the same time; only by separating them in time were we able to show specialization within the cognitive control network. We expect that other regional specializations will be found with more work.

I'll be covering my latest study in more detail once it is published (it has been accepted for publication at NeuroImage and should be published soon). The above figure is from that publication. It lists the six regions within the human cognitive control network. These regions include dorsolateral prefrontal cortex (DLPFC), inferior frontal junction (IFJ), dorsal pre-motor cortex (dPMC), anterior cingulate / pre-supplementary motor area (ACC/pSMA), anterior insula cortex (AIC), and posterior parietal cortex (PPC).

A general computational insight arising from this work (starting with Goldman-Rakic) is that cortex is composed of specialized regions that form specialized networks. This new paradigm for viewing brain function weds the old warring concepts of localized specialization and distributed function.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 23

Tuesday, April 24th, 2007

23) Motor cortex is organized by movement direction (Schwartz  & Georgopoulos – 1986, Schwartz – 2001)

Penfield had shown that motor cortex is organized in a somatotopic map. However, it was unclear how individual neurons are organized. What does each neuron's activity represent? The final location of a movement, or the direction of that movement?

Schwartz & Georgopoulos found that movement direction correlated best with neural activity. Further, they discovered that the neurons use a population code to specify a given movement. Thus, as illustrated above, a single neuron responds to a variety of movement directions but has one preferred direction (PD).

The preferred direction code across a large population of neurons thus sums to specify each movement.
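Here is a minimal sketch of that population-vector computation (in Python; the cosine tuning and cell count are textbook simplifications, and the numbers are made up): each cell's firing rate falls off with the angle between the movement and its PD, and summing the PD vectors weighted by firing rates recovers the movement direction.

```python
import numpy as np

# Population vector sketch: each neuron's rate is a (rectified) cosine of
# the angle between the movement direction and its preferred direction
# (PD). The rate-weighted sum of PD vectors recovers the movement.

rng = np.random.default_rng(0)
n_cells = 100
pds = rng.uniform(0, 2 * np.pi, n_cells)   # random preferred directions
movement = np.deg2rad(60)                  # true movement direction

rates = np.clip(np.cos(movement - pds), 0, None)  # cosine tuning, no negatives

# Weighted vector sum of preferred directions = the population vector
pop_x = np.sum(rates * np.cos(pds))
pop_y = np.sum(rates * np.sin(pds))
# Decoded angle lands close to the true 60 degrees
print("decoded direction (deg):", round(np.degrees(np.arctan2(pop_y, pop_x)), 1))
```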

Schwartz has since demonstrated how these population vectors can be interpreted by a computer to control a prosthetic arm. He has used this to imbue monkeys with Jedi powers: the ability to move a prosthetic limb, whether in another room or attached to them, with only a thought. Using this technology, the Schwartz team hopes to allow human amputees to control artificial limbs as they once did their own.

A general computational insight one might take from the Schwartz & Georgopoulos work is the possibility of population coding across cortex. It appears that all perception, semantics, and action may be coded as distributed population vectors.

Representational specificity within these vectors likely arises from conjunctions of receptive fields, and is dominated by those receptive fields most specific to each representation.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 20

Saturday, April 21st, 2007

Spike-timing dependent plasticity (Bi & Poo 1998; Markram et al. 1997)

20) Spike-timing dependent plasticity: Getting the brain from correlation to causation (Levy – 1983, Sakmann – 1994, Bi & Poo – 1998, Dan – 2002)

Hebb's original proposal was worded as such: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." [emphasis added]

The phrase "takes part in firing" implies causation of B's activity via A's activity, not simply a correlation of the two.

There are several ways to go beyond correlation to infer causation. One method is to observe that one event (e.g., cell A's activity) comes just before the caused event (e.g., cell B's activity).

In 1983 Levy showed with hippocampal slices that electrically stimulating cell A to fire before cell B will cause long-lasting strengthening of the synapse from cell A to cell B.  However, when the opposite occurs, and cell A is made to fire after cell B, there is depotentiation of the same synapse. In other words, timing is essential for synaptic learning. Today, this form of learning is called spike-timing dependent plasticity (STDP).
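A minimal sketch of the STDP rule as it is commonly formalized (exponential windows; the amplitudes and time constant here are illustrative, not fitted to Bi & Poo's data): pre-before-post timing potentiates the synapse, post-before-pre timing depresses it, and the effect fades as the spikes get further apart.

```python
import numpy as np

# Common exponential formalization of STDP (parameters illustrative).
# dt = t_post - t_pre, in ms. Pre-before-post (dt > 0) potentiates;
# post-before-pre (dt < 0) depresses; the effect decays with |dt|.

def stdp(dt_ms, a_plus=0.010, a_minus=0.012, tau_ms=10.0):
    """Return the weight change for a given spike-timing offset."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)    # potentiation
    return -a_minus * np.exp(dt_ms / tau_ms)       # depression

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {stdp(dt):+.5f}")
```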

Using this rule, Levy has created a variety of neural network models aimed at understanding  memory in the brain (e.g., especially in the hippocampus; see this paper for a short review). 

More recently, other researchers including Sakmann, Bi, Poo, and Dan have further characterized this phenomenon. They showed that it occurs in vivo, within a specific time window (~8 msec timing difference is optimal), in neocortex, and (using behavioral evidence) in humans.

Figure caption: A) Figure from Bi & Poo (1998) showing the effects of STDP in potentiation and depotentiation with optimal results ~8-10ms in either direction. B) Figure from Markram et al. (1997) showing the timing of the stimulation relative to the post-synaptic cell's EPSP. C) Another figure from Markram et al. (1997) showing the resulting long-term changes in synaptic efficacy due to the manipulations in figure B.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by timing-dependent correlated activity. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g.,short-term: prefrontal, long-term: temporal),which depend on inter-regional communication for functional integration.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC & PL

History’s Top Brain Computation Insights: Day 19

Friday, April 20th, 2007

Center-surround organization used in SOMs

19) Neural networks can self-organize via competition (Grossberg – 1978, Kohonen – 1981)

Hubel and Wiesel's work on the development of cortical columns (see previous post) hinted at the role of competition, but it wasn't until Grossberg and Kohonen built computational architectures explicitly exploring competition that its importance was made clear.

Grossberg was the first to illustrate the possibility of self-organization via competition. Several years later Kohonen created what is now termed a Kohonen network, or self-organizing map (SOM). This kind of network is composed of layers of neuron-like units connected with local excitation and, just outside that excitation, local inhibition. The above figure illustrates this 'Mexican hat' function in three dimensions, while the figure below represents it in two dimensions along with its inputs.

These networks, which implement Hebbian learning, will spontaneously organize into topographic maps.

For instance, line orientations that are similar to each other will tend to be represented by nearby neural units, while less similar line orientations will tend to be represented by more distant neural units. This occurs even when the map starts out with random synaptic weights. Also, this spontaneous organization will occur for even very complex stimuli (e.g., faces) as long as there are spatio-temporal regularities in the inputs.

Another interesting feature of Kohonen networks is that the more frequent input patterns are represented by larger areas in the map. This is consistent with findings in cortex, where more frequently used representations have larger cortical areas dedicated to them.
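Here is a minimal self-organizing map sketch (in Python; a 1-D map on toy 2-D inputs, with the standard Gaussian-neighborhood shortcut standing in for the full Mexican-hat interaction, and made-up parameters). Each input activates a winning unit, and the winner and its map neighbors move toward the input; that alone is enough to produce topographic order:

```python
import numpy as np

# Minimal 1-D Kohonen map trained on toy 2-D inputs. The winning unit and
# its map neighbors move toward each input; over many samples the weights
# order themselves topographically. A Gaussian neighborhood stands in for
# the Mexican-hat interaction, as is standard in SOMs. Parameters made up.

rng = np.random.default_rng(1)
n_units = 20
weights = rng.uniform(size=(n_units, 2))        # random initial weights

for step in range(2000):
    x = rng.uniform(size=2)                     # random input pattern
    winner = np.argmin(np.sum((weights - x) ** 2, axis=1))  # competition
    dist = np.abs(np.arange(n_units) - winner)  # distance on the map
    sigma = 3.0 * np.exp(-step / 1000)          # shrinking neighborhood
    lr = 0.5 * np.exp(-step / 1000)             # decaying learning rate
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))   # neighborhood function
    weights += lr * h[:, None] * (x - weights)  # move winner & neighbors

# Neighboring units now have similar weights: a topographic map
print(np.round(weights, 2))
```

Neighboring units end up with similar weight vectors, which is the toy analogue of nearby cortical columns preferring similar line orientations.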

There are several computational advantages to having local competition between similar stimuli, which SOMs can provide.

One such advantage is that local competition can increase specificity of the representation by ruling out close alternatives via lateral inhibition. Using this computational trick, the retina can discern visual details better at the edges of objects (due to contrast enhancement).

Another computational advantage is enhancement of what's behaviorally important relative to what isn't. This works on a short time-scale with attention (what's not important is inhibited), and on a longer time-scale with increases in representational space in the map with repeated use, which increases representational resolution (e.g., the hand representation in the somatosensory homunculus).

You can explore SOMs using Topographica, a computational modeling environment for simulating topographic maps in cortex. Of special interest here is the SOM tutorial available at topographica.org.


Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g.,short-term: prefrontal, long-term: temporal),which depend on inter-regional communication for functional integration.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 14

Sunday, April 15th, 2007

Neuron staining illustrating column structure in cortex

14) Neocortex is composed of columnar functional units (Mountcastle – 1957, Hubel & Wiesel – 1962)

Mountcastle found that nearby neurons in monkey somatosensory cortex tend to activate for similar sensory experiences. For example, a neuron might respond best to a vibration of the right index finger tip, while a neuron slightly deeper in might respond best to a vibration of the middle of that finger.

The neurons with these similar 'receptive fields' are organized vertically in cortical columns. Mountcastle distinguished between mini-columns, the basic functional unit of cortex, and hyper-columns, which are functional aggregates of about 100 mini-columns.

Hubel & Wiesel expanded Mountcastle's findings to visual cortex, discovering mini-columns showing line orientation selectivity and hyper-columns showing ocular dominance (i.e., receptive fields for one eye and not the other). The figure below illustrates a typical spatial organization of orientation columns in occipital cortex (viewed from above), along with the line orientations corresponding to each color patch.

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. The cortex, a part of that organ composed of functional column units whose spatial dedication determines representational resolution, is involved in perception (e.g., touch: parietal lobe, vision: occipital lobe), action (e.g., frontal lobe), and memory (e.g., temporal lobe).

Orientation selective cortical columns 

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries.]

-MC

History’s Top Brain Computation Insights: Day 13

Saturday, April 14th, 2007

A small man whose proportions represent the amount of cortical space dedicated to each body part

13) Larger cortical space is correlated with greater representational resolution; memories are stored in cortex (Penfield – 1957)

Prior to performing surgery, Wilder Penfield electrically stimulated epileptic patients' brains while they were awake. He found the motor and somatosensory strips along the central sulcus, just as had been found in dogs by Fritsch & Hitzig (see previous post). The amount of cortex dedicated to a particular part of the body was proportional not to that body part's size, but to its fine control (for motor cortex) or its sensitivity (for somatosensory cortex).

Thus, the hands and lips have very large cortical spaces relative to their size, while the back has a very small cortical space relative to its size. The graphical representation of this (see above) is called the 'homunculus', or 'little man'.

Penfield also found that stimulating portions of the temporal lobe caused the patients to vividly recall old memories, suggesting that memories are transferred from the archicortical hippocampus into the neocortical temporal lobes over time.

Implication:  The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. The cortex, a part of that organ whose spatial dedication determines representational resolution, is involved in perception (e.g., touch: parietal lobe, vision: occipital lobe), action (e.g., frontal lobe), and memory (e.g., temporal lobe).

 

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries.]

-MC

History’s Top Brain Computation Insights: Day 11

Thursday, April 12th, 2007

Neuron showing sodium and potassium concentration changes

11) Action potentials, the electrical events underlying brain communication, are governed by ion concentrations and voltage differences mediated by ion channels (Hodgkin & Huxley – 1952)

Hodgkin & Huxley developed the voltage clamp, which allows ion concentrations in a neuron to be measured with the voltage constant. Using this device, they demonstrated changes in ion permeability at different voltages. Their mathematical model of neuron function, based on the squid giant axon, postulated the existence of ion channels governing the action potential (the basic electrical signal of neurons). Their model has been verified, and is amazingly consistent across brain areas and species.
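To get a feel for the model, here is a compact sketch of the Hodgkin & Huxley equations (in Python, with the standard squid-axon parameters and simple Euler integration; a bare-bones illustration, not a replacement for a proper simulator). A constant current step evokes repetitive action potentials:

```python
import numpy as np

# Minimal Hodgkin-Huxley simulation (standard squid-axon parameters,
# Euler integration). A constant current step evokes repetitive spiking.

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # capacitance, max conductances
ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials (mV)

# Voltage-dependent gating rate functions (V in mV, rates in 1/ms)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                        # timestep and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32       # resting initial conditions
spikes, above = 0, False
for t in np.arange(0, T, dt):
    I = 10.0 if t > 5 else 0.0            # current step (uA/cm^2) at 5 ms
    INa = gNa * m**3 * h * (V - ENa)      # sodium current
    IK = gK * n**4 * (V - EK)             # potassium current
    IL = gL * (V - EL)                    # leak current
    V += dt * (I - INa - IK - IL) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V > 0 and not above:
        spikes += 1                       # count upward threshold crossings
    above = V > 0
print(f"{spikes} action potentials in {T} ms")
```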

You can explore the Hodgkin & Huxley model by downloading Dave Touretzky's HHSim, a computational model implementing the Hodgkin & Huxley equations.

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 10

Wednesday, April 11th, 2007

Hebbian reverberatory cell assembly

10) The Hebbian learning rule: 'Neurons that fire together wire together' [plus corollaries] (Hebb – 1949)

D. O. Hebb's most famous idea, that neurons with correlated activity increase their synaptic connection strength, was based on the more general concept of association of correlated ideas by philosopher David Hume (1739) and others. Hebb expanded on this by postulating the 'cell assembly', in which networks of neurons representing features associate to form distributed chains of percepts, actions, and/or concepts.

Hebb, who was a student of Lashley (see previous post), followed in the tradition of distributed processing (discounting localizationist views).

The above figure illustrates Hebb's most original hypothesis (which is yet to be proven): the reverberatory cell assembly formed via correlated activity. Hebb theorized that increasing connection strength due to correlated activity would cause chains of association to form, some of which could maintain subsequent activation for some period of time as a form of short-term memory (due to autoassociation).
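Here is a minimal sketch of the Hebbian rule and the autoassociative recall it makes possible (in Python; binary units and an outer-product weight update, essentially a one-pattern Hopfield-style memory, with all sizes and patterns made up):

```python
import numpy as np

# Hebbian outer-product learning plus one step of autoassociative recall:
# a toy, Hopfield-style rendering of Hebb's cell assembly idea.
# Network size and patterns are made up for illustration.

rng = np.random.default_rng(2)
n = 50
pattern = rng.choice([-1, 1], size=n)    # a stored activity pattern

W = np.outer(pattern, pattern) / n       # "fire together, wire together"
np.fill_diagonal(W, 0)                   # no self-connections

cue = pattern.copy()
cue[:15] *= -1                           # corrupt part of the pattern
recalled = np.sign(W @ cue)              # one step of recurrent recall
print("overlap with stored pattern:", float(recalled @ pattern) / n)
```

The corrupted cue is pulled back to the stored pattern, which is the kind of autoassociative completion Hebb's assemblies were meant to provide.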

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via convergent and divergent synaptic connections strengthened by correlated activity.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC