History’s Top Brain Computation Insights: Day 14

Neuron staining illustrating column structure in cortex

14) Neocortex is composed of columnar functional units (Mountcastle – 1957, Hubel & Wiesel – 1962)

Mountcastle found that nearby neurons in monkey somatosensory cortex tend to activate for similar sensory experiences. For example, a neuron might respond best to a vibration of the right index finger tip, while a neuron slightly deeper in might respond best to a vibration of the middle of that finger.

The neurons with these similar 'receptive fields' are organized vertically in cortical columns. Mountcastle distinguished between mini-columns, the basic functional unit of cortex, and hyper-columns, which are functional aggregates of about 100 mini-columns.

Hubel & Wiesel expanded Mountcastle's findings to visual cortex, discovering mini-columns showing line orientation selectivity and hyper-columns showing ocular dominance (i.e., receptive fields for one eye and not the other). The figure below illustrates a typical spatial organization of orientation columns in occipital cortex (viewed from above), along with the line orientations corresponding to each color patch.

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. The cortex, a part of that organ composed of functional column units whose spatial dedication determines representational resolution, is involved in perception (e.g., touch: parietal lobe, vision: occipital lobe), action (e.g., frontal lobe), and memory (e.g., temporal lobe).

Orientation selective cortical columns 

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries.]

-MC

History’s Top Brain Computation Insights: Day 13

A small man whose proportions represent the amount of cortical space dedicated to each body part

13) Larger cortical space is correlated with greater representational resolution; memories are stored in cortex (Penfield – 1957)

Prior to performing surgery, Wilder Penfield electrically stimulated epileptic patients' brains while they were awake. He found the motor and somatosensory strips along the central sulcus, just as Fritsch & Hitzig had found in dogs (see previous post). The amount of cortex dedicated to a particular part of the body was proportional not to that body part's size, but to its fine control (for motor cortex) or its sensitivity (for somatosensory cortex).

Thus, the hands and lips have very large cortical spaces relative to their size, while the back has a very small cortical space relative to its size. The graphical representation of this (see above) is called the 'homunculus', or 'little man'.

Penfield also found that stimulating portions of the temporal lobe caused the patients to vividly recall old memories, suggesting that memories are transferred from the archicortical hippocampus into neocortical temporal lobes over time.

Implication:  The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. The cortex, a part of that organ whose spatial dedication determines representational resolution, is involved in perception (e.g., touch: parietal lobe, vision: occipital lobe), action (e.g., frontal lobe), and memory (e.g., temporal lobe).

 

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries.]

-MC

History’s Top Brain Computation Insights: Day 12

Transverse section showing a superior view of the hippocampus of each hemisphere

12) Hippocampus is necessary for episodic memory formation (Milner – 1953)

Patient H.M. had terrible epilepsy originating in the medial temporal lobes. His neurosurgeon decided to remove the source of the epilepsy: the medial temporal lobes, including the hippocampus. Surprisingly, after the operation H.M. could no longer form new long-term memories. He could remember things for short time periods if he wasn't distracted (i.e., he had near normal working memory). Also, he could learn new sensory-motor skills, though he could not recall how he learned them.

Patient H.M. is still alive today and has formed no new episodic memories since the early 1950s. He still thinks he's in his twenties, and meets his doctors anew each day.

Some have compared his experience with that dramatized in the movie Memento.

Implication:  The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. A part of that organ, the medial temporal lobe, is essential for memory formation.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 11

Neuron showing sodium and potassium concentration changes

11) Action potentials, the electrical events underlying brain communication, are governed by ion concentrations and voltage differences mediated by ion channels (Hodgkin & Huxley – 1952)

Hodgkin & Huxley developed the voltage clamp, which allows the currents flowing across a neuron's membrane to be measured while the membrane voltage is held constant. Using this device, they demonstrated changes in ion permeability at different voltages. Their mathematical model of neuron function, based on the squid giant axon, postulated the existence of ion channels governing the action potential (the basic electrical signal of neurons). Their model has since been verified, and it is remarkably consistent across brain areas and species.

You can explore the Hodgkin & Huxley model by downloading Dave Touretzky's HHSim, a computational model implementing the Hodgkin & Huxley equations.
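Alternatively, to get a feel for the model in a few lines of code, here is a minimal Python sketch of the Hodgkin & Huxley equations integrated with Euler's method. It uses the standard squid-axon parameters in the modern ~-65 mV resting convention; the injected current, time step, and duration are illustrative choices, not values taken from the original paper.

```python
import math

# Standard Hodgkin-Huxley parameters (squid giant axon).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4             # reversal potentials (mV)

def rates(V):
    """Voltage-dependent opening (alpha) and closing (beta) rates for the m, h, n gates."""
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

dt, T, I_ext = 0.01, 50.0, 10.0                 # ms, ms, uA/cm^2 (illustrative)
V, m, h, n = -65.0, 0.05, 0.6, 0.32             # approximate resting state
peak = V
for _ in range(int(T / dt)):
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
    # Ionic currents through the sodium, potassium, and leak channels.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m  # membrane equation
    m += dt * (a_m * (1.0 - m) - b_m * m)       # gating-variable kinetics
    h += dt * (a_h * (1.0 - h) - b_h * h)
    n += dt * (a_n * (1.0 - n) - b_n * n)
    peak = max(peak, V)

print("peak membrane potential: %.1f mV" % peak)  # spikes overshoot 0 mV
```

With a constant injected current the membrane produces the characteristic all-or-none spikes, which is the behavior HHSim lets you explore interactively.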

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 10

Hebbian reverberatory cell assembly

10) The Hebbian learning rule: 'Neurons that fire together wire together' [plus corollaries] (Hebb – 1949)

D. O. Hebb's most famous idea, that neurons with correlated activity increase their synaptic connection strength, was based on the more general concept of association of correlated ideas by philosopher David Hume (1739) and others. Hebb expanded on this by postulating the 'cell assembly', in which networks of neurons representing features associate to form distributed chains of percepts, actions, and/or concepts.

Hebb, who was a student of Lashley (see previous post), followed in the tradition of distributed processing (discounting localizationist views).

The above figure illustrates Hebb's most original hypothesis (which is yet to be proven): the reverberatory cell assembly formed via correlated activity. Hebb theorized that increasing connection strength due to correlated activity would cause chains of association to form, some of which could maintain subsequent activation for some period of time as a form of short-term memory (due to autoassociation).
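To make the rule and the assembly idea concrete, here is a toy Python sketch (the pattern, network size, and update scheme are all illustrative) in which a single activity pattern stored by an outer-product Hebbian rule is later recalled from a degraded cue through the learned connections, a simple form of autoassociation.

```python
import numpy as np

# A toy "cell assembly": store one binary pattern with a Hebbian
# outer-product rule, then show that a partial cue reactivates the
# whole pattern (autoassociation). All values are illustrative.
pattern = np.array([1, 1, 1, -1, -1, -1])   # +1 = active, -1 = silent

# Hebbian rule: a weight grows when pre- and post-synaptic units agree.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                    # no self-connections

cue = np.array([1, 1, -1, -1, -1, -1])      # degraded version of the pattern

state = cue.copy()
for _ in range(5):                          # let activity reverberate
    state = np.sign(W @ state)
    state[state == 0] = 1

print("recalled:", state)                   # matches the stored pattern
```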

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via convergent and divergent synaptic connections strengthened by correlated activity.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 9

A network that can compute the XOR logic gate

9) Convergence and divergence between layers of neural units can perform abstract computations (Pitts & McCulloch – 1947)

Pitts & McCulloch created the first artificial neurons and artificial neural network. In 1943 they showed that computational processing could be performed by a series of convergent and divergent connections among neuron-like units. In 1947 they demonstrated that such computations could lead to visual constancy, in which a network could recognize visual inputs despite changes in orientation or size. This computation is relevant for many topics besides vision.

More profound than the visual constancy network was the proof of concept it represented. As illustrated in the above figure, multi-layered perceptrons (as networks of converging and diverging neuron-like units came to be known) can compute logical functions such as AND, OR, and XOR.
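To make the proof of concept concrete, here is a small Python sketch of such a network, built from hand-wired McCulloch-Pitts threshold units; the particular weights and thresholds are just one illustrative choice that computes XOR.

```python
# Hand-wired two-layer network of McCulloch-Pitts threshold units computing XOR.
def unit(inputs, weights, threshold):
    """Binary threshold neuron: fires (1) if the weighted input reaches threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(x1, x2):
    # Hidden layer: "x1 AND NOT x2" and "x2 AND NOT x1".
    h1 = unit([x1, x2], [1, -1], 1)
    h2 = unit([x1, x2], [-1, 1], 1)
    # Output layer: OR of the two hidden units.
    return unit([h1, h2], [1, 1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

Neither hidden unit computes XOR on its own; it is their convergence onto the output unit that performs the abstract computation.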

This insight provided the first clear glimpse of how actual computation might be carried out in the brain via the many convergent and divergent connections already found in its anatomical projections.

Implication:  The mind, largely governed by reward-seeking behavior, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via convergent and divergent synaptic connections.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 8

A dog being trained to jump on command over the course of 20 minutes

8) Reward-based reinforcement learning can explain much of behavior (Skinner – 1938, Thorndike – 1911, Pavlov – 1905)

B. F. Skinner showed that reward governs much of human and animal behavior. He discovered operant conditioning, a method for manipulating behavior so powerful he could teach a pigeon to bowl (or a dog to jump on command; see figure). This was an expansion of Thorndike's Law of Effect, which states that behaviors followed by a reward are reinforced, while behaviors followed by no reward are not. Both Skinner and Thorndike built on earlier work by Pavlov, who showed that some reflexes can be conditioned by paired stimulus presentation (e.g., repeatedly pairing a bell with food eventually causes salivation to the bell alone).
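As a toy illustration of the Law of Effect, the Python sketch below (the response names, reward assignments, and learning rate are invented for illustration) strengthens whichever response happens to be followed by reward, so that the rewarded response comes to dominate.

```python
import random

# Minimal sketch of the Law of Effect: an agent repeatedly emits one of
# two responses; the one that happens to be rewarded becomes more likely.
strength = {"lever_press": 1.0, "grooming": 1.0}   # response strengths
reward = {"lever_press": 1, "grooming": 0}          # only one response pays off

for trial in range(200):
    actions, weights = zip(*strength.items())
    action = random.choices(actions, weights=weights)[0]
    strength[action] += 0.1 * reward[action]         # reinforce rewarded behavior

print(strength)   # "lever_press" comes to dominate after training
```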

Implication: The mind, largely governed by reward-seeking behavior, is implemented in an organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via electro-chemical synaptic connections.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 7

The fluid from the donor heart, which was just stimulated, causes the recipient heart to slow down

7) Brain signals are chemical (Dale – 1914, Loewi – 1921)

Loewi found that electrically stimulating the vagus nerve of a frog's heart caused it to release a chemical substance that slowed the beating of a second heart exposed to the same fluid. Dale had already discovered neurotransmitters, one of which (acetylcholine) turned out to be the chemical responsible for the change in heart rate. This finding was eventually generalized to the rest of the nervous system.

Implication: The mind is implemented in an organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via electro-chemical synaptic connections. 

 A chemical synapse depicting neurotransmitters crossing between the axon and dendrite

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 6

6) Neural networks consist of excitatory and inhibitory neurons connected by synapses (Sherrington – 1906)

Based on his observations of the spinal cord, Sherrington theorized that the brain consists of complex networks of excitatory and inhibitory neurons connected at junctions he was the first to term 'synapses'. His theories turned out to be correct.

Sherrington's insight into the network nature of neural interactions is still at the cutting edge of neuroscience. Its ramifications are vast and complex, and we will be working to understand the implications of this insight for decades to come.

Sherrington himself was able to prove that opposing muscles on the limbs inhibited each other via network interaction in the spinal cord. Also, it was found that timing of muscle movements is controlled by negative feedback via local neuronal networks. These spinal cord networks were an essential proof of concept for Sherrington, but they pale in comparison to the complexity of network interactions recently discovered in cortex, basal ganglia, and cerebellum.

Implication: The mind is implemented in an electric organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via synaptic connections.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 5

Drawing of a Purkinje cell by Ramon y Cajal

5) Neurons are fundamental units of brain computation (Ramon y Cajal – 1889)

Golgi, a prominent 19th century biologist, argued that the brain is one unified reticulum (or web) of neural tissue, much like the circulatory system. However, Ramon y Cajal came to a very different conclusion using Golgi's own silver chromate staining technique. He argued that this tissue web was composed of separate cells. Later studies using the electron microscope showed that Ramon y Cajal was correct.

Some have argued that Golgi was partially correct since electrical synapses (gap junctions) exist in small number in the brain. However, even with gap junctions the cells' plasma membranes separate the two sides of the synapse, which Golgi's theory did not predict.

Looking carefully at his stained cells, Ramon y Cajal postulated that nerve signals travel in one direction, from the dendrites toward the axon. The dendrites (top), cell body (dark central spot), and axon (bottom) can be clearly distinguished in his drawing included above. He was unable to test this prediction himself, but he turned out to be correct once again.

Implication: The mind is implemented in an electric organ with distributed and modular function consisting of neural units.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC 

A Popular but Problematic Learning Rule: “Backpropagation of Error”

Backpropagation of Error (or "backprop") is the most commonly used neural network training algorithm. Although fundamentally different from the less common Hebbian-like mechanism mentioned in my last post, it similarly specifies how the weights between the units in a network should be changed in response to various patterns of activity. Since backprop is so popular in neural network modeling work, we thought it would be important to bring it up and discuss its relevance to cognitive and computational neuroscience. In this entry, I provide an overview of backprop and discuss what I think is the central problem with the algorithm from the perspective of neuroscience.

Backprop is most easily understood as an algorithm which allows a network to learn to map an input pattern to an output pattern.  The two patterns are represented on two different "layers" of units, and there are connection weights that allow activity to spread from the "input layer" to the "output layer."  This activity may spread through intervening units, in which case the network can be said to have a multi-layered architecture.  The intervening layers are typically called "hidden layers".
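To make this concrete, here is a minimal numpy sketch of backprop training a small input-hidden-output network on XOR; the layer sizes, learning rate, and epoch count are illustrative choices, not anything from the original post.

```python
import numpy as np

# Minimal backprop sketch: map input patterns to output patterns through a hidden layer.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input-layer patterns
y = np.array([[0], [1], [1], [0]], dtype=float)               # target output patterns

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # Forward pass: activity spreads from the input layer to the output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through the weights.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(np.round(out, 2))   # approaches [[0], [1], [1], [0]]
```

One commonly raised worry, which may or may not be the central problem discussed here, is that the backward pass requires error signals to travel backward across the very weights used in the forward pass, something biological synapses are not known to support.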


History’s Top Brain Computation Insights: Day 4

Primary sensory-motor cortical regions

4) Functions can be localized in the brain (Bouillaud – 1825, Broca – 1861, Fritsch & Hitzig – 1870)

Bouillaud and Broca discovered patients with frontal cortex lesions who had speech problems. Fritsch & Hitzig discovered primary motor cortex, a specialized region of cortex dedicated to motor control. Broca believed that all brain functions would eventually be localized, and held up the area he discovered (now termed Broca's area) as an example. It was later found that all sensory modalities have dedicated regions of cortex, called primary sensory areas.

This localized function insight immediately clashed with the distributed function insight (see yesterday's post). Flourens and his followers were adamantly against the localizationist view championed by Broca and others.

Localization of brain function was initially discredited by its origins in phrenology. Phrenology claimed that differences in localized brain function were reflected in bumps on individuals' skulls. Once it became clear that phrenological findings did not replicate across individuals, the practice was labeled a pseudoscience. Unfortunately, localization of function, even when applied without the fallacies of phrenology, was not spared this renewed skepticism.

Today localization of function is well established, especially in primary sensory-motor regions. Functions have also been localized within association cortex, though much work remains in understanding how such localization arises. Insights within the last 20 years have led to a more sophisticated view of how functions arise in association cortex involving network interactions (see future insight posts).

Implication: The mind is implemented in an electric organ with distributed and modular function.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]
-MC

History’s Top Brain Computation Insights: Day 3

3)  Functions are distributed in the brain (Flourens – 1824, Lashley – 1929)

Flourens found that it did not matter where he lesioned inside cortex; what mattered was how much he lesioned. This suggested that functions were equally distributed (the law of equipotentiality) and widely distributed (the law of mass action) across cortex. Lashley updated this research by acknowledging localized functions in primary sensory-motor cortex (see tomorrow's entry), labeling the rest of cortex 'association cortex'.

Modern research has shown that Flourens and Lashley just weren't looking hard enough. Equipotentiality does not hold true (i.e., there is specialization within association cortex), but there is a great deal of mass action in the form of distributed network interaction across association cortex.

Implication: The mind is implemented in an electric organ with distributed function.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description)]

-MC

History’s Top Brain Computation Insights: Day 2

This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description).

2)  Brain signals are electrical (Galvani – 1791, Rolando – 1809)

Galvani (whose name inspired the word 'galvanize') discovered that electrically shocking frog nerves made their muscles move. Rolando used this same method to stimulate cortex in the brain.

As has often occurred in history, the latest and most mysterious new technology was used to explain brain function. Here, at the turn of the 19th century, the brain was "explained" as a mysterious electrical device. Later it would be "explained" as a telegraph, with its many communicators (neurons) and wires (axons). Then it would be described as a telephone network, with switchboards acting to control the flow of information. Eventually, with the cognitive revolution, it would be compared to a computer. Finally, today, its complex and distributed network architecture is compared to the internet.

Though each new comparison is always premature and inadequate, new insights nonetheless seem to derive from each analogy. What new technology of tomorrow will be used to describe brain function?

Implication: The mind is implemented in an electrical machine-like organ.

-MC 

History’s Top Brain Computation Insights: Day 1

It is hard to maintain historical perspective as neuroscience progresses. Today's complications and confusions seem to cloud the clear insights of the past. This is inevitable when trying to understand the brain, the most complex computational device known.

The plan here is to highlight history's major brain computational insights in the interest of integrating them into a single description of what we know to date.

This description will concisely summarize what science has learned about brain computation. Neuroscientists, artificial intelligence researchers, psychologists, and many others will hopefully find it helpful for gaining perspective and integrating concepts important for their research. Laymen will likely find it useful for learning the major findings in cognitive neuroscience.

The description will necessarily be biased by my own perspective, and will overlook many important contributions in favor of those that most clearly add to our understanding of how the brain computes behavior. Feel free to modify/add insights in the comments section.

The plan: One insight per day (in historically chronological order) for the next month, culminating in a single large post listing and integrating them all.

We start with a very simple, yet extremely important, insight… 

1) The brain computes the mind (Hippocrates – 460-379 B.C.)

It is ironic that some still wonder if there is anything more to the mind than the brain when a man at the dawn of civilization and science could figure it out.

"Men ought to know that from nothing else but thence [from the brain] come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations." – Hippocrates
One needs only to see a few brain damaged patients, try a drug (like alcohol), or see someone knocked out from a head/brain trauma to be convinced.

Implication: The mind is implemented in a biological organ.

-MC