Several months ago I posted The Will to be Free, Part I. In that post I explained that memory is the key to free will. However, that insight isn't quite satisfactory on its own. We need three additional things to complete the picture: the ability to choose based on predictions, internal desires, and self-awareness.

(A quick disclaimer: These ideas are all extremely speculative. I'll probably test most of them at some point, but right now I'm just putting them out there in the hope of refining these hypotheses.)

First, the ability to choose based on predictions. As mentioned last time, free will comes down to decision making. Specifically, it comes down to our ability to make a decision based on internal sources (or at least condoned by them), rather than on external coercive forces. If we cannot predict the outcome of our decision with any certainty, then decision making is pointless. For instance, if a random dish is served no matter what I order at dinner, then I had no freedom to choose in the first place. Thus, our ability to predict is necessary for free will.

What are these "internal sources" involved in decision making that I mentioned earlier? They are the second new idea needed to complete our picture of free will: desires.
I recently published my first primary-author research study (Cole & Schneider, 2007).
The study used functional MRI to discover a network of brain regions responsible for conscious will (i.e., cognitive control). It also revealed the network’s specialized parts, which each uniquely contribute to creating the emergent property of conscious will.
I believe this research contributes substantially to our understanding of how we control our own thoughts and actions based on current goals. Much remains a mystery, but this study clearly shows the existence of a functionally integrated yet specialized network for cognitive control.
What is cognitive control? It is the set of brain processes necessary for goal-directed thought and action. Remembering a phone number before dialing requires cognitive control. Also, anything outside routine requires cognitive control (because it’s novel and/or conflicting with what you normally do). This includes, among other things, voluntarily shifting attention and making decisions.
What brain regions are involved? A mountain of evidence is accumulating that a common set of brain regions is involved in cognitive control. We looked for these regions specifically, and verified that they were active during our experiment [see top figure]. The brain regions are spread across the cortex, from front to back and on both sides. However, it's not the whole brain: distinct parts are involved in cognitive control but not in other behavioral demands.
Here’s an exchange of emails between PL and MC on a recently published paper (Balleine et al., 2007).
Email 1 (from PL):
Have a look at this introductory paragraph from a recent (Aug 2007) J Neurosci article by Balleine, Delgado and Hikosaka. What do they mean by “cognition” here?
The Role of the Dorsal Striatum in Reward and Decision-Making
To choose appropriately between distinct courses of action requires the ability to integrate an estimate of the causal relationship between an action and its consequences, or outcome, with the value, or utility, of the outcome. Any attempt to base decision-making solely on cognition fails fully to determine action selection because any information, such as “action A leads to outcome O,” can be used both to perform A and to avoid performing A. It is interesting to note in this context that, although there is an extensive literature linking the cognitive control of executive functions specifically to the prefrontal cortex (Goldman-Rakic, 1995; Fuster, 2000), more recent studies suggest that these functions depend on reward-related circuitry linking prefrontal, premotor, and sensorimotor cortices with the striatum (Chang et al., 2002; Lauwereyns et al., 2002; Tanaka et al., 2006).
Email 2 (from MC):
It sounds like they are distinguishing cognition from reward processing. I’m not sure why, since ‘cognition’ typically encompasses reward processing nowadays.
The distinction I think they’re really trying to make is between cognitive control and reward processing. Given that, it’s still a ridiculous paragraph. Why must it be either cognitive control or reward processing? It’s likely (no, virtually certain!) that the two interact during reward-based decision making. For instance, O’Reilly’s stuff shows how this might happen.
Another problem with this paragraph: They equate causal knowledge with cognitive control. Well-known causal knowledge doesn’t involve cognitive control! For instance, routine decision making would involve lower perceptuo-motor circuits, and if it involved differential reward then reward circuits would be engaged as well. Cognitive control has little/no role here.
When cognitive control is involved it’s probably doing a lot more than just retrieving causal relations from semantic memory. For instance, perceptual decision making studies show that cognitive control is involved even in deciding what is being perceived when uncertainty arises.
I guess what they’re trying to do is show that cognitive control doesn’t explain all of decision making since there must be a reward component as well. Perhaps this is a good point to make; they just didn’t do it well.
Email 3 (from PL):
Ahhh, ok I think I see now what they’re trying to say. It really just struck me as an excessively divisive statement to start out what appeared to be an interesting article. Can you say “flamebait”? Perhaps they’re trying to be provocative.
– PL & MC
In the dark confines behind our eyes lies flesh full of mysterious patterns, constituting our hopes, desires, knowledge, and everything else fundamental to who we are. Since at least the time of Hippocrates we have wondered about the nature of this flesh and its functions. Finally, after thousands of years of wondering we are now able to observe the mysterious patterns of the living brain, with the help of neuroimaging.
First, electroencephalography (EEG) showed us that these brain patterns have some relation in time to our behaviors. EEG showed us when things happen in the brain. More recent technologies such as functional magnetic resonance imaging (fMRI) then showed us where things happen in the brain.
It has been suggested that true insights into these brain patterns will arise when we can understand the patterns’ complex spatio-temporal nature. Thus, only with sufficient spatial and temporal resolution will we be able to decipher the mechanisms behind the brain patterns, and as a result the mechanisms behind ourselves.
Magnetoencephalography (MEG) may help to provide such insight. This method uses superconducting sensors to detect subtle changes in the magnetic fields surrounding the head. These changes reflect the patterns of neural activity as they occur in the brain. Unlike fMRI (and similar methods), MEG can measure neural activity at a very high temporal resolution (>1 kHz). In this respect it is similar to EEG. However, unlike EEG, MEG patterns are not distorted by the skull and scalp, thus providing an unprecedented level of spatio-temporal resolution for observing the neural activity underlying our selves.
Although MEG has been around for several decades, new advances in the technology are providing unprecedented abilities to observe brain activity. Of course, the method is not perfect by any means. As always, it is complementary to other methods, and should be used in conjunction with other noninvasive (and, where appropriate, invasive) neuroimaging techniques.
MEG relies on something called a superconducting quantum interference device (SQUID). Many of these SQUIDs are built into a helmet, which is cooled with liquid helium and placed around the head. Extremely small magnetic fields created by neural activity can then be detected with these SQUIDs and recorded to a computer for later analysis.
I recently got back from a trip to Finland, where I learned a great deal about MEG. I’m planning to use the method to observe the flow of information among brain regions during cognitive control tasks involving decision making, learning, and memory. I’m sure news of my work in this area will eventually make it onto this website.
In 1992 Rizzolatti and his colleagues found a special kind of neuron in the premotor cortex of monkeys (Di Pellegrino et al., 1992).
These neurons, which respond to an action whether the monkey performs it itself or watches another monkey (or a person) perform it, are called mirror neurons.
Many neuroscientists, such as V. S. Ramachandran, have seized upon mirror neurons as a potential explanatory 'holy grail' of human capabilities such as imitation, empathy, and language. However, to date there are no adequate models explaining exactly how such neurons would provide such amazing capabilities.
Perhaps related to the lack of any clear functional model, mirror neurons have another major problem: Their functional definition is too broad.
Typically, mirror neurons are defined as cells that respond selectively to an action both when the subject performs it and when that subject observes another performing it. A basic assumption is that any such neuron reflects a correspondence between self and other, and that such a correspondence can turn an observation into imitation (or empathy, or language).
However, there are several other reasons a neuron might respond both when an action is performed and observed.
First, there may be an abstract concept (e.g., open hand), which is involved in but not necessary for the action, the observation of the action, or any potential imitation of the action.
Next, there may be a purely sensory representation (e.g., of hands / objects opening) which becomes involved independently of action by an agent.
Finally, a neuron may respond to another subject's action not because it is performing a mapping between self and other but because the other's action is a cue to load up the same action plan. In this case the 'mirror' mapping is performed by another set of neurons, and this neuron is simply reflecting the action plan, regardless of where the idea to load that plan originated. For instance, a tasty piece of food may cause that neuron to fire because the same motor plan is loaded in anticipation of grasping it.
It is clear that mirror neurons, of the type first described by Rizzolatti et al., exist (how else could imitation occur?). However, the practical definition for these neurons is too broad.
How might we improve the definition of mirror neurons? Possibly by verifying that a given cell (or population of cells) responds only while observing a given action and while carrying out that same action.
Alternatively, subtractive methods may be more effective at defining mirror neurons than response properties. For instance, removing a mirror neuron population should make imitation less accurate or impossible. Using this kind of method avoids the possibility that a neuron could respond like a mirror neuron but not actually contribute to behavior thought to depend on mirror neurons.
Of course, the best approach would involve both observing response properties and using controlled lesions. Even better would be to do this with human mirror neurons using less invasive techniques (e.g., fMRI, MEG, TMS), since we are ultimately interested in how mirror neurons contribute to higher-level behaviors most developed in Homo sapiens, such as imitation, empathy, and language.
Image from The Phineas Gage Fan Club (originally from Ferrari et al. (2003)).
Everyday (spoken) language use involves the production and perception of sounds at a very fast rate. One of my favorite quotes on this subject is in "The Language Instinct" by Steven Pinker, on page 157.
"Even with heroic training [on a task], people could not recognize the sounds at a rate faster than good Morse code operators, about three units a second. Real speech, somehow, is perceived an order of magnitude faster: ten to fifteen phonemes per second for casual speech, twenty to thirty per second for the man in the late-night Veg-O-Matic ads […]. Given how the human auditory system works, this is almost unbelievable. […P]honemes cannot possibly be consecutive bits of sound."
One thing to point out is that there is a lot of context in language. At a high level, there is context from meaning which is constantly anticipated by the listener: meaning imposes restrictions on the possibilities of the upcoming words. At a lower level there's context from phonetics and co-articulation; for example, it turns out that the "l" in "led" sounds different from the "l" in "let", and this may give the listener a good idea of what's coming next.
Although this notion of context at multiple levels may sound difficult to implement in a computer program, the brain is fundamentally different from a computer. It's important to remember that the brain is a massively parallel processing machine, with millions upon millions of signal processing units (neurons).
(I think this concept of context and prediction is lost on more traditional linguists. On the following page of his book, Pinker misrepresents the computer program Dragon NaturallySpeaking by saying that you have to speak haltingly, one word at a time, to get it to recognize words. This is absolutely not the case: the software works by taking context into account, and performs best if you speak at a normal, continuous rate. Following the software's instructions often yields better recognition.)
Given that the brain is a massively parallel computer, it's really not difficult to imagine that predictions on several different timescales are taken into account during language comprehension. Various experiments from experimental psychology have indicated that this is, in fact, the case.
The study of the brain and how neural systems process language will be fundamental to advancing the field of theoretical linguistics — which thus far seems to be stuck in old ideas from early computer science.
Because language operates on such a rapid timescale, and involves so many different brain areas, there is a need to use multiple non-invasive (as well as possibly invasive) recording techniques, such as ERP, MEG, fMRI, and microelectrodes, to get at how language is perceived and produced.
In addition to recording from the brain, real-time measurements of behavior are important in assessing language perception. Two candidate behaviors come to mind: eye movements and changes in hand movements.
Eye movements are a really good candidate for tracking real-time language perception because they are so quick: you can move your eyes before a word has been completely said. Also, there has been some fascinating work done with continuous mouse movements towards various targets to measure participants' on-line predictions of what is about to be said. These kinds of experimental approaches promise to provide insight on how continuous speech signals are perceived.
After a bit of a hiatus, I'm back with the last three installments of "Grand Challenges in Neuroscience".
Topic 4: Time
Cognitive Science programs typically require students to take courses in Linguistics (as well as in the philosophy of language). Besides the obvious application of studying how the mind creates and uses language, an important reason for taking these courses is to realize the effects of using words to describe the mental, cognitive states of the mind.
In fact — after having taken courses on language and thought, it seems that it would be an interesting coincidence if the words in any particular language did map directly onto mental states or brain areas. (As an example, consider that the amygdala is popularly referred to as the "fear center".)
It seems more likely that mental states are translated on the fly into language, which only approximates their true nature. In this respect, I think it's important to realize that time may be composed of several distinct subcomponents, or time may play very different roles in distinct cognitive processes.
Time. As much as it is important to have an objective measure of time, it is equally important to have an understanding of our subjective experience of time. A number of experimental results have confirmed what has been known to humanity for some time: Time flies while you're having fun, but a watched pot never boils.
Time perception is strongly related to cognition, attention, and reward. The NSF committee proposed that understanding time is going to be integrative, involving brain regions whose function is still not understood at a "systems" level, such as the cerebellum, basal ganglia, and association cortex.
The NSF committee calls for the development of new paradigms for the study of time. I agree that this is critical. To me, one of the most important issues is the dissociation of reward from time (e.g., "time flies when you're having fun"): most tasks involving time perception in both human and non-human primates involve rewarding the participants.
In order to get a clearer read on the neurobiology of time perception and action, we need to observe neural representations that are not colored by the anticipation of reward.
Brain image from http://www.cs.princeton.edu/gfx/proj/sugcon/models/
Clock image from http://elginwatches.org/technical/watch_diagram.html
Hippocampus is involved in feature binding for novel stimuli (McClelland, McNaughton, & O'Reilly – 1995, Knight – 1996, Hasselmo – 2001, Ranganath & D'Esposito – 2001)
It was demonstrated by McClelland et al. that, based on its role in episodic memory encoding, the hippocampus can rapidly learn arbitrary associations.
This was in contrast to neocortex, which they showed learns slowly in order to develop better generalizations (knowledge not tied to a single episode). This theory was able to explain why patient H.M. knew (for example) about JFK's assassination even though he lost his hippocampus in the 1950s.
Robert Knight provided evidence for a special place for novelty in hippocampal function by showing a different electrical response to novel stimuli in patients with hippocampal damage.
These two findings together suggested that the hippocampus may be important for binding the features of novel stimuli, even over short periods.
This was finally verified by Hasselmo et al. and Ranganath & D'Esposito in 2001. They used functional MRI to show that a portion of the hippocampal formation is more active during working memory delays when novel stimuli are used.
This suggests that the hippocampus is not just important for long-term memory. Instead, it is important for short-term memory and perhaps novel perceptual binding in general.
Some recent evidence suggests that hippocampus may be important for imagining the future, possibly because binding of novel features is necessary to create a world that does not yet exist (for review see Schacter et al. 2007).
This image is of the other universe; the one outside our heads.
It depicts the “evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years”, as computed by the Millennium Simulation. (Click the image above for a better view.)
The next image, of a neuron, is included for comparison.
It is tempting to wax philosophical on this structure equivalence. How is it that both the external and internal universes can have such similar structure, and at such vastly different physical scales?
If we choose to go philosophical, we may as well ponder something even more fundamental: Why is it that all complex systems seem to have a similar underlying network-like structure?
Topic 3: Spatial Knowledge
Animal studies have shown that the hippocampus contains special cells called "place cells". These place cells are interesting because their activity seems to indicate not what the animal sees, but rather where the animal is in space as it runs around in a box or in a maze. (See the four cells in the image to the right.)
Further, when the animal goes to sleep, those cells tend to reactivate in the same order they did during wakefulness. This apparent retracing of the paths during sleep has been termed "hippocampal replay".
More recently, studies in humans — who have deep microelectrodes implanted to help detect the origin of epileptic seizures — have shown place-responsive cells. Place cells in these studies were found not only in the human hippocampus but also in nearby brain regions.
The computation which converts sequences of visual and other cues into a sense of "place" is a very interesting one that has not yet been fully explained. However, there do exist neural network models of the hippocampus that, when presented with sequences, exhibit place-cell like activity in some neurons.
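As a toy illustration of what such models produce (a generic Gaussian tuning-curve sketch, not any specific published hippocampal model), a place cell's activity can be approximated as a function that peaks at the cell's preferred location and falls off with distance:

```python
import math

def place_cell_rate(pos, center, width=0.1, max_rate=20.0):
    """Toy place cell: firing rate (Hz) follows a Gaussian
    tuning curve over 2-D position, peaking at the cell's
    preferred location ('center')."""
    d2 = (pos[0] - center[0]) ** 2 + (pos[1] - center[1]) ** 2
    return max_rate * math.exp(-d2 / (2 * width ** 2))

# The cell fires maximally when the animal is at the field center,
# and is nearly silent far away from it.
rate_at_center = place_cell_rate((0.5, 0.5), center=(0.5, 0.5))
rate_far_away = place_cell_rate((0.9, 0.9), center=(0.5, 0.5))
```

A population of such cells with scattered centers tiles the environment, so the pattern of activity across the population indicates where the animal is, mirroring the recordings described above.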
The notion of place cell might also extend beyond physical space. It has been speculated that computations occur to convert sequences of events and situations into a distinct sense of "now". And indeed, damage to the hippocampus has been found not only to impair spatial memory but also "episodic" memory, the psychological term for memory for distinct events.
How can we understand the ways in which we understand space? Understanding spatial knowledge seems more tangible than understanding the previous two topics in this series. It seems that researchers are already using some of the most effective methods to tackle the problem.
First, the use of microelectrodes throughout the brain while human participants play virtual taxi games and perform problem solving tasks promises insight into this question. Second, computational modeling of regions (e.g., the hippocampus) containing place cells should help us understand their properties and how they emerge. Finally, continued animal research and possibly manipulation of place cells in animals to influence decision making (e.g., in a T-maze task) may provide an understanding of how spatial knowledge is used on-line.
Generally, cognitive neuroscience aims to explain how mental processes such as believing, knowing, and inferring arise in the brain and affect behavior. Two behaviors that have important effects on the survival of humans are cooperation and conflict.
According to the NSF committee convened last year, conflict and cooperation is an important focus area for future cognitive neuroscience work. Although research in this area has typically been the domain of psychologists, it seems that the time is ripe to apply findings from neuroscience to ground psychological theories in the underlying biology.
Neuroscience has produced a large amount of information about the brain regions that are relevant to social interactions. For example, the amygdala has been shown to be involved in strong emotional responses. The "mirror" neuron system in the frontal lobe allows us to put ourselves in someone else's shoes by allowing us to understand their actions as though they were our own. Finally, the superior temporal gyrus and orbitofrontal cortex, normally involved in language and reward respectively, have also been shown to be involved in social behaviors.
The committee has left it up to us to come up with a way to study these phenomena! How can we study conflict and cooperation from a cognitive neuroscience perspective?
At least two general approaches come to mind. The first is fMRI studies in which social interactions are simulated (or carried out remotely) over a computer link to the experiment participant. A range of studies of this sort have recently begun to appear investigating trust and decision-making in social contexts.
The second general approach that comes to mind is that of using neurocomputational simulations of simple acting organisms with common or differing goals. Over the past few years, researchers have been carrying out studies with multiple interacting "agents" that "learn" through the method of Reinforcement Learning.
Reinforcement Learning is a class of artificial intelligence algorithms that allows "agents" to develop behaviors through trial and error in an attempt to meet some goal which provides reward in the form of positive numbers. Each agent is defined as a small program with state (e.g., location, sensory input) and a memory or "value function" which keeps track of how much numerical reward it expects to obtain by choosing a possible action.
Although normally thought to be of interest only to computer scientists, Reinforcement Learning has recently attracted the attention of cognitive neuroscientists because of emerging evidence that something like it might be used in the brain.
By providing these agents with a goal that can only be achieved through some measure of cooperation or under some pressure, issues of conflict and cooperation can be studied in a perfectly controlled computer simulation environment.
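As a minimal sketch of the kind of simulation described above (a made-up two-action coordination game, not any specific published study), each agent maintains a value estimate per action and updates it from the reward it receives:

```python
import random

class Agent:
    """Minimal reinforcement-learning agent: its 'value function'
    tracks the expected reward of each action and is nudged toward
    the rewards actually received (trial and error)."""
    def __init__(self, n_actions, lr=0.1, epsilon=0.1):
        self.values = [0.0] * n_actions
        self.lr = lr              # learning rate
        self.epsilon = epsilon    # probability of exploring randomly

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        # Move this action's value estimate toward the observed reward.
        self.values[action] += self.lr * (reward - self.values[action])

# A cooperative "game": both agents are rewarded only when they pick
# the same action, so coordination must emerge through learning alone.
a, b = Agent(2), Agent(2)
for _ in range(2000):
    ca, cb = a.choose(), b.choose()
    reward = 1.0 if ca == cb else 0.0
    a.learn(ca, reward)
    b.learn(cb, reward)
```

By changing the reward rule (e.g., rewarding one agent at the other's expense), the same framework turns a cooperation study into a conflict study, which is exactly what makes these simulations attractive as controlled experiments.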
Following up on MC's posts about the significant insights in the history of neuroscience, I'll now take Neurevolution for a short jaunt into neuroscience's potential future.
In light of recent advances in technologies and methodologies applicable to neuroscience research, the National Science Foundation last summer released a document on the "Grand Challenges of Neuroscience". These grand challenges were identified by a committee of leading members of the cognitive neuroscience community.
The document, available at http://www.nsf.gov/sbe/grand_chall.pdf, describes six domains of research the committee deemed to be important for progress in understanding the relationship between mind and brain.
Over the next few posts, I will discuss each of the research domains and explain in layperson's terms why these questions are interesting and worth pursuing. I'll also describe potential experimental approaches to address these questions in a cognitive neuroscience framework.
Topic 1: "Adaptive Plasticity"
One research topic brought up by the committee was that of adaptive plasticity. In this context, plasticity refers to the idea that the connections in the brain, and the behavior governed by the brain, can be changed through experience and learning.
Learning allows us to adapt to new circumstances and environments. Arguably, understanding how we learn and how to improve learning could be one of the greatest contributions of neuroscience.
Although it is widely believed that memory is based on the synaptic changes that occur during long-term potentiation and long-term depression (see our earlier post), this has not been conclusively shown!
What has been shown is that drugs that prevent synaptic changes also prevent learning. However, that finding only demonstrates a correlation between synaptic change and memory formation, not causation. (For example, it is possible that those drugs are interfering with some other process that truly underlies memory.)
The overarching question the committee raises is: What are the rules and principles of neural plasticity that implement [the] diverse forms of memory?
This question aims to quantify the exact relationships between changes at the neuronal level and at the level of behavior. For instance, do rapid changes at the synapse reflect rapid learning? And, how do the physical limitations on the changes at the neuronal level relate to cognitive limitations at the behavioral level?
My personal opinion is that the answers to these questions will be obtained through new experiments that either implant new memories or alter existing ones (e.g., through electrical stimulation protocols).
There is every indication that experimenters will soon be able to select and stimulate particular cells in an awake, behaving animal to alter the strength of the connection between those cells. The experimenters can then test the behavior of the animals to see if their memory for the association that might be represented by that connection has been altered.
This post is the culmination of a month-long chronicling of the major brain computation insights of all time.
Some important insights were certainly left out, so feel free to add comments with your favorites.
Below you will find all 26 insights listed with links to their entries. At the end is the summary of the insights in two (lengthy) sentences.
1) The brain computes the mind (Hippocrates- 460-379 B.C.)
2) Brain signals are electrical (Galvani – 1791, Rolando – 1809)
3) Functions are distributed in the brain (Flourens – 1824, Lashley – 1929)
4) Functions can be localized in the brain (Bouillaud – 1825, Broca – 1861, Fritsch & Hitzig – 1870)
5) Neurons are fundamental units of brain computation (Ramon y Cajal – 1889)
6) Neural networks consist of excitatory and inhibitory neurons connected by synapses (Sherrington – 1906)
7) Brain signals are chemical (Dale – 1914, Loewi – 1921)
8) Reward-based reinforcement learning can explain much of behavior (Skinner – 1938, Thorndike – 1911, Pavlov – 1905)
9) Convergence and divergence between layers of neural units can perform abstract computations (Pitts & McCulloch – 1947)
12) Hippocampus is necessary for episodic memory formation (Milner – 1953)
14) Neocortex is composed of columnar functional units (Mountcastle – 1957, Hubel & Wiesel – 1962)
15) Consciousness depends on cortical communication; the cortical hemispheres are functionally specialized (Sperry & Gazzaniga – 1969)
16) Critical periods of cortical development via competition (Hubel & Wiesel – 1970)
17) Reverberatory activity in lateral prefrontal cortex maintains memories and attention over short periods (Fuster – 1971, Jacobsen – 1936, Goldman-Rakic – 2000)
18) Behavior exists on a continuum between controlled and automatic processing (Schneider & Shiffrin – 1977)
19) Neural networks can self-organize via competition (Grossberg – 1978, Kohonen – 1981)
20) Spike-timing dependent plasticity: Getting the brain from correlation to causation (Levy – 1983, Sakmann – 1994, Bi & Poo – 1998, Dan – 2002)
21) Parallel and distributed processing across many neuron-like units can lead to complex behaviors (Rumelhart & McClelland – 1986, O'Reilly – 1996)
22) Recurrent connectivity in neural networks can elicit learning and reproduction of temporal sequences (Jordan – 1986, Elman – 1990, Schneider – 1991)
23) Motor cortex is organized by movement direction (Schwartz & Georgopoulos – 1986, Schwartz – 2001)
24) Cognitive control processes are distributed within a network of distinct regions (Goldman-Rakic – 1988, Posner – 1990, Wager & Smith – 2004, Dosenbach et al. – 2006, Cole & Schneider – 2007)
25) The dopamine system implements a reward prediction error algorithm (Schultz – 1996, Sutton – 1988)
26) Some complex object categories, such as faces, have dedicated areas of cortex for processing them, but are also represented in a distributed fashion (Kanwisher – 1997, Haxby – 2001)
Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized and/or overlapping distributed networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, dopamine signals for reinforcement learning, and recurrent connectivity for sequential learning.
Early in her career Nancy Kanwisher used functional MRI (fMRI) to seek modules for perceptual and semantic processing. She was fortunate enough to discover what she termed the fusiform face area: an area of extrastriate cortex specialized for face perception.
This finding was immediately controversial. It was soon shown that other object categories also activate this area. Being the adept scientist that she is, Kanwisher showed that the area was nonetheless more active for faces than for any other major object category.
Then came a slew of arguments claiming that the face area was in fact an 'expertise area'. This hypothesis states that any visual category for which the viewer has sufficient expertise should activate the fusiform face area.
This argument is based on findings in cognitive psychology showing that many aspects of face perception once thought to be unique are in fact due to expertise (Diamond & Carey, 1986). Thus, for a car expert, cars can show many of the same perceptual effects as faces. The jury is still out on this issue, but it appears that there is in fact a small area in the right fusiform gyrus dedicated to face perception (see Kanwisher's evidence).
James Haxby entered the fray in 2001, showing that even after removing the face area from his fMRI data he could predict the presence of faces based on distributed and overlapping activity patterns across visual cortex. Thus face perception, like visual perception of other kinds of objects, is distributed across visual cortex.
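Haxby's analysis style can be caricatured as correlation-based pattern classification: a new activity pattern is assigned to whichever category's average pattern it correlates with best. The voxel values below are made-up numbers for illustration, not real fMRI data:

```python
# Toy sketch of Haxby-style distributed-pattern decoding.
# Patterns and categories are illustrative assumptions.
import math

def correlation(a, b):
    # Pearson correlation between two voxel patterns
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical mean voxel patterns per category (e.g., from half the runs)
templates = {
    "face":  [0.9, 0.1, 0.5, 0.2],
    "house": [0.1, 0.8, 0.2, 0.7],
}

def classify(pattern):
    # Predict the category whose template correlates best with the pattern
    return max(templates, key=lambda c: correlation(pattern, templates[c]))

print(classify([0.8, 0.2, 0.6, 0.1]))  # prints "face"
```

The key point is that no single voxel decides the outcome; the category is read out from the overall shape of the distributed pattern, which is why decoding still works after the face area itself is removed.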
25) The dopamine system implements a reward prediction error algorithm (Schultz – 1996, Sutton – 1988)
It used to be that the main thing anyone "knew" about the dopamine system was that it is important for motor control. Parkinson's disease, which visibly manifests itself as motor tremors, is caused by disruption of the dopamine system (specifically, the substantia nigra), so this was an understandable conclusion.
When Wolfram Schultz began recording from dopamine neurons in monkeys he was having trouble finding correlations with his motor task. Was he doing something wrong? Was he recording from the right cells?
Instead of toeing the line of dopamine = motor control, he set out to find out what this system really does. It turns out that it is related to reward.
Schultz observed dopamine cell bursting at the onset of unexpected reward. He also observed that this bursting shifts to a cue (e.g., a bell sound) indicating a reward is forthcoming. When the reward cue occurs but no reward follows, the dopamine cells fall silent (dipping below their resting firing rate).
This pattern is quite interesting computationally. The dopamine signal mimics the error signal in a form of reinforcement learning called temporal difference learning.
This form of learning was originally developed by Sutton. It is a powerful algorithm for learning to predict reward and learning from errors in attaining reward.
Temporal difference learning propagates reward prediction backward in time, step by step, to the earliest reliable predictor, thus facilitating the process of attaining reward in the future.
Figure: (Top) No conditioned stimulus cue is given, so the reward is unexpected and there is a big dopamine burst. (Middle) The animal learns to predict the reward based on the cue and the dopamine burst moves to the cue. (Bottom) The reward is predicted, but since no reward occurs there is a depression in dopamine release.
Source: Figure 2 of Schultz, 1999. (News in Physiological Sciences, Vol. 14, No. 6, 249-255, December 1999)
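All three patterns in the figure fall out of a few lines of tabular TD learning. This is a minimal sketch under assumed parameters (the learning rate, trial length, and trial count are arbitrary choices, not values from Schultz's experiments):

```python
# Minimal tabular temporal-difference (TD) learning sketch of the
# dopamine reward-prediction-error signal. Parameters are illustrative
# assumptions, not values from Schultz's experiments.

ALPHA = 0.3        # learning rate (assumed)
T = 6              # time steps per trial: t=0 is the pre-cue baseline,
                   # the cue arrives at t=1, reward is delivered at t=5
REWARD = 1.0

V = [0.0] * T      # learned reward prediction at each time step

def run_trial(reward_delivered=True):
    """Run one trial; return the TD error (~dopamine signal) per step."""
    errors = []
    for t in range(T):
        r = REWARD if (t == T - 1 and reward_delivered) else 0.0
        v_next = V[t + 1] if t + 1 < T else 0.0
        delta = r + v_next - V[t]   # TD error: burst if positive, dip if negative
        if t > 0:                   # cue timing is unpredictable, so the
            V[t] += ALPHA * delta   # pre-cue baseline prediction stays at zero
        errors.append(delta)
    return errors

first = run_trial()          # early training
for _ in range(200):
    run_trial()              # training trials
learned = run_trial()        # after learning
omitted = run_trial(False)   # predicted reward withheld
```

Matching the figure: early on, `first[-1]` is large (a burst at the unexpected reward) while `first[0]` is zero; after training, `learned[0]` approaches 1 (the burst has moved to cue onset) and `learned[-1]` approaches 0 (the predicted reward elicits no burst); and `omitted[-1]` is strongly negative (the dip when a predicted reward fails to arrive).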
Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, dopamine signals for reinforcement learning, and recurrent connectivity for sequential learning.
[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]