Archive for the ‘Learning & Memory’ Category

The brain’s network switching stations for adaptive behavior

Friday, August 16th, 2013

I’m excited to announce that my latest scientific publication – “Multi-task connectivity reveals flexible hubs for adaptive task control” – was just published in Nature Neuroscience. The paper reports on a project I (along with my co-authors) have been working on for over a year. The goal was to use network science to better understand how human intelligence happens in the brain – specifically, our ability to rapidly adapt to new circumstances, as when learning to perform a task for the first time (e.g., how to use new technology).

The project built on our previous finding (from last year) showing that the amount of connectivity of a well-connected “hub” brain region in prefrontal cortex is linked to human intelligence. That study suggested (indirectly) that there may be hub regions that are flexible – capable of dynamically updating what brain regions they communicate with depending on the current goal.

Typical methods were not capable of more directly testing this hypothesis, however, so we took the latest functional connectivity approaches and pushed the limit, going well beyond the previous paper and what others have done in this area. The key innovation was to look at how functional connectivity changes across dozens of distinct task states (specifically, 64 tasks per participant). This allowed us to look for flexible hubs in the fronto-parietal brain network.

We found that this network contained regions that updated their global pattern of functional connectivity (i.e., inter-regional correlations) depending on which task was being performed.

In other words, the fronto-parietal network changed its brain-wide functional connectivity more than any other major brain network, and this updating appeared to code which task was being performed.

What’s the significance?

These results suggest a potential mechanism for adaptive cognitive abilities in humans:
Prefrontal and parietal cortices form a network with extensive connections projecting to other functionally specialized networks throughout the brain. Incoming instructions activate component representations – coded as neuronal ensembles with unique connectivity patterns – that together produce a unique global connectivity pattern. Since these component representations are interchangeable, it’s possible to implement combinations of instructions never seen before, allowing for rapid learning of new tasks from instructions.

Important points not mentioned or not emphasized in the journal article:

This study was highly hypothesis-driven, as it tested some predictions of our recent compositional theory of prefrontal cortex function (extended to include parietal cortex as well). That theory was first proposed earlier this year in Cole, Laurent, & Stocco (2013).

Also, as described in our online supplemental FAQ for the paper, we identified ‘adaptive task control’ flexible hubs, but there may be other kinds of flexible hubs in the brain. For instance, there may be flexible hubs for stable task control (maintaining task information via connectivity patterns over extended periods of time, only updating when necessary).

See our online supplemental FAQ for more important points that were not mentioned in the journal article. Additional information is also available from a press release from Washington University.

–MWCole

Grand Challenges of Neuroscience: Day 6

Monday, July 21st, 2008

Topic 6: Causal Understanding


Causal understanding is an important part of human cognition.  How do we understand that a particular event or force has caused another event?  How do we realize that inserting coins into a soda machine results in a cool beverage appearing below?  And ultimately, how do we understand people’s reactions to events?

The NSF workshop panel on the Grand Challenges of Mind and Brain highlighted the question of ‘causal understanding’ as their 6th research topic.   (This was the final topic in their report.)

In addition to studying causal understanding, it is probably just as important to study causal misunderstanding: that is, why do individuals infer the wrong causes for events?  Or infer incorrect effects from causes? Studying the errors we make in causal inference and understanding may help us discover the underlying neural mechanisms.

It probably isn’t too difficult to imagine that progress on causal understanding, and improvements in our ability to reason correctly about causation, will be important for the well-being of humanity.  But what kinds of experiments and methods could be used to study the human brain mechanisms of causal understanding?

(more…)

A Brief Introduction to Reinforcement Learning

Monday, June 2nd, 2008

Computational models that are implemented, i.e., written out as equations or software, are an increasingly important tool for the cognitive neuroscientist.  This is because implemented models are, effectively, hypotheses that have been worked out to the point where they make quantitative predictions about behavior and/or neural activity.

In earlier posts, we outlined two computational models of learning hypothesized to occur in various parts of the brain, namely Hebbian-like LTP (here and here) and error-correction learning (here and here). The computational model described in this post contains hypotheses about how we learn to make choices based on reward.

The goal of this post is to introduce a third type of learning: Reinforcement Learning (RL).  A number of cognitive neuroscientists hypothesize that RL is implemented by the basal ganglia/dopamine system.  It has become something of a hot topic in Cognitive Neuroscience and received a lot of coverage at this past year’s Computational Cognitive Neuroscience Conference. (more…)

Levels of Analysis and Emergence: The Neural Basis of Memory

Friday, May 30th, 2008

A square ‘emerges’ from its surroundings (at least in our visual system)

Cognitive neuroscience constantly works to find the appropriate level of description (or, in the case of computational modeling, implementation) for the topic being studied.  The goal of this post is to elaborate on this point a bit and then illustrate it with an interesting recent example from neurophysiology.

As neuroscientists, we can often  choose to talk about the brain at any of a number of levels: atoms/molecules, ion channels and other proteins, cell compartments, neurons, networks, columns, modules, systems, dynamic equations, and algorithms.

However, a description at too low a level might be too detailed, causing one to lose the forest for the trees.  Alternatively, a description at too high a level might miss valuable information and is less likely to generalize to different situations.

For example, one might theorize that cars work by propelling gases from their exhaust pipes.  Although this might be consistent with all of the observed data, by looking “under the hood” one would find evidence that this model of a car’s function is incorrect.

(more…)

Joaquin Fuster on Cortical Dynamics

Saturday, April 5th, 2008

I recently watched this talk (below) by Joaquin Fuster. His theories provide a good integration of cortical functions and distributed processing in working and long-term memory. He also has some cool videos of likely network interactions across cortex (in real time) in his talk.

Here is a diagram of Dr. Fuster’s view of cortical hierarchies:

Joaquin Fuster’s talk:

Link to Joaquin Fuster’s talk [Google Video]

Here is an excerpt from Dr. Fuster’s amazing biography:
(more…)

Combining Simple Recurrent Networks and Eye-Movements to study Language Processing

Saturday, April 5th, 2008

BBS image of GLENMORE model

Modern technologies allow eye movements to be used as a tool for studying language processing during tasks such as natural reading. Saccadic eye movements during reading turn out to be highly sensitive to a number of linguistic variables. Several computational models of eye movement control have been developed to explain how these variables affect eye movements. Although these models have focused on relatively low-level cognitive, perceptual and motor variables, there has been a concerted effort in the past few years (spurred by psycholinguists) to extend these computational models to syntactic processing.

During a modeling symposium at ECEM2007 (the 14th European Conference on Eye Movements), Dr. Ronan Reilly presented a first attempt to take syntax into account in his eye-movement control model (GLENMORE; Reilly & Radach, Cognitive Systems Research, 2006). (more…)

The Will to be Free, Part II

Tuesday, November 6th, 2007

Several months ago I posted The Will to be Free, Part I. In that post I explained that memory is the key to free will. However, this insight isn’t quite satisfactory. We need three additional things to complete the picture: the ability to choose based on predictions, internal desires, and self-awareness. (A quick disclaimer: These ideas are all extremely speculative. I’ll probably test most of them at some point, but right now I’m just putting them out there to hopefully allow for refinement of these hypotheses.)

First, the ability to choose based on predictions. As mentioned last time, free will comes down to decision making. Specifically it comes down to our ability to make a decision based on internal sources (or at least condoned by them), rather than external coercive forces. If we cannot predict the outcome of our decision with any certainty, then decision making is pointless. For instance, if no matter what I choose to order at dinner a random dish is served then I had no freedom to choose in the first place. Thus, our ability to predict is necessary for free will.

What are these “internal sources” involved in decision making that I mentioned earlier? They are the second new idea needed to complete our picture of free will: desires. (more…)

History’s Top Brain Computation Insights: Hippocampus binds features

Monday, June 18th, 2007

Hippocampus is involved in feature binding for novel stimuli (McClelland, McNaughton, & O'Reilly – 1995, Knight – 1996, Hasselmo – 2001, Ranganath & D'Esposito – 2001)

McClelland et al. demonstrated that, based on its role in episodic memory encoding, the hippocampus can rapidly learn arbitrary associations.

This was in contrast to neocortex, which they showed learns slowly in order to develop better generalizations (knowledge not tied to a single episode). This theory was able to explain why patient H.M. knew (for example) about JFK's assassination even though he lost his hippocampus in the 1950s.

Robert Knight provided evidence for a special place for novelty in hippocampal function by showing a different electrical response to novel stimuli in patients with hippocampal damage.

These two findings together suggested that the hippocampus may be important for binding the features of novel stimuli, even over short periods.

This was finally verified by Hasselmo et al. and Ranganath & D'Esposito in 2001. They used functional MRI to show that a portion of the hippocampal formation is more active during working memory delays when novel stimuli are used.

This suggests that hippocampus is not just important for long-term memory. Instead, it is important for short-term memory and perhaps novel perceptual binding in general.

Some recent evidence suggests that hippocampus may be important for imagining the future, possibly because binding of novel features is necessary to create a world that does not yet exist (for review see Schacter et al. 2007).

-MC

Grand Challenges of Neuroscience: Day 1

Monday, April 30th, 2007

Following up on MC's posts about the significant insights in the history of neuroscience, I'll now take Neurevolution for a short jaunt into neuroscience's potential future.

In light of recent advances in technologies and methodologies applicable to neuroscience research, the National Science Foundation last summer released a document on the "Grand Challenges of Neuroscience".  These grand challenges were identified by a committee of leading members of the cognitive neuroscience community.

The document, available at http://www.nsf.gov/sbe/grand_chall.pdf, describes six domains of research the committee deemed to be important for progress in understanding the relationship between mind and brain.

Over the next few posts, I will discuss each of the research domains and explain in layperson's terms why these questions are interesting and worth pursuing.  I'll also describe potential experimental approaches to address these questions in a cognitive neuroscience framework.

Topic 1:  "Adaptive Plasticity"

One research topic brought up by the committee was that of adaptive plasticity.  In this context, plasticity refers to the idea that the connections in the brain, and the behavior governed by the brain, can be changed through experience and learning.  

Learning allows us to adapt to new circumstances and environments.  Arguably, understanding how we learn and how to improve learning could be one of the greatest contributions of neuroscience.

Although it is widely believed that memory is based on the synaptic changes that occur during long-term potentiation and long-term depression (see our earlier post), this has not been conclusively shown!

What has been shown is that drugs that prevent synaptic changes also prevent learning.  However, that finding only demonstrates a correlation between synaptic change and memory formation,  not causation. (For example, it is possible that those drugs are interfering with some other process that truly underlies memory.)

The overarching question the committee raises is: What are the rules and principles of neural plasticity that implement [the] diverse forms of memory?

This question aims to quantify the exact relationships between changes at the neuronal level and at the level of behavior.  For instance, do rapid changes at the synapse reflect rapid learning?  And, how do the physical limitations on the changes at the neuronal level relate to cognitive limitations at the behavioral level?

Experiments?
My personal opinion is that the answers to these questions will be obtained through new experiments that either implant new memories or alter existing ones (e.g., through electrical stimulation protocols). 

There is every indication that experimenters will soon be able to select and stimulate particular cells in an awake, behaving animal to alter the strength of the connection between those cells.  The experimenters can then test the behavior of the animals to see if their memory for the association that might be represented by that connection has been altered.

-PL 

History’s Top Brain Computation Insights: Day 25

Thursday, April 26th, 2007

Dopamine signal related to reward and reward prediction (Schultz, 1999)

25) The dopamine system implements a reward prediction error algorithm (Schultz – 1996, Sutton – 1988)

It used to be that the main thing anyone "knew" about the dopamine system was that it is important for motor control.  Parkinson's disease, which visibly manifests itself as motor tremors, is caused by disruption of the dopamine system (specifically, the substantia nigra), so this was an understandable conclusion.

When Wolfram Schultz began recording from dopamine neurons in monkeys, he was having trouble finding correlations with his motor task. Was he doing something wrong? Was he recording from the right cells?

Instead of toeing the line of dopamine = motor control, he set out to find out what this system really does. It turns out that it is related to reward.

Schultz observed dopamine cell bursting at the onset of unexpected reward. He also observed that this bursting shifts to a cue (e.g., a bell sound) indicating a reward is forthcoming. When the reward cue occurs but no reward follows, he saw that the dopamine cells go silent (below resting firing rate).

This pattern is quite interesting computationally. The dopamine signal mimics the error signal in a form of reinforcement learning called temporal difference learning.

This form of learning was originally developed by Sutton. It is a powerful algorithm for learning to predict reward and learn from errors in attaining reward.

Temporal difference learning basically propagates reward prediction back in time as far as possible, thus facilitating the process of attaining reward in the future.
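This backward propagation of reward prediction can be seen in a minimal TD(0) sketch. The episode structure, learning rate, and trial count below are illustrative assumptions, not values from Schultz's experiments; the TD error here plays the role of the dopamine signal:

```python
# Minimal TD(0) sketch of reward-prediction learning. An "episode" has
# five time steps; reward arrives at the final step. V[t] is the learned
# reward prediction at step t.

def run_td(episodes=200, alpha=0.1, gamma=1.0, n_steps=5):
    V = [0.0] * (n_steps + 1)   # predictions; V[n_steps] is the terminal state (0)
    reward = [0.0] * n_steps
    reward[n_steps - 1] = 1.0   # reward delivered at the last step
    all_deltas = []
    for _ in range(episodes):
        deltas = []
        for t in range(n_steps):
            # TD error: delta = r(t) + gamma * V(t+1) - V(t)
            delta = reward[t] + gamma * V[t + 1] - V[t]
            V[t] += alpha * delta
            deltas.append(delta)
        all_deltas.append(deltas)
    return V, all_deltas

V, all_deltas = run_td()
# On the first episode the TD error (the "burst") occurs only at the
# reward step; after training, the prediction has propagated back to the
# earliest step and the error at the reward step has nearly vanished.
```

Note how this mirrors the recordings: early on, the error (burst) sits at the unexpected reward; with training, the prediction migrates back to the earliest predictive event, and a fully predicted reward produces no error at all.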

Figure: (Top) No conditioned stimulus cue is given, so the reward is unexpected and there is a big dopamine burst. (Middle) The animal learns to predict the reward based on the cue and the dopamine burst moves to the cue. (Bottom) The reward is predicted, but since no reward occurs there is a depression in dopamine release.
Source: Figure 2 of Schultz, 1999. (News in Physiological Sciences, Vol. 14, No. 6, 249-255, December 1999)

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions forming specialized networks involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, dopamine signals for reinforcement learning, and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 22

Monday, April 23rd, 2007

22) Recurrent connectivity in neural networks can elicit learning and reproduction of temporal sequences (Jordan – 1986, Elman – 1990, Schneider – 1991)

Powerful learning algorithms such as Hebbian learning, self-organizing maps, and backpropagation of error illustrated how categorization and stimulus-response mapping might be learned in the brain. However, it remained unclear how sequences and timing discrimination might be learned.

In 1986 Michael Jordan (the computer scientist, not the basketball player) developed a network of neuron-like units that fed back upon itself. Jeff Elman expanded on this, showing how these recurrent networks can learn to recognize sequences of ordered stimuli.

Elman applied his recurrent networks to the problem of language perception. He concluded that language relies heavily on recurrent connectivity in cortex, an unproven but well-accepted claim among many scientists today.
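The core trick of these simple recurrent (Elman-style) networks is a context layer that feeds the previous hidden state back in alongside the current input. Here is a minimal sketch of just that mechanism; the layer sizes and random weights are arbitrary illustrations, not Elman's actual model, and no training is shown:

```python
import numpy as np

# Sketch of the core of an Elman (simple recurrent) network: the hidden
# layer receives the current input plus a copy of its own previous
# state, so the network's response depends on sequence history.

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))      # input -> hidden
W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden)) # context -> hidden (recurrent)

def run_sequence(inputs):
    h = np.zeros(n_hidden)                  # context starts empty
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)   # current input + prior state
        states.append(h.copy())
    return states

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

# The same input 'b' yields different hidden states depending on what
# preceded it -- the signature of sensitivity to sequence order.
s1 = run_sequence([a, b])
s2 = run_sequence([b, b])
```

Because the hidden state after the second step differs depending on what came first, a readout trained on these states can discriminate ordered sequences, which is exactly what feedforward networks could not do.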

The year after Elman's demonstration of sequence learning with language, Walter Schneider (Schneider & Oliver, 1991) used a recurrent network to implement what he termed a 'goal processor'. This network can learn arbitrary task sequences, effectively expanding recurrent networks beyond language learning to learning new tasks of any type.

See this article for a review of a model implementing a goal processor.

The goal processor has been likened to a part of neocortex (dorsolateral prefrontal cortex) shown to be involved in maintaining goal information in working memory. This maintenance is believed to occur via recurrent connectivity, either local or through long-range fronto-parietal connections.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 21

Sunday, April 22nd, 2007

21) Parallel and distributed processing across many neuron-like units can lead to complex behaviors (Rumelhart & McClelland – 1986, O'Reilly – 1996)

McCulloch & Pitts provided amazing insight into how brain computations take place. However, the two-layer perceptron-style networks built from their neuron-like units were limited. For instance, they could not implement the logic gate XOR (i.e., 'one but not both'). An extra layer was added to solve this problem, but it became clear that these networks could not learn anything requiring more than two layers.

Rumelhart solved this problem with two insights.

First, he implemented a non-linear sigmoid function (approximating a neuronal threshold), which turned out to be essential for the next insight.

Second, he developed an algorithm called 'backpropagation of error', which allows the output layer to propagate its error back across all the layers such that the error can be corrected in a distributed fashion. See P.L.'s previous post on the topic for further details.
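The two insights can be sketched together in a tiny network that learns XOR, the very function that stumped two-layer networks. This is an illustrative implementation, not Rumelhart's original code; the layer sizes, learning rate, and iteration count are arbitrary choices:

```python
import numpy as np

# A tiny 2-4-1 network trained with backpropagation of error to learn XOR.
# The sigmoid non-linearity at each layer is what makes multi-layer
# credit assignment possible.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the output error is propagated back through W2
    # so the hidden layer's weights can share the blame.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The hidden layer's error (`d_h`) is computed by pushing the output error backward through the output weights, which is precisely the step that has no obvious biological analogue, as discussed below.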

Rumelhart & McClelland used this new learning algorithm to explore how cognition might be implemented in a parallel and distributed fashion in neuron-like units. Many of their insights are documented in the two-volume PDP series.

Unfortunately, the backpropagation of error algorithm is not very biologically plausible.  Signals have never been shown to flow backward across synapses in the manner necessary for this algorithm to be implemented in actual neural tissue.

However, O'Reilly (whose thesis advisor was McClelland) expanded on Hinton & McClelland (1988) to implement a biologically plausible version of backpropagation of error. This is called the generalized recirculation algorithm, and is based on the contrastive-Hebbian learning algorithm.

O'Reilly and McClelland view the backpropagating error signal as the difference between the expected outcome and the perceived outcome. Under this interpretation these algorithms are quite general, applying to perception as well as action.

The backprop and generalized recirculation algorithms are described in a clear and detailed manner in  Computational Explorations in Cognitive Neuroscience by O'Reilly & Munakata. These algorithms can be explored by downloading the simulations accompanying the book (available for free).

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional communication for functional integration.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]
-MC

History’s Top Brain Computation Insights: Day 20

Saturday, April 21st, 2007

Spike-timing dependent plasticity (Bi & Poo 1998; Markram et al. 1997)

20) Spike-timing dependent plasticity: Getting the brain from correlation to causation (Levy – 1983, Sakmann – 1994, Bi & Poo – 1998, Dan – 2002)

Hebb's original proposal was worded as such: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." [emphasis added]

The phrase "takes part in firing" implies causation of B's activity via A's activity, not simply a correlation of the two.

There are several ways to go beyond correlation to infer causation. One method is to observe that one event (e.g., cell A's activity) comes just before the caused event (e.g., cell B's activity).

In 1983 Levy showed with hippocampal slices that electrically stimulating cell A to fire before cell B will cause long-lasting strengthening of the synapse from cell A to cell B.  However, when the opposite occurs, and cell A is made to fire after cell B, there is depotentiation of the same synapse. In other words, timing is essential for synaptic learning. Today, this form of learning is called spike-timing dependent plasticity (STDP).
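The timing rule can be sketched as a simple weight-change window: pre-before-post potentiates, post-before-pre depresses, and the effect fades exponentially as the timing gap grows. The amplitudes and decay constant below are illustrative choices, not the measured values:

```python
import math

# Sketch of a spike-timing dependent plasticity (STDP) window.
# dt_ms = t_post - t_pre. Pre-before-post (dt > 0) strengthens the
# synapse (LTP); post-before-pre (dt < 0) weakens it (LTD).

def stdp_weight_change(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    if dt_ms > 0:        # pre fired before post: potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:      # post fired before pre: depression
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

# The change is largest for small timing gaps and fades for large ones,
# e.g. stdp_weight_change(8) > stdp_weight_change(50) > 0,
# while stdp_weight_change(-8) is negative.
```

The asymmetry of this window is what lets a synapse encode "A helped cause B" rather than merely "A and B were active together."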

Using this rule, Levy has created a variety of neural network models aimed at understanding  memory in the brain (e.g., especially in the hippocampus; see this paper for a short review). 

More recently, other researchers including Sakmann, Bi, Poo, and Dan have further characterized this phenomenon. They showed that it occurs in vivo, within a specific time window (~8 msec timing difference is optimal), in neocortex, and (using behavioral evidence) in humans.

Figure caption: A) Figure from Bi & Poo (1998) showing the effects of STDP in potentiation and depotentiation with optimal results ~8-10ms in either direction. B) Figure from Markram et al. (1997) showing the timing of the stimulation relative to the post-synaptic cell's EPSP. C) Another figure from Markram et al. (1997) showing the resulting long-term changes in synaptic efficacy due to the manipulations in figure B.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by timing-dependent correlated activity. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional communication for functional integration.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC & PL

History’s Top Brain Computation Insights: Day 19

Friday, April 20th, 2007

Center-surround organization used in SOMs

19) Neural networks can self-organize via competition (Grossberg – 1978, Kohonen – 1981)

Hubel and Wiesel's work on the development of cortical columns (see previous post) hinted at the importance of competition, but it wasn't until Grossberg and Kohonen built computational architectures explicitly exploring competition that this importance was made clear.

Grossberg was the first to illustrate the possibility of self-organization via competition. Several years later Kohonen created what is now termed a Kohonen network, or self-organizing map (SOM). This kind of network is composed of layers of neuron-like units connected with local excitation and, just outside that excitation, local inhibition. The above figure illustrates this 'Mexican hat' function in three dimensions, while the figure below represents it in two dimensions along with its inputs.

These networks, which implement Hebbian learning, will spontaneously organize into topographic maps.

For instance, line orientations that are similar to each other will tend to be represented by nearby neural units, while less similar line orientations will tend to be represented by more distant neural units. This occurs even when the map starts out with random synaptic weights. Also, this spontaneous organization will occur for even very complex stimuli (e.g., faces) as long as there are spatio-temporal regularities in the inputs.
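A minimal one-dimensional Kohonen map shows this spontaneous ordering from random weights. The map size, neighborhood width, and learning-rate schedule below are arbitrary choices for the sketch, not values from Kohonen's work:

```python
import numpy as np

# Minimal 1-D self-organizing map (Kohonen network) sketch. Units start
# with random preferred orientations; after training on random line
# orientations, neighboring units come to prefer similar orientations
# (a topographic map).

rng = np.random.default_rng(1)
n_units = 20
weights = rng.uniform(0.0, 180.0, n_units)   # preferred orientation per unit

for step in range(4000):
    x = rng.uniform(0.0, 180.0)              # a random input orientation
    winner = np.argmin(np.abs(weights - x))  # best-matching ("winning") unit
    # Gaussian neighborhood around the winner shrinks over training,
    # as does the learning rate.
    sigma = 6.0 * np.exp(-step / 1500)
    lr = 0.5 * np.exp(-step / 2000)
    dist = np.abs(np.arange(n_units) - winner)
    h = np.exp(-dist**2 / (2 * sigma**2 + 1e-9))
    weights += lr * h * (x - weights)        # pull the neighborhood toward the input

# After training, nearby units encode nearby orientations: the map has
# self-organized into a (roughly monotonic) topographic ordering.
```

Because the winner drags its neighbors along with it, similar inputs end up claiming adjacent units, which is exactly the topographic structure described above.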

Another interesting feature of Kohonen networks is that the more frequent input patterns are represented by larger areas in the map. This is consistent with findings in cortex, where more frequently used representations have larger cortical areas dedicated to them.

There are several computational advantages to having local competition between similar stimuli, which SOMs can provide.

One such advantage is that local competition can increase specificity of the representation by ruling out close alternatives via lateral inhibition. Using this computational trick, the retina can discern visual details better at the edges of objects (due to contrast enhancement).

Another computational advantage is enhancement of what's behaviorally important relative to what isn't. This works on a short time-scale with attention (what's not important is inhibited), and on a longer time-scale with increases in representational space in the map with repeated use, which increases representational resolution (e.g., the hand representation in the somatosensory homonculus).

You can explore SOMs using Topographica, a computational modeling environment for simulating topographic maps in cortex. Of special interest here is the SOM tutorial available at topographica.org.


Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional communication for functional integration.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 18

Thursday, April 19th, 2007

Reaction times for a visual search task illustrating controlled and automatic processing

18) Behavior exists on a continuum between controlled and automatic processing (Schneider & Shiffrin – 1977)

During the 1970s those studying the cognitive computations underlying visual search were at an impasse. One group of researchers claimed that visual search was a flat search function (i.e., adding more distracters doesn't increase search time), while another group claimed that the function was linear (i.e., adding more distracters increases search time linearly).

Both groups had solid evidence supporting their view. What were the two groups doing differently that could explain such different results?

As a graduate student working with Shiffrin, Schneider sat the two groups down during a scientific conference to have them figure out why their results differed so much. Needless to say, little was accomplished as both sides talked past one another.

Several years later Schneider & Shiffrin came to the realization that the two groups were practicing their subjects differently. The group with the flat search function allowed their subjects to practice the search task many times before collecting data. In contrast, the group with the linear search function began collecting data as soon as their subjects could perform the task.

This realization led Schneider & Shiffrin to posit a distinction between automatic (flat search function) and controlled (linear search function) processing. In a landmark set of papers they clearly demonstrated this dual process distinction along with the boundary conditions of controlled and automatic task performance. (more…)