Archive for the ‘Theoretical’ Category

The immune system of the mind is in frontoparietal cortex

Friday, July 25th, 2014

The frontoparietal control system is to the mind what the immune system is to the body. This may oversimplify the situation, but we're finding it a useful metaphor nonetheless. Indeed, we've just published a new theory paper explaining that there is already an avalanche of evidence supporting this metaphor. Although much work remains to fully establish the theory, we're very excited about it, as it appears able to explain much that is mysterious about mental illness. Most exciting is that the theory may unify understanding and treatment of all mental illnesses simultaneously.

Of course mental illnesses are very complex and problems in most parts of the brain can contribute. However, recent findings suggest a particular brain network may be especially important for predicting and curing mental disease: the frontoparietal control network (see yellow areas in figure). We and others have found that this network is not only highly connected to many other parts of the brain (i.e. its regions are hubs), but it also shifts that connectivity dynamically to specify the current task at hand. This means that any particular goal you are focusing on – such as solving a puzzle or finding food or cheering yourself up when you feel sad – will involve dynamic network interactions with this network (to maintain and implement that goal).

Applying this basic understanding of goal-directed thoughts and actions to mental illness, we realized deficits in this brain network may be critical for explaining many of the cognitive symptoms – such as the inability to concentrate on the task at hand – experienced across a wide variety of mental diseases. Further, we realized that many of the emotional symptoms of mental disease are indirectly affected by this network, since emotional regulation (e.g., reducing phobia-related anxiety) involves brain interactions with this network. This suggests this network may regulate symptoms and promote mental health generally, much like the body’s immune system regulates pathogens to promote physical health.

Another way to think of this is in terms of an interaction between a regulator, like a thermostat, and a thing to be regulated, like the temperature in a room. Much as a thermostat regulates temperature, the frontoparietal system sets a goal state for distributed brain activity patterns (like setting the target temperature on the thermostat) and then searches for activity patterns that will shift dysfunctional brain activity toward that goal.
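
To make the analogy concrete, here is a minimal sketch of a negative-feedback regulator in Python. It is purely illustrative: the goal pattern, the starting state, and the gain are assumptions made up for the example, not quantities from the theory paper.

```python
import numpy as np

# Minimal negative-feedback regulator, in the spirit of the thermostat
# analogy. All numbers here are illustrative assumptions.
rng = np.random.default_rng(0)

goal_pattern = np.array([1.0, 0.5, -0.5])  # target distributed activity pattern
activity = rng.normal(size=3)              # "dysfunctional" starting state

for step in range(20):
    error = goal_pattern - activity        # compare current state to the goal
    activity += 0.2 * error                # nudge activity toward the goal
print(np.round(activity, 3))               # ends close to goal_pattern
```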

As covered in our theory paper, it is well established that the frontoparietal system has all the key properties of a regulator: it maintains goal information, it has access to many other parts of the brain, and it affects distal parts of the brain according to the maintained goal. Further, there is evidence that things like emotional regulation during cognitive behavioral therapy increase activity in the frontoparietal system, suggesting this brain system is working harder when cognitive strategies are used to facilitate mental health.

Perhaps the most exciting prediction of this theory is that enhancing the frontoparietal system may reduce all symptoms of all mental diseases using a single treatment. This is because the frontoparietal system is domain general, meaning it directs goal-directed processes across all aspects of the mind and therefore all possible mental symptoms. In practice there will certainly be exceptions to this, yet simultaneous progress on reducing even just 50% of symptoms would be a tremendous advance.

How might we enhance the frontoparietal system? Perhaps using drugs that differentially influence this system (e.g., dopamine agonists) or direct stimulation of the system (e.g., using transcranial magnetic or current stimulation). Since the frontoparietal system can be reconfigured using verbal instructions, however, building on carefully structured talk therapies may be an especially specific and effective approach. In particular, the frontoparietal system is known to implement rapid instructed task learning (RITL) – a way for the brain to implement novel behaviors based on instructions. Ultimately, this theory suggests the proper combination of frontoparietal system enhancement through direct influence (drugs and/or stimulation), talk therapy, and symptom-specific interventions will allow us to make major progress toward curing a wide variety of mental diseases.

MWCole

The evolutionary importance of rapid instructed task learning (RITL)

Sunday, January 23rd, 2011

We are rarely alone when learning something for the first time. We are social creatures, and whether it’s a new technology or an ancient tradition, we typically benefit from instruction when learning new tasks. This form of learning–in which a task is rapidly (within seconds) learned from instruction–can be referred to as rapid instructed task learning (RITL; pronounced “rittle”). Despite the fundamental role this kind of learning plays in our lives, it has been largely ignored by researchers until recently.

My Ph.D. dissertation investigated the evolutionary and neuroscientific basis of RITL.

RITL almost certainly played a tremendous role in shaping human evolution. The selective advantages of RITL for our species are clear: having RITL abilities allows us to partake in a giant web of knowledge shared with anyone willing to instruct us. We might have received instructions to avoid a dangerous animal we have never seen before (e.g., a large cat with a big mane), or instructions on how to make a spear and kill a lion with it. The possible scenarios in which RITL would have helped increase our chances of survival are virtually endless.

There are two basic forms of RITL. (more…)

Grand Challenges of Neuroscience: Day 6

Monday, July 21st, 2008

Topic 6: Causal Understanding


Causal understanding is an important part of human cognition.  How do we understand that a particular event or force has caused another event?  How do we realize that inserting coins into a soda machine results in a cool beverage appearing below?  And ultimately, how do we understand people's reactions to events?

The NSF workshop panel on the Grand Challenges of Mind and Brain highlighted the question of ‘causal understanding’ as their 6th research topic.   (This was the final topic in their report.)

In addition to studying causal understanding, it is probably just as important to study causal misunderstanding: that is, why do individuals infer the wrong causes for events?  Or infer incorrect effects from causes?  Studying the errors we make in causal inference and understanding may help us discover the underlying neural mechanisms.

It probably isn't too difficult to imagine that progress on causal understanding, and improvements in our ability to reason correctly about causation, will be important for the well-being of humanity.  But what kinds of experiments and methods could be used to study the human brain mechanisms of causal understanding?

(more…)

A Brief Introduction to Reinforcement Learning

Monday, June 2nd, 2008

Computational models that are implemented, i.e., written out as equations or software, are an increasingly important tool for the cognitive neuroscientist.  This is because implemented models are, effectively, hypotheses that have been worked out to the point where they make quantitative predictions about behavior and/or neural activity.

In earlier posts, we outlined two computational models of learning hypothesized to occur in various parts of the brain, i.e., Hebbian-like LTP (here and here) and error-correction learning (here and here). The computational model described in this post contains hypotheses about how we learn to make choices based on reward.

The goal of this post is to introduce a third type of learning: Reinforcement Learning (RL).  RL is hypothesized by a number of cognitive neuroscientists to be implemented by the basal ganglia/dopamine system.  It has become somewhat of a hot topic in Cognitive Neuroscience and received a lot of coverage at this past year’s Computational Cognitive Neuroscience Conference. (more…)
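
The full post continues below the fold, but to give a flavor of RL, here is a minimal temporal-difference (TD) learning sketch in Python. The prediction error "delta" is the quantity often likened to phasic dopamine signals; the task (a five-state chain with reward at the end) and all parameters are illustrative assumptions, not the specific model from any one paper.

```python
import numpy as np

# Minimal temporal-difference (TD) value learning on a five-state chain.
# The prediction error "delta" is the dopamine-like teaching signal.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)                     # learned value estimate per state

for episode in range(200):
    for s in range(n_states - 1):
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0  # reward on reaching the end
        delta = r + gamma * V[s_next] - V[s]        # reward-prediction error
        V[s] += alpha * delta                       # move estimate toward target
print(np.round(V, 2))                      # early states learn discounted value
```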

Two Universes, Same Structure

Tuesday, June 5th, 2007

[Image of galaxies]

This image is not of a neuron.

This image is of the other universe; the one outside our heads.

It depicts the “evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years”, as computed by the Millennium Simulation. (Click the image above for a better view.)

The next image, of a neuron, is included for comparison.

[Image of a neuron]

It is tempting to wax philosophical on this structure equivalence. How is it that both the external and internal universes can have such similar structure, and at such vastly different physical scales?

If we choose to go philosophical, we may as well ponder something even more fundamental: Why is it that all complex systems seem to have a similar underlying network-like structure?

(more…)

Grand Challenges of Neuroscience: Day 3

Sunday, May 13th, 2007

Topic 3: Spatial Knowledge

[Image: skaggs96figure3.png]

Animal studies have shown that the hippocampus contains special cells called "place cells".  These place cells are interesting because their activity seems to indicate not what the animal sees, but rather where the animal is in space as it runs around in a box or in a maze. (See the four cells in the image to the right.)

Further, when the animal goes to sleep, those cells tend to reactivate in the same order they did during wakefulness.  This apparent retracing of the paths during sleep has been termed "hippocampal replay".

More recently, studies in humans — who have deep microelectrodes implanted to help detect the origin of epileptic seizures — have shown place-responsive cells.  Place cells in these studies were found not only in the human hippocampus but also in nearby brain regions.

The computation which converts sequences of visual and other cues into a sense of "place" is a very interesting one that has not yet been fully explained.  However, there do exist neural network models of the hippocampus that, when presented with sequences, exhibit place-cell like activity in some neurons.
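
As a toy illustration of place-cell-like activity (not one of the specific hippocampal models referenced above), one can model a place cell's firing rate as a Gaussian bump over position; the width and peak rate below are made-up values:

```python
import numpy as np

# Toy Gaussian place-field model: the cell fires most when the animal is at
# its preferred location on a linear track. Width and peak rate are made up.
def place_cell_rate(position, center, width=0.1, max_rate=20.0):
    """Firing rate (Hz) as a Gaussian bump around the cell's preferred place."""
    return max_rate * np.exp(-((position - center) ** 2) / (2 * width ** 2))

for x in np.linspace(0.0, 1.0, 11):        # positions along a 1-D track
    print(f"position {x:.1f} -> {place_cell_rate(x, center=0.5):5.2f} Hz")
```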

The notion of a place cell might also extend beyond physical space.  It has been speculated that computations occur to convert sequences of events and situations into a distinct sense of "now".  And indeed, damage to the hippocampus has been found to impair not only spatial memory but also "episodic" memory, the psychological term for memory for distinct events.

Experiments? 

How can we understand the ways in which we understand space? Spatial knowledge seems more tangible than the previous two topics in this series. It seems that researchers are already using some of the most effective methods to tackle the problem.

First, the use of microelectrodes throughout the brain while human participants play virtual taxi games and perform problem solving tasks promises insight into this question.  Second, computational modeling of regions (e.g., the hippocampus) containing place cells should help us understand their properties and how they emerge.  Finally, continued animal research and possibly manipulation of place cells in animals to influence decision making (e.g., in a T-maze task) may provide an understanding of how spatial knowledge is used on-line. 

-PL 

History’s Top Brain Computation Insights: Day 23

Tuesday, April 24th, 2007

23) Motor cortex is organized by movement direction (Schwartz  & Georgopoulos – 1986, Schwartz – 2001)

Penfield had shown that motor cortex is organized in a somatotopic map. However, it was unclear how individual neurons are organized. What does each neuron's activity represent? The final location of a movement, or the direction of that movement?

Schwartz & Georgopoulos found that movement direction correlated best with neural activity. Further, they discovered that the neurons use a population code to specify a given movement. Thus, as illustrated above, a single neuron responds to a variety of movement directions but has one preferred direction (PD).

The preferred direction code across a large population of neurons thus sums to specify each movement.
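
Here is a sketch of how such a population code can be read out in Python. Cosine tuning and the vector-sum readout follow the population-vector idea; the neuron count, noise-free rates, and rectification are simplifying assumptions for illustration.

```python
import numpy as np

# Sketch of population-vector decoding with cosine tuning.
rng = np.random.default_rng(1)
n = 100
preferred = rng.uniform(0, 2 * np.pi, n)   # each neuron's preferred direction

movement = np.deg2rad(60)                  # true movement direction
rates = np.maximum(0, np.cos(movement - preferred))  # rectified cosine tuning

# Each neuron "votes" along its preferred direction, weighted by its rate;
# the vector sum points in (approximately) the movement direction.
pop_x = np.sum(rates * np.cos(preferred))
pop_y = np.sum(rates * np.sin(preferred))
print(f"decoded: {np.rad2deg(np.arctan2(pop_y, pop_x)):.1f} deg")  # ~60 deg
```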

Schwartz has since demonstrated how these population vectors can be interpreted by a computer to control a prosthetic arm. He has used this to imbue monkeys with Jedi powers: they can move a prosthetic limb in another room (or attached to them) with only a thought. Using this technology the Schwartz team hopes to allow human amputees to control artificial limbs as they once did their own.

A general computational insight one might take from the Schwartz & Georgopoulos work is the possibility of population coding across cortex. It appears that all perception, semantics, and action may be coded as distributed population vectors.

Representational specificity within these vectors likely arises from conjunctions of receptive fields, and is dominated by those receptive fields most specific to each representation.

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration, population vector summation for representational specificity, and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 22

Monday, April 23rd, 2007

22) Recurrent connectivity in neural networks can elicit learning and reproduction of temporal sequences (Jordan – 1986, Elman – 1990, Schneider – 1991)

Powerful learning algorithms such as Hebbian learning, self-organizing maps, and backpropagation of error illustrated how categorization and stimulus-response mapping might be learned in the brain. However, it remained unclear how sequences and timing discrimination might be learned.

In 1986 Michael Jordan (the computer scientist, not the basketball player) developed a network of neuron-like units that fed back upon itself. Jeff Elman expanded on this, showing how these recurrent networks can learn to recognize sequences of ordered stimuli.

Elman applied his recurrent networks to the problem of language perception. He concluded that language relies heavily on recurrent connectivity in cortex, an unproven but widely accepted claim among scientists today.
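
For concreteness, here is a minimal sketch of an Elman-style network's forward dynamics in Python. The weights are random and untrained, and the layer sizes are arbitrary; the point is only the structural idea of the context loop, in which the previous hidden state re-enters as input at the next time step.

```python
import numpy as np

# Minimal Elman-style simple recurrent network (SRN), forward pass only.
rng = np.random.default_rng(2)
n_in, n_hid = 4, 8
W_ih = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden weights
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden weights

context = np.zeros(n_hid)                          # context starts empty
for x in np.eye(n_in):                             # four one-hot "stimuli"
    context = np.tanh(W_ih @ x + W_hh @ context)   # hidden state feeds back
print(np.round(context, 2))                        # depends on the whole sequence
```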

The year after Elman's demonstration of sequence learning with language, Walter Schneider (Schneider & Oliver, 1991) used a recurrent network to implement what he termed a 'goal processor'. This network can learn arbitrary task sequences, effectively expanding recurrent networks beyond language learning to learning new tasks of any type.

See this article for a review of a model implementing a goal processor.

The goal processor has been likened to a part of neocortex (dorsolateral prefrontal cortex) shown to be involved in maintaining goal information in working memory. Also, this maintenance is believed to occur via local recurrent connectivity (and/or long-range fronto-parietal connections).

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional connectivity for functional integration and recurrent connectivity for sequential learning.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

History’s Top Brain Computation Insights: Day 21

Sunday, April 22nd, 2007

21) Parallel and distributed processing across many neuron-like units can lead to complex behaviors (Rumelhart & McClelland – 1986, O'Reilly – 1996)

McCulloch & Pitts provided amazing insight into how brain computations take place. However, their two-layer perceptrons were limited. For instance, they could not implement the logic gate XOR (i.e., 'one but not both'). An extra layer was added to solve this problem, but it became clear that the McCulloch & Pitts perceptrons could not learn anything requiring more than two layers.

Rumelhart solved this problem with two insights.

First, he implemented a non-linear sigmoid function (approximating a neuronal threshold), which turned out to be essential for the next insight.

Second, he developed an algorithm called 'backpropagation of error', which allows the output layer to propagate its error back across all the layers such that the error can be corrected in a distributed fashion. See P.L.'s previous post on the topic for further details.
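
As a minimal worked example, here is a small sigmoid network trained with backpropagation on XOR, the mapping that defeats a single-layer perceptron. The layer sizes, learning rate, and epoch count are arbitrary illustrative choices, not anything from the PDP volumes.

```python
import numpy as np

# Minimal backpropagation demo: a 2-4-1 sigmoid network learning XOR.
rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    err_out = (out - y) * out * (1 - out)          # output error x sigmoid slope
    err_hid = (err_out @ W2.T) * h * (1 - h)       # error propagated backward
    W2 -= h.T @ err_out;  b2 -= err_out.sum(0)     # gradient-descent updates
    W1 -= X.T @ err_hid;  b1 -= err_hid.sum(0)

print(np.round(out.ravel(), 2))   # approaches [0, 1, 1, 0] for most seeds
```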

Rumelhart & McClelland used this new learning algorithm to explore how cognition might be implemented in a parallel and distributed fashion in neuron-like units. Many of their insights are documented in the two-volume PDP series.

Unfortunately, the backpropagation of error algorithm is not very biologically plausible.  Signals have never been shown to flow backward across synapses in the manner necessary for this algorithm to be implemented in actual neural tissue.

However, O'Reilly (whose thesis advisor was McClelland) expanded on Hinton & McClelland (1988) to implement a biologically plausible version of backpropagation of error. This is called the generalized recirculation algorithm, and is based on the contrastive-Hebbian learning algorithm.

O'Reilly and McClelland view the backpropagating error signal as the difference between the expected outcome and the perceived outcome. Under this interpretation these algorithms are quite general, applying to perception as well as action.

The backprop and generalized recirculation algorithms are described in a clear and detailed manner in  Computational Explorations in Cognitive Neuroscience by O'Reilly & Munakata. These algorithms can be explored by downloading the simulations accompanying the book (available for free).

Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections altered by timing-dependent correlated activity often driven by expectation errors. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional communication for functional integration.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]
-MC

History’s Top Brain Computation Insights: Day 19

Friday, April 20th, 2007

[Image: center-surround organization used in SOMs]

19) Neural networks can self-organize via competition (Grossberg – 1978, Kohonen – 1981)

Hubel and Wiesel's work on the development of cortical columns (see previous post) hinted at the role of competition, but it wasn't until Grossberg and Kohonen built computational architectures explicitly exploring competition that its importance was made clear.

Grossberg was the first to illustrate the possibility of self-organization via competition. Several years later Kohonen created what is now termed a Kohonen network, or self-organizing map (SOM). This kind of network is composed of layers of neuron-like units connected with local excitation and, just outside that excitation, local inhibition. The above figure illustrates this 'Mexican hat' function in three dimensions, while the figure below represents it in two dimensions along with its inputs.

These networks, which implement Hebbian learning, will spontaneously organize into topographic maps.

For instance, line orientations that are similar to each other will tend to be represented by nearby neural units, while less similar line orientations will tend to be represented by more distant neural units. This occurs even when the map starts out with random synaptic weights. Also, this spontaneous organization will occur for even very complex stimuli (e.g., faces) as long as there are spatio-temporal regularities in the inputs.

Another interesting feature of Kohonen networks is that the more frequent input patterns are represented by larger areas in the map. This is consistent with findings in cortex, where more frequently used representations have larger cortical areas dedicated to them.
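
Here is a minimal one-dimensional SOM sketch in Python. The learning rate, neighborhood width, and uniform input distribution are all illustrative assumptions; after training, neighboring units come to represent neighboring input values, a tiny topographic map.

```python
import numpy as np

# Minimal 1-D self-organizing map: units compete for each input; the winner
# and its neighbors move toward the input.
rng = np.random.default_rng(4)
n_units = 10
weights = rng.uniform(size=n_units)        # one scalar "feature" per unit

for t in range(2000):
    x = rng.uniform()                      # random input from [0, 1)
    winner = np.argmin(np.abs(weights - x))             # best-matching unit wins
    for j in range(n_units):
        neighborhood = np.exp(-((j - winner) ** 2) / 2.0)  # nearby units learn more
        weights[j] += 0.05 * neighborhood * (x - weights[j])

print(np.round(weights, 2))                # typically ends roughly ordered
```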

There are several computational advantages to having local competition between similar stimuli, which SOMs can provide.

One such advantage is that local competition can increase specificity of the representation by ruling out close alternatives via lateral inhibition. Using this computational trick, the retina can discern visual details better at the edges of objects (due to contrast enhancement).

Another computational advantage is enhancement of what's behaviorally important relative to what isn't. This works on a short time-scale with attention (what's not important is inhibited), and on a longer time-scale with increases in representational space in the map with repeated use, which increases representational resolution (e.g., the hand representation in the somatosensory homunculus).

You can explore SOMs using Topographica, a computational modeling environment for simulating topographic maps in cortex. Of special interest here is the SOM tutorial available at topographica.org.


Implication: The mind, largely governed by reward-seeking behavior on a continuum between controlled and automatic processing, is implemented in an electro-chemical organ with distributed and modular function consisting of excitatory and inhibitory neurons communicating via ion-induced action potentials over convergent and divergent synaptic connections strengthened by correlated activity. The cortex, a part of that organ organized via local competition and composed of functional column units whose spatial dedication determines representational resolution, is composed of many specialized regions involved in perception (e.g., touch: parietal, vision: occipital), action (e.g., frontal), and memory (e.g., short-term: prefrontal, long-term: temporal), which depend on inter-regional communication for functional integration.

[This post is part of a series chronicling history's top brain computation insights (see the first of the series for a detailed description). See the history category archive to see all of the entries thus far.]

-MC

A Popular but Problematic Learning Rule: “Backpropagation of Error”

Thursday, April 5th, 2007

[Image: elman_tree]

Backpropagation of Error (or "backprop") is the most commonly-used neural network training algorithm.  Although fundamentally different from the less common Hebbian-like mechanism mentioned in my last post, it similarly specifies how the weights between the units in a network should be changed in response to various patterns of activity.   Since backprop is so popular in neural network modeling work, we thought it would be important to bring it up and discuss its relevance to cognitive and computational neuroscience.  In this entry, I provide an overview of backprop and discuss what I think is the central problem with the algorithm from the perspective of neuroscience.

Backprop is most easily understood as an algorithm which allows a network to learn to map an input pattern to an output pattern.  The two patterns are represented on two different "layers" of units, and there are connection weights that allow activity to spread from the "input layer" to the "output layer."  This activity may spread through intervening units, in which case the network can be said to have a multi-layered architecture.  The intervening layers are typically called "hidden layers".

(more…)

Human Versus Non-Human Neuroscience

Saturday, March 24th, 2007

Most neuroscientists don't use human subjects, and many tend to forget this important point: 
All neuroscience with non-human subjects is theoretical.

If the brain of a mouse is understood in exquisite detail, it is only relevant (outside veterinary medicine) in so far as it is relevant to human brains.

Similarly, if a computational model can illustrate an algorithm for storing knowledge in distributed units, it is only as relevant as it is similar to how humans store knowledge.

It follows from this point that there is a certain amount of uncertainty involved in any non-human research. An experiment can be brilliantly executed, but does it apply to humans?

If we circumvent this uncertainty problem by looking directly at humans, another issue arises:  Only non-invasive techniques can be used with humans, and those techniques tend to involve the most uncertainty.

For instance, fMRI is a non-invasive technique that can be used to measure brain processes in humans. However, it measures blood oxygenation levels, which are only indirectly related to neural activity. Thus, unlike with animal models, measures of neuronal activity are surrounded by an extra layer of uncertainty in humans.

So, if you're a neuroscientist you have to "choose your poison": Either deal with the uncertainty of relevance to humans, or deal with the uncertainty of the processes underlying the measurable signals in humans.
(more…)

Neural Network “Learning Rules”

Thursday, March 15th, 2007


Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process.  In this post, I'll introduce some notions of how neural networks can learn.  Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability.

Let's begin with a theoretical question that is of general interest to cognition: how can a neural system learn sequences, such as the actions required to reach a goal? 

Consider a neuromodeler who hypothesizes that a particular kind of neural network can learn sequences. He might start his modeling study by "training" the network on a sequence. To do this, he stimulates (activates) some of its neurons in a particular order, representing objects on the way to the goal. 

After the network has been trained through multiple exposures to the sequence, the modeler can then test his hypothesis by stimulating only the neurons from the beginning of the sequence and observing whether the neurons in the rest of the sequence activate in order to finish the sequence.

Successful learning in any neural network is dependent on how the connections between the neurons are allowed to change in response to activity. The manner of change is what the majority of researchers call "a learning rule".  However, we will call it a "synaptic modification rule" because although the network learned the sequence, it is not clear that the *connections* between the neurons in the network "learned" anything in particular.

The particular synaptic modification rule selected is an important ingredient in neuromodeling because it may constrain the kinds of information the neural network can learn.

There are many categories of mathematical synaptic modification rules that are used to describe how synaptic strengths should be changed in a neural network.  Some of these categories include: backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian.

  • Backpropagation of error states that connection strengths should change throughout the entire network in order to minimize the difference between the actual activity and the "desired" activity at the "output" layer of the network.
  • Correlative Hebbian states that any two interconnected neurons that are active at the same time should strengthen their connections, so that if one of the neurons is activated again in the future the other is more likely to become activated too.
  • Temporally-asymmetric Hebbian is described in more detail in the example below, but essentially emphasizes the importance of causality: if a neuron reliably fires before another, its connection to the other neuron should be strengthened. Otherwise, it should be weakened. (A minimal sketch of this rule appears just after this list.)
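
Here is that sketch: a temporally-asymmetric Hebbian weight change in Python, modeled on spike-timing-dependent plasticity. The exponential window shape and the amplitudes are illustrative assumptions, not parameters from any particular model.

```python
import numpy as np

# Temporally-asymmetric Hebbian (STDP-like) weight change: strengthen the
# synapse when the presynaptic neuron fires just before the postsynaptic
# neuron; weaken it otherwise.
def weight_change(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """dt = t_post - t_pre in ms; positive dt means pre fired first."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)   # pre-before-post: strengthen
    return -a_minus * np.exp(dt / tau)      # post-before-pre: weaken

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {weight_change(dt):+.4f}")
```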

Why are there so many different rules?  Some synaptic modification rules are selected because they are mathematically convenient.  Others are selected because they are close to currently known biological reality.  Most of the informative neuromodeling is somewhere in between.

An Example

Let's look at an example of a learning rule used in a neural network model that I have worked with: imagine you have a network of interconnected neurons that can either be active or inactive.    If a neuron is active, its value is 1, otherwise its value is 0. (The use of 1 and 0 to represent simulated neuronal activity is only one of the many ways to do so; this approach goes by the name "McCulloch-Pitts").

(more…)

Predicting Intentions: Implications for Free Will

Thursday, March 8th, 2007

News about a neuroimaging group's attempts to predict intentions hit the wire a few days ago. The major theme was how mindreading might be used for unethical purposes.

What about its more profound implications?

If your intentions can be predicted before you've even made a conscious decision, then your will must be determined by brain processes beyond your control. There cannot be complete freedom of will if I can predict your decisions before you do!

Dr. Haynes, the researcher behind this work, spoke at Carnegie-Mellon University last October. He explained that he could use functional MRI to determine what participants were going to decide several seconds before that decision was consciously made. This was a free choice task, in which the instruction was to press a button whenever the participant wanted. In a separate experiment the group could predict if a participant was going to add or subtract two numbers.

In a way, this is not very surprising. In order to make a conscious decision we must be motivated by either external or internal factors. Otherwise our decisions would just be random, or boringly consistent. Decisions in a free choice task are likely driven by a motivation to move (a basic instinct likely originating in the globus pallidus) and to keep responses spaced within a certain time window.

Would we have a coherent will if it couldn't be predicted by brain activity? It seems unlikely, since the conscious will must use information from some source in order to make a reasoned decision. Otherwise we would be incoherent, random beings with no reasoning behind our actions.

In other words, we must be fated to some extent in order to make informed and motivated decisions.

-MC 

Computational models of cognition in neural systems: WHY?

Monday, February 12th, 2007

[Image: cajal1.png]
In my most recent post I gave an overview of the "simple recurrent network" (SRN), but I'd like to take a step back and talk about neuromodeling in general.  In particular I'd like to talk about why neuromodeling is going to be instrumental in bringing about the cognitive revolution in neuroscience.

A principal goal of cognitive neuroscience should be to explain how cognitive phenomena arise from the underlying neural systems.  How do the neurons and their connections result in interesting patterns of thought?  Or to take a step up, how might columns, or nuclei, interact to result in problem solving skills, thought or consciousness?

If a cognitive neuroscientist believes they know how a neural system gives rise to a behavior, they should be able to construct a model to demonstrate how this is the case.

That, in brief, is the answer.

But what makes a good model?  I'll partially answer this question below, but in future posts I'll bring up specific examples of models, some good, some poor.

First, any "model" is a simplification of reality.  If the model is too simple, it won't be interesting.  If it's too realistic, it will be too complex to understand.  Thus, a good model is at that sweet spot where it's as simple as possible but no simpler.

Second, a model whose ingredients spell out the result you're looking for won't be interesting.  Instead, the results should emerge from the combination of the model's resources, constraints and experience.

Third, a model with too many "free" parameters is less likely to be interesting.  So an important requirement is that the "constraints" should be realistic, mimicking the constraints of the real system that is being modeled.

A common question I have gotten is:  "Isn't a model just a way to fit inputs to outputs?  Couldn't it just be replaced with a curve fitter or a regression?"  Well, perhaps the answer should be yes IF you consider a human being to just be a curve fitting device. A human obtains inputs and generates outputs.  So if you wish to say that a model is just a curve fitter, I will say that a human is, too.

What's interesting about neural systems, whether real or simulated, is the emergence of complex function from seemingly "simple" parts.

In future posts, I'll talk more about "constraints" by giving concrete examples.  In the meantime, feel free to bring up any questions you have about the computational modeling of cognition.
-PL 

[Image by Santiago Ramon y Cajal, 1914.]