Grand Challenges of Neuroscience: Day 6

Topic 6: Causal Understanding


Causal understanding is an important part of human cognition.  How do we understand that a particular event or force has caused another event?  How do we realize that inserting coins into a soda machine results in a cool beverage appearing below?  And ultimately, how do we understand people's reactions to events?

The NSF workshop panel on the Grand Challenges of Mind and Brain highlighted the question of 'causal understanding' as their 6th research topic.  (This was the final topic in their report.)

In addition to studying causal understanding, it is probably just as important to study causal misunderstanding: that is, why do individuals infer the wrong causes for events, or the wrong effects from causes? Studying the errors we make in causal inference and understanding may help us discover the underlying neural mechanisms.

It isn't difficult to imagine that progress on causal understanding, and improvements in our ability to reason correctly about causation, will be important for the well-being of humanity.  But what kinds of experiments and methods could be used to study the human brain mechanisms of causal understanding?

Read more

A Brief Introduction to Reinforcement Learning

Computational models that are implemented, i.e., written out as equations or software, are an increasingly important tool for the cognitive neuroscientist.  This is because implemented models are, effectively, hypotheses that have been worked out to the point where they make quantitative predictions about behavior and/or neural activity.

In earlier posts, we outlined two computational models of learning hypothesized to occur in various parts of the brain, i.e., Hebbian-like LTP (here and here) and error-correction learning (here and here). The computational model described in this post contains hypotheses about how we learn to make choices based on reward.

The goal of this post is to introduce a third type of learning: Reinforcement Learning (RL).  RL is hypothesized by a number of cognitive neuroscientists to be implemented by the basal ganglia/dopamine system.  It has become somewhat of a hot topic in Cognitive Neuroscience and received a lot of coverage at this past year’s Computational Cognitive Neuroscience Conference.
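To make this concrete, here is a minimal sketch of temporal-difference (TD) learning, the flavor of RL most often linked to the dopamine system.  This is a generic textbook illustration in Python, not a model from any particular paper; the states, parameters, and reward values are all invented for the example.

    # Minimal temporal-difference (TD) learning sketch.
    # The TD error 'delta' is the quantity often compared to phasic
    # dopamine responses; all numbers here are arbitrary.
    alpha = 0.1                           # learning rate
    gamma = 0.9                           # discount factor
    V = {"cue": 0.0, "terminal": 0.0}     # value estimate per state

    def td_update(state, next_state, reward):
        delta = reward + gamma * V[next_state] - V[state]   # prediction error
        V[state] += alpha * delta
        return delta

    # Repeated cue -> reward pairings: the error is large while the
    # reward is still surprising and shrinks as the cue comes to
    # predict it, mirroring classic dopamine recordings.
    for trial in range(50):
        td_update("cue", "terminal", reward=1.0)

Early trials produce a large prediction error; by the last trials the error is near zero, because the cue's value has absorbed the expected reward.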

Read more

Levels of Analysis and Emergence: The Neural Basis of Memory

[Image: A square 'emerges' from its surroundings (at least in our visual system).]

Cognitive neuroscience constantly works to find the appropriate level of description (or, in the case of computational modeling, implementation) for the topic being studied.  The goal of this post is to elaborate on this point a bit and then illustrate it with an interesting recent example from neurophysiology.

As neuroscientists, we can often choose to talk about the brain at any of a number of levels: atoms/molecules, ion channels and other proteins, cell compartments, neurons, networks, columns, modules, systems, dynamic equations, and algorithms.

However, a description at too low a level might be too detailed, causing one to lose the forest for the trees.  Alternatively, a description at too high a level might miss valuable information and is less likely to generalize to different situations.

For example, one might theorize that cars work by propelling gases from their exhaust pipes.  Although this might be consistent with all of the observed data, by looking “under the hood” one would find evidence that this model of a car’s function is incorrect.

Read more

Combining Simple Recurrent Networks and Eye-Movements to study Language Processing

[Image: the GLENMORE model.]

Modern technologies allow eye movements to be used as a tool for studying language processing during tasks such as natural reading. Saccadic eye movements during reading turn out to be highly sensitive to a number of linguistic variables. A number of computational models of eye movement control have been developed to explain how these variables affect eye movements. Although these models have focused on relatively low-level cognitive, perceptual and motor variables, there has been a concerted effort in the past few years (spurred by psycholinguists) to extend these computational models to syntactic processing.

During a modeling symposium at ECEM2007 (the 14th European Conference on Eye Movements), Dr. Ronan Reilly presented a first attempt to take syntax into account in his eye-movement control model (GLENMORE; Reilly & Radach, Cognitive Systems Research, 2006).

Read more

Grand Challenges of Neuroscience: Day 5

Topic 5: Language

Everyday (spoken) language use involves the production and perception of sounds at a very fast rate. One of my favorite quotes on this subject is from "The Language Instinct" by Steven Pinker, on page 157:

"Even with heroic training [on a task], people could not recognize the sounds at a rate faster than good Morse code operators, about three units a second.  Real speech, somehow, is perceived an order of magnitude faster: ten to fifteen phonemes per second for casual speech, twenty to thirty per second for the man in the late-night Veg-O-Matic ads […]. Given how the human auditory system works, this is almost unblievable. […P]honemes cannot possibly be consecutive bits of sound."

One thing to point out is that there is a lot of context in language.  At a high level, there is context from meaning which is constantly anticipated by the listener: meaning imposes restrictions on the possibilities of the upcoming words.  At a lower level there's context from phonetics and co-articulation; for example, it turns out that the "l" in "led" sounds different from the "l" in "let", and this may give the listener a good idea of what's coming next. 
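As a toy illustration of how context restricts upcoming words (a sketch of the general statistical idea only, not of any brain mechanism; the corpus is invented), even a simple bigram table drastically narrows the candidate set:

    # Toy bigram model: the previous word sharply narrows the set of
    # plausible next words.  Corpus and counts are made up.
    from collections import Counter, defaultdict

    corpus = "the dog chased the cat the dog bit the mailman".split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    # Given "the", only three continuations have ever been observed:
    print(bigrams["the"].most_common())   # [('dog', 2), ('cat', 1), ('mailman', 1)]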

Although this notion of context at multiple levels may sound difficult to implement in a computer program, the brain is fundamentally different from a computer.  It's important to remember that the brain is a massively parallel processing machine, with millions upon millions of signal processing units (neurons).

(I think this concept of context and prediction is lost on more traditional linguists.  On the following page of his book, Pinker misrepresents the computer program Dragon NaturallySpeaking by saying that you have to speak haltingly, one-word-at-a-time to get it to recognize words.  This is absolutely not the case: the software works by taking context into account, and performs best if you speak at a normal, continuous rate.  Reading the software's instructions often leads to better results.)

Given that the brain is a massively parallel computer, it's really not difficult to imagine that predictions on several different timescales are taken into account during language comprehension.  Various experiments from experimental psychology have indicated that this is, in fact, the case.

The study of the brain and how neural systems process language will be fundamental to advancing the field of theoretical linguistics — which thus far seems to be stuck in old ideas from early computer science. 

Experiments?

Because language operates on such a rapid timescale, and involves so many different brain areas, there is a need to use multiple non-invasive (as well as possibly invasive) recording techniques, such as ERP, MEG, fMRI and microelectrodes, to get at how language is perceived and produced.

In addition to recording from the brain, real-time measurements of behavior are important in assessing language perception. Two candidate behaviors come to mind:  eye movements and changes in hand movements. 

Eye movements are a really good candidate for tracking real-time language perception because they are so quick: you can move your eyes before a word has been completely said.  Also, there has been some fascinating work done with continuous mouse movements towards various targets to measure participants' on-line predictions of what is about to be said.  These kinds of experimental approaches promise to provide insight into how continuous speech signals are perceived.

-PL 

Grand Challenges of Neuroscience: Day 4

After a bit of a hiatus, I'm back with the last three installments of "Grand Challenges in Neuroscience".

Topic 4: Time

Cognitive Science programs typically require students to take courses in Linguistics (as well as in the philosophy of language).  Besides the obvious application of studying how the mind creates and uses language, an important reason for taking these courses is to realize the effects of using words to describe the mental, cognitive states of the mind.

In fact — after having taken courses on language and thought, it seems that it would be an interesting coincidence if the words in any particular language did map directly onto mental states or brain areas.  (As an example, consider that the amygdala is popularly referred to as the "fear center".) 

It seems more likely that mental states are translated on the fly into language, which only approximates their true nature.  In this respect, I think it's important to realize that time may be composed of several distinct subcomponents, or time may play very different roles in distinct cognitive processes.

Time. As much as it is important to have an objective measure of time, it is equally important to have an understanding of our subjective experience of time.  A number of experimental results have confirmed what has been known to humanity for some time: Time flies while you're having fun, but a watched pot never boils.   
Time perception strongly relates to cognition, attention, and reward.  The NSF committee proposed that understanding time is going to be integrative, involving brain regions whose function is still not understood at a "systems" level, such as the cerebellum, basal ganglia, and association cortex.

Experiments?

The NSF committee calls for the development of new paradigms for the study of time.  I agree that this is critical.  To me, one of the most important issues is the dissociation of reward from time (e.g., "time flies when you're having fun"): most tasks involving time perception, in both human and non-human primates, involve rewarding the participants.

In order to get a clearer read on the neurobiology of time perception and action, we need to observe neural representations that are not colored by the anticipation of reward.

-PL 

Brain image from http://www.cs.princeton.edu/gfx/proj/sugcon/models/
Clock image from http://elginwatches.org/technical/watch_diagram.html

Grand Challenges of Neuroscience: Day 3

Topic 3: Spatial Knowledge

Animal studies have shown that the hippocampus contains special cells called "place cells".  These place cells are interesting because their activity seems to indicate not what the animal sees, but rather where the animal is in space as it runs around in a box or in a maze. (See the four cells in the image to the right.)

Further, when the animal goes to sleep, those cells tend to reactivate in the same order they did during wakefulness.  This apparent retracing of the paths during sleep has been termed "hippocampal replay".

More recently, studies in humans — who have deep microelectrodes implanted to help detect the origin of epileptic seizures — have shown place-responsive cells.  Place cells in these studies were found not only in the human hippocampus but also in nearby brain regions.

The computation which converts sequences of visual and other cues into a sense of "place" is a very interesting one that has not yet been fully explained.  However, there do exist neural network models of the hippocampus that, when presented with sequences, exhibit place-cell like activity in some neurons.
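For illustration, a common textbook idealization of a single place cell (not one of the hippocampal network models just mentioned; the parameters are invented) treats its firing rate as a Gaussian bump over the animal's position:

    import math

    # Idealized place cell: firing rate (Hz) falls off as a Gaussian
    # of the distance between the animal and the cell's preferred
    # location (its "place field" center).
    def place_cell_rate(pos, center, peak_rate=20.0, width=0.15):
        d2 = (pos[0] - center[0])**2 + (pos[1] - center[1])**2
        return peak_rate * math.exp(-d2 / (2 * width**2))

    print(place_cell_rate((0.5, 0.5), center=(0.5, 0.5)))   # ~20 Hz, in the field
    print(place_cell_rate((0.9, 0.1), center=(0.5, 0.5)))   # ~0 Hz, outside it

A population of such cells with scattered centers produces position-specific firing fields like those of the four cells shown in the image accompanying this post.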

The notion of place cells might also extend beyond physical space.  It has been speculated that similar computations convert sequences of events and situations into a distinct sense of "now".  And indeed, damage to the hippocampus has been found not only to impair spatial memory but also "episodic" memory, the psychological term for memory for distinct events.

Experiments? 

How can we understand the ways in which we understand space? Understanding spatial knowledge seems more tangible than understanding the previous two topics in this series. It seems that researchers are already using some of the most effective methods to tackle the problem.

First, the use of microelectrodes throughout the brain while human participants play virtual taxi games and perform problem solving tasks promises insight into this question.  Second, computational modeling of regions (e.g., the hippocampus) containing place cells should help us understand their properties and how they emerge.  Finally, continued animal research and possibly manipulation of place cells in animals to influence decision making (e.g., in a T-maze task) may provide an understanding of how spatial knowledge is used on-line. 

-PL 

Grand Challenges of Neuroscience: Day 2

Topic 2: Conflict and Cooperation

Generally, cognitive neuroscience aims to explain how mental processes such as believing, knowing, and inferring arise in the brain and affect behavior.  Two behaviors that have important effects on the survival of humans are cooperation and conflict.

According to the NSF committee convened last year, conflict and cooperation is an important focus area for future cognitive neuroscience work.  Although research in this area has typically been the domain of psychologists, it seems that the time is ripe to apply findings from neuroscience to ground psychological theories in the underlying biology.

Neuroscience has produced a large amount of information about the brain regions that are relevant to social interactions.  For example, the amygdala has been shown to be involved in strong emotional responses.  The "mirror" neuron system in the frontal lobe allows us to put ourselves in someone else's shoes by allowing us to understand their actions as though they were our own.  Finally, the superior temporal gyrus and orbitofrontal cortex, normally involved in language and reward respectively, have also been shown to be involved in social behaviors.

Experiments?

The committee has left it up to us to come up with a way to study these phenomena! How can we study conflict and cooperation from a cognitive neuroscience perspective?

At least two general approaches come to mind. The first is fMRI studies in which social interactions are simulated (or carried out remotely) over a computer link to the experiment participant.  A range of studies of this sort have recently begun to appear investigating trust and decision-making in social contexts.

The second general approach that comes to mind is to use neurocomputational simulations of simple acting organisms with common or differing goals.  Over the past few years, researchers have been carrying out studies with multiple interacting "agents" that "learn" through the method of Reinforcement Learning.

Reinforcement Learning is an artificial intelligence algorithm which allows "agents" to develop behaviors through trial-and-error in an attempt to meet some goal which provides reward in the form of positive numbers.  Each agent is defined as a small program with state (e.g., location, sensory input) and a memory or "value function" which can keep track of how much numerical reward it expects to obtain by choosing a possible action.
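Here is a minimal sketch of such an agent: generic one-step Q-learning on an invented corridor task, with arbitrary parameters (this illustrates the general algorithm, not any particular study mentioned above):

    import random

    # The agent's "value function" Q tracks the numerical reward it
    # expects for each (state, action) pair; states are locations
    # 0..4, and reward waits at the far end of the corridor.
    states, actions = range(5), ["left", "right"]
    Q = {(s, a): 0.0 for s in states for a in actions}
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    def choose(s):
        if random.random() < epsilon:          # explore occasionally
            return random.choice(actions)
        # otherwise exploit (ties broken at random)
        return max(actions, key=lambda a: (Q[(s, a)], random.random()))

    def step(s, a):                            # toy environment
        s2 = min(s + 1, 4) if a == "right" else max(s - 1, 0)
        return s2, (1.0 if s2 == 4 else 0.0)

    for episode in range(200):                 # learning by trial and error
        s = 0
        while s != 4:
            a = choose(s)
            s2, r = step(s, a)
            best_next = max(Q[(s2, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

After training, the Q values along the corridor point the agent toward the rewarded end.  Placing two or more such agents in a shared environment, with rewards contingent on each other's behavior, gives the kind of controlled setup for studying conflict and cooperation described above.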

Although normally thought to be of interest only to computer scientists, Reinforcement Learning has recently attracted the attention of cognitive neuroscientists because of emerging evidence that something like it might be used in the brain.

By providing these agents with a goal that can only be achieved through some measure of cooperation, or under some pressure, issues of conflict and cooperation can be studied in a perfectly controlled computer simulation environment.

-PL 

Grand Challenges of Neuroscience: Day 1

Following up on MC's posts about the significant insights in the history of neuroscience, I'll now take Neurevolution for a short jaunt into neuroscience's potential future.

In light of recent advances in technologies and methodologies applicable to neuroscience research, the National Science Foundation last summer released a document on the "Grand Challenges of Neuroscience".  These grand challenges were identified by a committee of leading members of the cognitive neuroscience community.

The document, available at http://www.nsf.gov/sbe/grand_chall.pdf, describes six domains of research the committee deemed to be important for progress in understanding the relationship between mind and brain.

Over the next few posts, I will discuss each of the research domains and explain in layperson's terms why these questions are interesting and worth pursuing.  I'll also describe potential experimental approaches to address these questions in a cognitive neuroscience framework.

Topic 1:  "Adaptive Plasticity"

One research topic brought up by the committee was that of adaptive plasticity.  In this context, plasticity refers to the idea that the connections in the brain, and the behavior governed by the brain, can be changed through experience and learning.  

Learning allows us to adapt to new circumstances and environments.  Arguably, understanding how we learn and how to improve learning could be one of the greatest contributions of neuroscience.

Although it is widely believed that memory is based on the synaptic changes that occur during long-term potentiation and long-term depression (see our earlier post), this has not been conclusively shown!

What has been shown is that drugs that prevent synaptic changes also prevent learning.  However, that finding only demonstrates a correlation between synaptic change and memory formation, not causation. (For example, it is possible that those drugs are interfering with some other process that truly underlies memory.)

The overarching question the committee raises is: What are the rules and principles of neural plasticity that implement [the] diverse forms of memory?

This question aims to quantify the exact relationships between changes at the neuronal level and at the level of behavior.  For instance, do rapid changes at the synapse reflect rapid learning?  And, how do the physical limitations on the changes at the neuronal level relate to cognitive limitations at the behavioral level?

Experiments?

My personal opinion is that the answers to these questions will be obtained through new experiments that either implant new memories or alter existing ones (e.g., through electrical stimulation protocols).

There is every indication that experimenters will soon be able to select and stimulate particular cells in an awake, behaving animal to alter the strength of the connection between those cells.  The experimenters can then test the behavior of the animals to see if their memory for the association that might be represented by that connection has been altered.

-PL 

A Popular but Problematic Learning Rule: "Backpropagation of Error"

Backpropagation of Error (or "backprop") is the most commonly-used neural network training algorithm.  Although fundamentally different from the less common Hebbian-like mechanism mentioned in my last post, it similarly specifies how the weights between the units in a network should be changed in response to various patterns of activity.  Since backprop is so popular in neural network modeling work, we thought it would be important to bring it up and discuss its relevance to cognitive and computational neuroscience.  In this entry, I provide an overview of backprop and discuss what I think is the central problem with the algorithm from the perspective of neuroscience.

Backprop is most easily understood as an algorithm which allows a network to learn to map an input pattern to an output pattern.  The two patterns are represented on two different "layers" of units, and there are connection weights that allow activity to spread from the "input layer" to the "output layer."  This activity may spread through intervening units, in which case the network can be said to have a multi-layered architecture.  The intervening layers are typically called "hidden layers".
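Here is a minimal sketch of that arrangement: a generic input-hidden-output network trained by backprop on the XOR mapping.  This is a standard textbook exercise, not a model from the literature; the data, learning rate, and network size are all illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # input patterns
    T = np.array([[0], [1], [1], [0]], float)               # desired outputs
    W1 = rng.normal(0, 1, (2, 4))      # input -> hidden weights
    W2 = rng.normal(0, 1, (4, 1))      # hidden -> output weights
    b1, b2 = np.zeros(4), np.zeros(1)  # bias terms
    sigmoid = lambda x: 1 / (1 + np.exp(-x))

    for epoch in range(5000):
        H = sigmoid(X @ W1 + b1)                   # activity spreads forward...
        Y = sigmoid(H @ W2 + b2)
        err_out = (Y - T) * Y * (1 - Y)            # ...while the error signal
        err_hid = (err_out @ W2.T) * H * (1 - H)   # propagates backward
        W2 -= 0.5 * H.T @ err_out                  # weight changes descend the
        W1 -= 0.5 * X.T @ err_hid                  # gradient of the output error
        b2 -= 0.5 * err_out.sum(axis=0)
        b1 -= 0.5 * err_hid.sum(axis=0)

It is this backward-traveling error signal that is usually at the heart of neuroscientific objections to backprop.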

Read more

Neural Network “Learning Rules”


Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process.  In this post, I'll introduce some notions of how neural networks can learn.  Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability.

Let's begin with a theoretical question that is of general interest to cognition: how can a neural system learn sequences, such as the actions required to reach a goal? 

Consider a neuromodeler who hypothesizes that a particular kind of neural network can learn sequences. He might start his modeling study by "training" the network on a sequence. To do this, he stimulates (activates) some of its neurons in a particular order, representing objects on the way to the goal. 

After the network has been trained through multiple exposures to the sequence, the modeler can then test his hypothesis by stimulating only the neurons from the beginning of the sequence and observing whether the neurons in the rest of the sequence activate in order to finish the sequence.

Successful learning in any neural network is dependent on how the connections between the neurons are allowed to change in response to activity. The manner of change is what the majority of researchers call "a learning rule".  However, we will call it a "synaptic modification rule" because although the network learned the sequence, it is not clear that the *connections* between the neurons in the network "learned" anything in particular.

The particular synaptic modification rule selected is an important ingredient in neuromodeling because it may constrain the kinds of information the neural network can learn.

There are many categories of mathematical synaptic modification rules which are used to describe how synaptic strengths should be changed in a neural network.  Some of these categories include: backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian.

  • Backpropagation of error states that connection strengths should change throughout the entire network in order to minimize the difference between the actual activity and the "desired" activity at the "output" layer of the network.
  • Correlative Hebbian states that any two interconnected neurons that are active at the same time should strengthen their connections, so that if one of the neurons is activated again in the future the other is more likely to become activated too.
  • Temporally-asymmetric Hebbian is described in more detail in the example below, but essentially emphasizes the importance of causality: if a neuron reliably fires before another, its connection to the other neuron should be strengthened. Otherwise, it should be weakened.

Why are there so many different rules?  Some synaptic modification rules are selected because they are mathematically convenient.  Others are selected because they are close to currently known biological reality.  Most of the informative neuromodeling is somewhere in between.

An Example

Let's look at an example of a learning rule used in a neural network model that I have worked with: imagine you have a network of interconnected neurons that can either be active or inactive.  If a neuron is active, its value is 1, otherwise its value is 0. (The use of 1 and 0 to represent simulated neuronal activity is only one of the many ways to do so; this approach goes by the name "McCulloch-Pitts").
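As a concrete (and purely invented) illustration, here is a tiny McCulloch-Pitts network trained with a temporally-asymmetric Hebbian rule of the kind listed above; the network size, threshold, and learning rate are all arbitrary choices:

    import numpy as np

    n = 5
    W = np.zeros((n, n))   # W[i, j]: connection strength from i to j
    lr = 0.2

    def mcculloch_pitts_step(act, W, theta=0.5):
        # A unit fires (value 1) iff its summed input exceeds the
        # threshold theta; otherwise it is silent (value 0).
        return (act @ W > theta).astype(float)

    def asymmetric_hebb(act_prev, act_next, W):
        # Strengthen i -> j when i fired just before j; weaken it
        # when i fired but j did not follow.
        for i in range(n):
            for j in range(n):
                if act_prev[i] == 1:
                    W[i, j] += lr if act_next[j] == 1 else -lr

    # Training: present the sequence 0 -> 1 -> 2 -> 3 -> 4 ten times.
    for _ in range(10):
        for t in range(n - 1):
            pre, post = np.zeros(n), np.zeros(n)
            pre[t], post[t + 1] = 1, 1
            asymmetric_hebb(pre, post, W)

    # Testing: cue the first unit and let activity run.
    act = np.zeros(n); act[0] = 1
    for t in range(n - 1):
        act = mcculloch_pitts_step(act, W)   # activity hops 0 -> 1 -> ... -> 4

This mirrors the sequence-learning experiment sketched earlier in the post: after training, stimulating the neurons at the beginning of the sequence makes the network complete the rest of it.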

Read more

Computational models of cognition in neural systems: WHY?

In my most recent post I gave an overview of the "simple recurrent network" (SRN), but I'd like to take a step back and talk about neuromodeling in general.  In particular I'd like to talk about why neuromodeling is going to be instrumental in bringing about the cognitive revolution in neuroscience.

A principal goal of cognitive neuroscience should be to explain how cognitive phenomena arise from the underlying neural systems.  How do the neurons and their connections result in interesting patterns of thought?  Or to take a step up, how might columns, or nuclei, interact to result in problem solving skills, thought or consciousness?

If a cognitive neuroscientist believes they know how a neural system gives rise to a behavior, they should be able to construct a model to demonstrate how this is the case.

That, in brief, is the answer.

But what makes a good model?  I'll partially answer this question below, but in future posts I'll bring up specific examples of models, some good, some poor.

First, any "model" is a simplification of the reality.  If the model is too simple, it won't be interesting.  If it's too realistic, it will be too complex to understand.  Thus, a good model is at that sweet spot where it's as simple as possible but no simpler.
Second, a model whose ingredients spell out the result you're looking for won't be interesting.  Instead, the results should emerge from the combination of the model's resources, constraints and experience.

Third, a model with too many "free" parameters is less likely to be interesting.  So an important requirement is that the "constraints" should be realistic, mimicking the constraints of the real system that is being modeled.

A common question I have gotten is:  "Isn't a model just a way to fit inputs to outputs?  Couldn't it just be replaced with a curve fitter or a regression?"  Well, perhaps the answer should be yes IF you consider a human being to just be a curve fitting device. A human obtains inputs and generates outputs.  So if you wish to say that a model is just a curve fitter, I will say that a human is, too.

What's interesting about neural systems, whether real or simulated, is the emergence of complex function from seemingly "simple" parts.

In future posts, I'll talk more about "constraints" by giving concrete examples.  In the meantime, feel free to bring up any questions you have about the computational modeling of cognition.
-PL 

[Image by Santiago Ramon y Cajal, 1914.] 

Can a Neural Network be Free…

…from a knee-jerk reaction to its immediate input?

[Image: the Simple Recurrent Network.]

Although one of the first things that a Neuroscience student learns about is "reflex reactions" such as the patellar reflex (also known as the knee-jerk reflex), the cognitive neuroscientist is interested in the kind of processing that might occur between inputs and outputs in mappings that are not so direct as the knee-jerk reaction. 

An example of a system which is a step up from the knee-jerk reflex is in the reflexes of the sea slug named "Aplysia".  Unlike the patellar reflex, Aplysia's gill and siphon retraction reflexes seem to "habituate" over time — the original input-output mappings are overridden by being repeatedly stimulated.  This is a simple form of memory, but no real "processing" can be said to go on there.

Specifically, cognitive neuroscientists are interested in mappings where "processing" seems to occur before the output decision is made.  As MC pointed out earlier, the opportunity for memory (past experience) to affect those mappings is probably important for "free will". 

But how can past experience affect future mappings in interesting ways? One answer to this question appeared in 1990, which began a new era in experimentation with neural network models capable of indirect input-output mappings.  In that year, Elman (inspired by Jordan's 1986 work) demonstrated the Simple Recurrent Network in his paper "Finding Structure in Time".  The concept behind this network is shown in the picture associated with this entry.

The basic idea of the Simple Recurrent Network is that as information comes in (through the input units), an on-line memory of that information is preserved and recirculated (through the "context" units).  Together, the input and context units both influence the hidden layer which can trigger responses in the output layer.  This means that the immediate output of the network is dependent not only on the current input, but also on the inputs that came before it.
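In code, the forward pass of such a network might be sketched like this (architecture only; the training procedure discussed next is left out, and the layer sizes and random weights are arbitrary):

    import numpy as np

    class SimpleRecurrentNetwork:
        # Elman-style SRN sketch: the context layer is a copy of the
        # previous hidden state, so each output depends on the whole
        # input history rather than on the current input alone.
        def __init__(self, n_in, n_hid, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0, 0.5, (n_in, n_hid))    # input -> hidden
            self.W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))  # context -> hidden
            self.W_out = rng.normal(0, 0.5, (n_hid, n_out))  # hidden -> output
            self.context = np.zeros(n_hid)

        def step(self, x):
            h = np.tanh(x @ self.W_in + self.context @ self.W_ctx)
            self.context = h.copy()       # recirculate the hidden state
            return h @ self.W_out

    srn = SimpleRecurrentNetwork(n_in=3, n_hid=8, n_out=3)
    for x in np.eye(3):                   # a three-step input sequence
        y = srn.step(x)                   # each y reflects all inputs so far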

The most interesting aspect of the Simple Recurrent Network, however, is that the connections among all the individual units in the network change depending on what the modeler requires the network to output.  The network learns to preserve information in the context layer loops so that it can correctly produce the desired output. For example, if the task of the network is to remember the second word in a sentence, it will amplify or maintain the second word when it comes in, while ignoring the intervening words, so that at the end of the sentence it outputs the target word.

Although this network cannot be said to have "free" will — especially because of the way its connections are forcefully trained — its operation can hint at the type of phenomena researchers should seek in trying to understand cognition in neural systems.

-PL 

Neuroscience Blogs of Note, Part 2

I will follow up MC's recent post with a brief review of three other neuroscience-related blogs that are worth mentioning as we begin Neurevolution.

Brain Waves (http://brainwaves.corante.com/) is a self-labeled "neurotechnology" blog. Written by Zack Lynch, it is a real-world look at the effects and benefits derived from neuroscience research with regards to society, culture and economics. The author has a background in evolutionary biology but brings to light articles spanning a wide range of topics including neuroeconomics, nanotechnology, pharmaceutical research, perceptual illusions, and music appreciation. The focus of the blog however is on neurotechnology — on technological advancements that permit the improvement or study of the brain.

SCLin's Neuroscience Blog (http://forebrain.blogspot.com/) contains pointers to and summaries of recent neuroscience articles which focus on computational and cognitive neuroscience issues. While too technical to be of much value to the layperson, it reports on articles dealing with cutting-edge questions in neuroscience, such as the nature of the information code in the brain (e.g. meaningful representations in the prefrontal cortex) and information flow through neural pathways (e.g. the short-latency activation of dopaminergic neurons by visual stimuli).

Although it may be hard to classify as a neuroscience blog, Neurophilosophy (http://neurophilosophy.wordpress.com/) does contain a fair number of interesting posts for the neuroscientist, such as a recent one on mind-computer interfaces for robots. Recent articles, however, have focused more on topics as varied as biological mimicry, hibernation, bionic hands, animals that blow bubbles to smell underwater, and other non-neuroscience topics. As such, it focuses on fascinating science questions in general, of which neuroscience is certainly a part.

This ends our first review.  Stay tuned for further reviews, as well as new content and pointers to other interesting articles!

-PL