Computational models that are implemented, i.e., written out as equations or software, are an increasingly important tool for the cognitive neuroscientist. This is because implemented models are, effectively, hypotheses that have been worked out to the point where they make quantitative predictions about behavior and/or neural activity.
In earlier posts, we outlined two computational models of learning hypothesized to occur in various parts of the brain: Hebbian-like LTP (here and here) and error-correction learning (here and here). The computational model described in this post contains hypotheses about how we learn to make choices based on reward.
The goal of this post is to introduce a third type of learning: Reinforcement Learning (RL). A number of cognitive neuroscientists hypothesize that RL is implemented by the basal ganglia/dopamine system. It has become something of a hot topic in cognitive neuroscience and received a lot of coverage at this past year’s Computational Cognitive Neuroscience Conference.
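To make the idea of an implemented model concrete before we dig in, here is a minimal sketch (not from the original post) of the kind of value update that sits at the core of many RL models: an estimate of an action's value is nudged toward each received reward in proportion to the reward prediction error, the same quantity often linked to dopamine signaling. The function name `td_update` and the learning rate `alpha` are illustrative choices, not established terminology from this series.

```python
import random

def td_update(value, reward, alpha=0.1):
    """One temporal-difference-style update: move the value estimate
    toward the observed reward by a fraction (alpha) of the
    reward prediction error (delta)."""
    delta = reward - value   # reward prediction error
    return value + alpha * delta

# Toy example: learn the value of an action that pays 1.0 on 80% of trials.
random.seed(0)
v = 0.0
for _ in range(1000):
    reward = 1.0 if random.random() < 0.8 else 0.0
    v = td_update(v, reward)

print(round(v, 2))  # the estimate should hover near 0.8, the expected reward
```

The point of even a toy implementation like this is that it makes a quantitative prediction: the learned value should track the expected reward, and the size of each update should scale with how surprising the outcome was.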