
Algorithms and Learning

The last few days at SfN came at me, and maybe you, from all angles. It’s been a deluge of information and insight into some amazing work. One ‘theme’ I noticed covered the mechanisms of learning from both the biological and computational perspectives. I call this post:

Synthesizing experts’ theories on learning algorithms and algorithms for learning

These concepts are interesting because how we learn shapes our memories, and this interaction is fundamental to how we live and how we persist as a species. My own work explores memories in the brain, sometimes referred to as engrams, whereas here we will explore learning. (Though I was happy to see a chapter on engrams in Dr. Demis Hassabis’s thesis.)

[Image: the engram chapter from Dr. Hassabis’s thesis]

Presenting in the session ‘From salient experience to learning and memory’, Dr. Andreas Lüthi explored amygdalar circuits during learning to understand, algorithmically, how the learned association between a conditioned stimulus (CS) and an unconditioned stimulus (US) is formed.

How associative learning works mechanistically has wide implications in medicine, where an experience can become associated with immense anxiety, as well as in artificial intelligence, where it can inspire new computer algorithms for learning.

A feature of the amygdala’s learning algorithm stems from an old observation that principal cells in the basolateral amygdala (BLA) fire very rarely. Lüthi attributes this to strong inhibition from surrounding interneurons. To explore the circuitry that produces this, he and his lab set out to visualize the activity of interneurons during an associative learning task. In these experiments they found that VIP interneurons activate to the unconditioned stimulus, but that subsequent unconditioned stimuli cause a decrease in VIP activity. This decrease in inhibitory signal correlates with an increase in freezing, a common behavioral output measure. The work also found that inhibiting VIP activity during associative learning causes reduced freezing later.

Therefore, to properly form the association, VIP interneurons need to fire initially to facilitate increased activity in the principal neurons, then decrease their firing to allow the principal neurons to trigger the behavioral output. This demonstrates one way interneurons help tune principal neurons to form the BLA’s computational circuitry during associative learning.
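To make that dynamic concrete, here is a minimal toy rate model, my own sketch of the talk’s narrative rather than anything Lüthi presented: VIP activity gates potentiation of the CS–principal-cell association on early pairings, then decays across trials, and freezing is expressed as that gate falls. The learning rate, decay constant, and gating rule are all assumptions made for illustration.

```python
# Toy sketch of the VIP-gated learning dynamic (my reading of the talk,
# not a published model). All constants are arbitrary.
n_trials, lr, decay = 10, 0.4, 0.7
vip = 1.0     # VIP interneuron response to the first US (normalized)
assoc = 0.0   # strength of the CS -> principal-cell association

for t in range(n_trials):
    assoc += lr * vip * (1.0 - assoc)  # early VIP firing gates potentiation
    freezing = assoc * (1.0 - vip)     # output expressed as VIP drive falls
    print(f"trial {t}: VIP={vip:.2f}  assoc={assoc:.2f}  freezing={freezing:.2f}")
    vip *= decay                       # assumed decay of VIP response to repeated US
```

Setting `vip = 0.0` at the start reproduces the silencing experiment in spirit: the association never forms, so freezing stays low.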

Moving more broadly, general computational methods for learning are being investigated at Google’s DeepMind, run by Dr. Demis Hassabis. During a lecture he examined the recent interplay between neuroscience and artificial intelligence.

The work at DeepMind, as Hassabis’s lecture showed, has recently been to train neural networks on Go, and other games, so well that they beat human champions. Learning difficult games to such an extreme level has taught Go players, for example, new strategies, and Hassabis argues this level of learning will broaden our collective intelligence rather than undermine human capacity.
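The core training idea behind these systems is self-play: a policy improves by playing against itself and reinforcing the moves that led to wins. Here is a deliberately tiny illustration of that loop, my own toy rather than DeepMind’s method, using a throwaway game where the higher of two simultaneously chosen moves wins:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)  # one shared policy over three moves; move 2 always wins

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    p = softmax(logits)
    a = rng.choice(3, p=p)  # the policy plays...
    b = rng.choice(3, p=p)  # ...against a copy of itself
    if a == b:
        continue                     # draw: no learning signal
    reward = 1.0 if a > b else -1.0  # +1 if player A wins
    # REINFORCE updates: nudge the shared policy toward winning moves
    grad_a = -p
    grad_a[a] += 1.0                 # gradient of log p(a) wrt logits
    grad_b = -p
    grad_b[b] += 1.0
    logits += 0.1 * reward * (grad_a - grad_b)

print(np.round(softmax(logits), 3))  # probability mass concentrates on move 2
```

The real systems add deep networks and tree search, but the improve-by-playing-yourself loop has the same shape.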

In addition to training neural networks to learn games, the company has started to develop networks that, given two or three 2D views of an environment, can generate the entire 3D space. Right now this works for simplistic spaces, but he hopes the ability to generate scenes will teach us about our own ability to imagine. Together, these computer algorithms will inform us about the human experience of learning.
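In broad strokes, the recipe for such scene networks is to encode each observed view together with its camera pose, pool the encodings into one scene representation, and decode that representation from a new query pose. The sketch below is my own simplification of this idea (loosely in the spirit of DeepMind’s later-published Generative Query Network), with made-up layer sizes; it is not their implementation:

```python
import torch
import torch.nn as nn

class SceneNet(nn.Module):
    """Sketch: encode (view, pose) pairs, pool, render from a query pose."""

    def __init__(self, img_dim=64 * 64 * 3, pose_dim=7, rep_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(img_dim + pose_dim, 512), nn.ReLU(),
            nn.Linear(512, rep_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(rep_dim + pose_dim, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Sigmoid(),
        )

    def forward(self, views, poses, query_pose):
        # views: (n_views, img_dim); poses: (n_views, pose_dim)
        reps = self.encoder(torch.cat([views, poses], dim=-1))
        scene = reps.sum(dim=0)  # order-invariant scene representation
        return self.decoder(torch.cat([scene, query_pose], dim=-1))

net = SceneNet()
views = torch.rand(3, 64 * 64 * 3)  # 2-3 observed 2D views, flattened
poses = torch.rand(3, 7)            # camera position + orientation per view
query = torch.rand(7)               # unseen viewpoint to render
predicted_view = net(views, poses, query)
print(predicted_view.shape)         # torch.Size([12288])
```

Summing the view encodings makes the scene representation order-invariant, so it does not matter which two or three views of the space you happen to observe.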

These two lectures explored learning in very different ways. The biological approach informs us about our natural learning algorithms and the circuit structure that implements them. The computational approach explores algorithms that mirror our natural ones as well as algorithms that do not, letting us compare and contrast the two. Even more interesting is that the computational learning algorithms are able to advance our knowledge, as observed in the new Go strategies. Perhaps this can navigate us around circuit ‘blocks’ in our own learning systems toward a better understanding of the natural world.


Patrick E. Steadman, MSc
PhD Candidate, Frankland Lab, The Hospital for Sick Children
MD-PhD Student, University of Toronto
Neuronline: @patrick.steadman
Twitter: @pesteadman
Blog: patricksteadman.ca/blog
