I-X Seminar Series: Mechanisms of deep learning in the brain with Rafal Bogacz

Key Details:
Registration is now closed
Recorded Event

Speaker

Rafal Bogacz

Rafal Bogacz graduated in computer science from Wroclaw University of Technology in Poland. He then completed a PhD in computational neuroscience at the University of Bristol and worked as a postdoctoral researcher at Princeton University, USA, jointly in the Departments of Applied Mathematics and Psychology. In 2004 he returned to Bristol, where he worked as a Lecturer and then a Reader, before moving to the University of Oxford in 2013.

His research is in the area of computational neuroscience, which seeks to develop mathematical models describing the computations in the brain that give rise to our mental abilities. He is particularly interested in modelling learning processes in the brain, both in the cortex and in the subcortical regions underlying reinforcement learning. He also investigates how treatments involving brain stimulation can be refined to optimize their effectiveness and reduce side effects.

Talk Title

Mechanisms of deep learning in the brain

Talk Summary

For both humans and machines, the essence of learning is to pinpoint which components in the information processing pipeline are responsible for an error in the output – a challenge known as credit assignment. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. This talk will suggest a fundamentally different principle of credit assignment, called prospective configuration. In prospective configuration, the network first infers the pattern of neural activity that should result from learning, and then the synaptic weights are modified to consolidate the change in neural activity. The talk will demonstrate that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established family of models of cortical circuits, (2) enables learning that is more efficient and effective in many contexts faced by biological organisms, and (3) reproduces surprising patterns of neural activity and behaviour observed in diverse human and rat learning experiments.
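To make the two-phase idea concrete, below is a minimal sketch of prospective configuration in a small linear predictive coding network, written in Python with NumPy. The structure follows the summary above: the activities first settle with the output clamped to the target, and only then are the weights modified to consolidate the settled pattern. The specific architecture, energy function, learning rates, and variable names are illustrative assumptions, not the speaker's actual models.

```python
# Minimal sketch of prospective configuration in a linear predictive coding
# network (input x -> hidden h -> output y). Illustrative assumptions only:
# the energy function and hyperparameters are not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # hidden -> output weights

def train_step(W1, W2, x, target, n_inference=50, lr_act=0.1, lr_w=0.02):
    """One learning step: first infer the prospective activity pattern,
    then modify the weights to consolidate it."""
    # Feedforward pass initialises the neural activities.
    h = W1 @ x
    y = target.copy()  # clamp the output units to the desired outcome

    # Phase 1 (inference): relax the hidden activity to minimise the
    # prediction-error energy E = ||h - W1 x||^2/2 + ||y - W2 h||^2/2,
    # so h settles into the pattern that should result from learning.
    for _ in range(n_inference):
        e1 = h - W1 @ x                  # error at the hidden layer
        e2 = y - W2 @ h                  # error at the output layer
        h -= lr_act * (e1 - W2.T @ e2)   # gradient of E w.r.t. h

    # Phase 2 (consolidation): local, Hebbian-like weight updates that
    # make the settled activities self-consistent.
    e1 = h - W1 @ x
    e2 = y - W2 @ h
    return W1 + lr_w * np.outer(e1, x), W2 + lr_w * np.outer(e2, h)

# Usage: with repeated steps the feedforward output approaches the target.
x = np.array([1.0, 0.5, -0.3, 0.8])
t = np.array([0.2, -0.4])
for _ in range(200):
    W1, W2 = train_step(W1, W2, x, t)
print(np.linalg.norm(W2 @ (W1 @ x) - t))  # residual error shrinks
```

Note that both weight updates use only quantities available locally at each connection (a presynaptic activity and a postsynaptic error), which is what makes this style of credit assignment biologically plausible, in contrast to backpropagation's globally propagated gradients.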

Event Recording

Watch on YouTube
