Online via Zoom
Attention is a brain mechanism that focuses processing on the relevant features of incoming stimuli, and it is central to perception, learning, and memory. In object-based attention, which is particularly important in visual perception, the brain forms a unitary percept of a sensory object by binding together the object's different features. Attention is so central to cognition that it is often regarded as one of the defining features of consciousness. Do attentional mechanisms emerge in machine learning architectures? Transformers, the artificial neural network architectures at the core of modern AI, use a mechanism called self-attention that has been described as resembling biological attention. Professor Kording will explore the links between attention in the biological brain and self-attention in transformers. He will discuss how neuroscience and AI can inform each other, and whether AI can indeed solve the binding problem.
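For readers unfamiliar with the machine-learning side of the talk, the self-attention mechanism mentioned above can be sketched in a few lines. This is a minimal illustration of standard scaled dot-product self-attention, not the speaker's own model; the matrix sizes and random projections are arbitrary choices for the example.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    # Project the input tokens into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Similarity scores: how strongly each token "attends" to every other token.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over each row turns scores into a normalized attention distribution.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mixture of the value vectors.
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))                        # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape, w.shape)                              # (4, 8) (4, 4)
```

Each row of the attention matrix `w` sums to one, so every output token is a soft, selective blend of the whole input, which is the loose analogy to attention "focusing on relevant features" in the brain.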
Professor Kording is a computational neuroscientist whose work lies at the intersection of neuroscience, machine learning, and causal inference. He received his PhD in Physics from ETH Zurich and later trained in neuroscience. He is currently Professor of Bioengineering and Neuroscience at the University of Pennsylvania.
His research focuses on understanding how the brain learns, represents information, and assigns credit—often using machine learning models as tools to test and refine theories of neural computation. He is also a strong advocate of open science and has played a leading role in large-scale educational initiatives such as Neuromatch, aimed at making neuroscience and machine learning more accessible globally.
Professor Ratnam is a neuroscientist with broad interests in brain and behaviour and currently serves as Dean of the Graduate School and Research at Ahmedabad University. His work spans systems neuroscience and cognition, with a focus on understanding how neural circuits support perception and behaviour.