Research Interests: Auditory Perception, Systems Neuroscience, Brain and Music, Artificial Intelligence, Brain–Computer Interfaces
Professor Chillale is a neuroscientist interested in how the brain converts the sensory features of sound (vibrations in the air) into meaningful perceptions and appropriate decisions. His work bridges animal and human systems by combining high-density electrophysiology in freely behaving animals with human EEG recordings. He tracks the emergence of sound categories in the auditory cortex and translates those principles to speech, decoding the neural dialogue that links speaking, listening, and imagined speech. Guided by predictive-coding theory, he also probes how musical structure is learned, exposing animals and humans to music and tracking the brain’s anticipatory signals.
Professor Chillale earned his PhD in computational neuroscience from Jawaharlal Nehru University (JNU, New Delhi). He did postdoctoral work at the Department of Cognitive Sciences at the École Normale Supérieure, Paris, where he worked on the neural mechanisms of auditory categorisation and the effects of attention and context on goal-directed behaviours. He also worked as an Assistant Research Scientist in the Department of Electrical Engineering at the University of Maryland, USA. His methodologies range from animal behaviour studies and human recordings to data analysis with machine learning and AI, and computational and mathematical modelling. The long view is translational: insights into optimal auditory coding can inform next-generation speech neuroprostheses and low-cost EEG diagnostics for India’s multilingual classrooms. He aims to couple rigorous theory with open, inclusive neurotechnology to tackle real-world hearing and communication challenges.
Professor Chillale's research spans systems neuroscience, auditory cognition, and neural signal decoding, particularly focusing on how the brain learns, predicts, and categorises complex acoustic input. It integrates electrophysiological recordings in behaving animals, computational modelling, and translational neuroscience using non-invasive methods in humans.
Auditory Perception
Categorical Sound Representations
In his postdoctoral work at ENS Paris, Professor Chillale studied to what extent the primary auditory cortex is involved in encoding perceptual information and how this information is converted into task-relevant decisions. He trained ferrets on a delayed categorisation task and recorded from the primary auditory cortex while the ferrets were (a) passively listening to the stimuli and (b) actively engaged in the task, and then contrasted the neural population activity between the two conditions. He demonstrated that the primary auditory cortex not only encodes acoustic features but also actively maintains task-relevant sound categories during active listening. Using dimensionality-reduction techniques, he revealed how category representations evolve dynamically as a function of behavioural context (eLife, 2023).
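Such condition contrasts are often visualised by projecting the population activity onto a shared low-dimensional space. A minimal sketch of the idea using PCA via SVD, where the simulated firing rates and all dimensions are illustrative placeholders, not the published recordings:

```python
import numpy as np

# Illustrative only: simulated firing rates stand in for real recordings.
rng = np.random.default_rng(0)
n_neurons, n_time = 50, 100

# Trial-averaged population activity (neurons x time) in two conditions
passive = rng.normal(size=(n_neurons, n_time))
active = passive + rng.normal(scale=0.5, size=(n_neurons, n_time))

# PCA via SVD on the combined, mean-centred data
X = np.hstack([passive, active])               # neurons x (2 * time)
mean = X.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(X - mean, full_matrices=False)

# Population trajectories in the top-3 PC space, one per condition
traj_passive = U[:, :3].T @ (passive - mean)   # 3 x time
traj_active = U[:, :3].T @ (active - mean)     # 3 x time
```

Plotting the two trajectories then shows how the population geometry differs between passive listening and task engagement.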
Musical Structure and Predictive Coding
Building on predictive coding frameworks, he studied how non-human animals, particularly ferrets, encode higher-order musical structure. By exposing animals to structured music (e.g., Bach chorales), he found neural evidence for statistical learning and musical enculturation. This cross-species paradigm revealed that cortical responses become optimised for temporal regularities in music over time (ARO talk, 2024).
Sequence Encoding and Speech Comprehension
Using ‘roving oddball’ paradigms with nonsense words and syllables, he collaborated with PhD students at the University of Maryland to investigate how the auditory cortex encodes phonemic sequences and detects statistical violations. These findings suggest that the brain uses implicit statistical learning to build representations of temporal regularities, providing insights into how the cortex might encode syllable- and word-level patterns relevant to natural speech comprehension (under preparation, 2025).
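For readers unfamiliar with the paradigm, a roving oddball presents trains of a repeated token whose identity changes ("roves") at unpredictable points, so the first token of each new train acts as a deviant relative to the preceding train. A hypothetical sketch of such a sequence generator, with an invented syllable set and train lengths:

```python
import random

syllables = ["ba", "da", "ga", "pa", "ta"]  # invented token set

def roving_oddball(n_trains, min_rep=4, max_rep=8, seed=0):
    """Trains of a repeated syllable whose identity roves between trains,
    so the first token of each train is a statistical deviant."""
    rng = random.Random(seed)
    sequence, current = [], None
    for _ in range(n_trains):
        # pick a new syllable different from the previous train's
        current = rng.choice([s for s in syllables if s != current])
        sequence += [current] * rng.randint(min_rep, max_rep)
    return sequence

seq = roving_oddball(3)
```

Neural responses to the first (deviant) versus later (standard) tokens of each train then index how quickly the cortex updates its predictions.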
Contextual Modulation of Behaviour and Neural Codes
In earlier work, he investigated how animals may acquire task-relevant knowledge even before behavioural expression is evident. By tracking learning-related changes in brain activity and behavioural readouts, he demonstrated a dissociation between acquisition and expression of knowledge, highlighting the role of internal states and context in shaping performance (Nature Communications, 2019).
Systems Neuroscience
Sleep–Wake State Dynamics
During his PhD research, he applied nonlinear time-series techniques to rodent EEG to characterise transitions between sleep and wakefulness. This work demonstrated that brain states are locally distributed and exhibit nonstationary features that can be decomposed using empirical mode decomposition (PLoS ONE, 2013).
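Empirical mode decomposition iteratively "sifts" a signal into intrinsic mode functions by subtracting the mean of its upper and lower extrema envelopes. A simplified, dependency-free sketch of one sifting step on a toy signal (real EMD uses cubic-spline envelopes and iterates to a stopping criterion; this is not the thesis code):

```python
import numpy as np

def local_extrema(x):
    """Indices of local maxima and minima of a 1-D signal."""
    d = np.diff(x)
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    return maxima, minima

def sift_once(x):
    """One sifting iteration: subtract the mean of the two envelopes.
    Linear interpolation stands in for the cubic splines of real EMD."""
    t = np.arange(len(x))
    maxima, minima = local_extrema(x)
    upper = np.interp(t, maxima, x[maxima])
    lower = np.interp(t, minima, x[minima])
    return x - (upper + lower) / 2.0

# Toy 'EEG': a slow oscillation plus a faster ripple
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
candidate_imf = sift_once(signal)
```

Repeating the sift until the candidate satisfies the IMF criteria, then subtracting it and recursing on the residue, yields the full decomposition.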
Synchronisation and Coupling in Neural Systems
Using models of coupled chaotic neurons, he studied how the strength of coupling and noise can modulate synchrony in cortical circuits. These findings contributed to understanding how dynamic neural codes can remain robust despite internal variability (Chaos, 2016).
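The qualitative result, that coupling strength controls whether chaotic units synchronise, can be illustrated with a toy model. The sketch below uses two diffusively coupled chaotic logistic maps, a standard textbook stand-in rather than the neuron model from the paper:

```python
# Assumed toy model, not the paper's: two diffusively coupled logistic maps.

def f(x, r=4.0):
    """Fully chaotic logistic map."""
    return r * x * (1.0 - x)

def sync_error(eps, n_steps=2000, n_avg=200, x0=0.3, y0=0.6):
    """Mean |x - y| over the last n_avg steps for coupling strength eps."""
    x, y = x0, y0
    errs = []
    for i in range(n_steps):
        fx, fy = f(x), f(y)
        x = (1 - eps) * fx + eps * fy
        y = (1 - eps) * fy + eps * fx
        if i >= n_steps - n_avg:
            errs.append(abs(x - y))
    return sum(errs) / n_avg

weak = sync_error(eps=0.05)    # below the synchronisation threshold
strong = sync_error(eps=0.45)  # above it: the units lock together
```

Sweeping `eps` locates the threshold at which the synchronisation error collapses to zero, mirroring the coupling-strength dependence studied in the paper.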
EEG Dynamics in Brain Injury and Arousal Transitions
In translational studies, he analysed EEG activity during recovery from cardiac arrest in rodent models. He found that therapeutic interventions like metformin influence mitochondrial function and enhance the recovery of EEG signals, suggesting a neuroprotective mechanism at the level of cortical state transitions (The FASEB Journal, 2022).
Artificial Intelligence & Brain–Computer Interfaces (BCI)
Decoding Speaking, Listening, and Imagined Speech
At the University of Maryland, he is working with a PhD student to develop neural decoding frameworks that map high-density EEG signals across speaking, listening, and imagined speech. By applying deep neural networks, he is working to uncover a shared latent space across these modalities, with long-term applications for imagined-speech BCIs and speech neuroprostheses (under preparation).
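The shared-latent-space idea can be illustrated with a linear stand-in. The sketch below uses canonical correlation analysis (CCA) on simulated two-modality features instead of the deep networks used in the actual work; all data and dimensions are invented:

```python
import numpy as np

# Invented data: both 'modalities' observe a shared 3-D latent signal
# through different mixing matrices, plus sensor noise.
rng = np.random.default_rng(1)
n_trials, n_chan = 500, 16
latent = rng.normal(size=(n_trials, 3))
listen = latent @ rng.normal(size=(3, n_chan)) \
    + 0.1 * rng.normal(size=(n_trials, n_chan))
imagine = latent @ rng.normal(size=(3, n_chan)) \
    + 0.1 * rng.normal(size=(n_trials, n_chan))

def canonical_correlations(X, Y, k=3, reg=1e-6):
    """Top-k canonical correlations between two feature sets."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)
    # Whiten each view via Cholesky, then SVD of the cross-covariance
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy)       # Lx^{-1} Cxy
    M = np.linalg.solve(Ly, M.T).T     # ... Ly^{-T}
    return np.linalg.svd(M, compute_uv=False)[:k]

corrs = canonical_correlations(listen, imagine)
```

High top-k correlations indicate a common subspace across the two modalities; deep networks generalise this idea to nonlinear shared representations.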
Disentangling Auditory and Motor Signals in Music Performance
In a multi-modal EEG study, he analysed neural responses while violinists either played, mimed, or listened to music. This design allowed him to isolate motor and sensory contributions to cortical dynamics, offering new insights into sensorimotor integration and cortical plasticity in musical contexts (ARO, 2024).