I am a neuroscientist with a broad interest in brain and behaviour. I am particularly interested in the neurobiological mechanisms of sensory processing that give rise to perception. The senses that are the focus of my research are audition and the electrosense of weakly electric fish. Within sensory processing, my work focuses on the coding, detection, and estimation of behaviourally relevant signals that are either at the limits of sensory threshold (i.e., extremely weak or faint signals) or corrupted by noise. The processing of near-threshold and noisy signals confers an important survival advantage, and has presumably been subject to strong selective pressure. At the proximate level, such selection may have driven the evolution of an optimal neural code. This key idea has been elaborated in our recent work on an optimal neural code that is high-fidelity and yet energy-efficient. While this research helps to advance our knowledge of coding and signal representation in neurons, the biomedical applications are just as promising. One line of my current research applies the optimal neural code to the development of improved speech processors for cochlear implants. If the optimal code is valid, i.e., if we have captured the correct representation of signals in sensory neurons, then damaged sensory organs or neural tissue can be replaced with bionic interfaces. Thus, this work holds promise for improved neural implants.
I obtained my doctoral degree in Biophysics from the University of Illinois at Urbana-Champaign, and was on the biology faculty at the University of Texas at San Antonio. Subsequently, I was a Senior Scientist with the University of Illinois, where I was affiliated with the College of Engineering and the Neuroscience Program. I obtained my bachelor's degree from the Indian Institute of Technology, Delhi. My engineering roots have deeply influenced my research in biology. It is my belief that biological mechanisms, which are a result of natural selection, reach the theoretical performance bounds predicted by mathematical and engineering analysis. In this respect, selection and optimal design are one and the same.
A. Neural coding and sensory signal processing
Background and motivation
Sensing, which is the capability to accurately determine and classify our environment, is central to survival. We are able to navigate our environment, accurately locate things of interest, avoid danger, find food, and so on, because we can map the world we live in - by sight, sound, touch, smell, taste, and the orientation of our body in space. We take our senses for granted until we lose them. Try walking around with one eye closed and one ear blocked, and you will understand that the world becomes harder to negotiate. Some animals have strange, alien senses. Bats can use sound (sonar) to hunt and navigate, pit vipers can image the environment using infrared, honey bees can see ultraviolet, and a type of weakly electric fish (which I study) images the immediate environment using electrical tomography. Irrespective of the type of sense, each organism uses its sensory capabilities to survive. How the nervous system is capable of the extraordinary computations required to accurately map our world in real-time, is one of the fundamental mysteries of brain function. To put this problem in perspective, most of the sensory problems solved by the nervous system have defeated the best engineering approaches. On the other hand, when the engineering solutions are known, the equivalent biological solution is identical and optimal, in an engineering sense. In other words, natural selection has converged to sensory solutions that are probably at the theoretical limits of performance.
Sensory capabilities are phylogenetically old, and have reached extraordinary specialization in vertebrates and invertebrates, with the development of complex sensory end organs such as the inner ear or the retina. These end organs transduce and encode the physical sensory signal (sound, light, mechanical pressure, etc.) into neural signals which are then conveyed to the central nervous system (CNS). Within the CNS these neural signals are processed in parallel, hierarchical pathways. Features of the sensory stimulus are extracted by various neural circuits, maps of objects are formed when necessary, and ultimately the sensory environment with all its component objects is perceived as a coherent whole, so that behaviour can be executed. The transduction of sensory signals, the computation of important properties of the signal, and the extraction of signal features are broadly a part of the area known as neural coding.
The benefits obtained from understanding the neural code extend beyond a fundamental understanding of brain mechanisms. They have deep and important applications in neural engineering. For example, the neural code can be used in neural implants to replace damaged sensory organs like the inner ear or retina, or brain tissue damaged by stroke or injury. They could also be used for motor prostheses. The following projects describe fundamental research in sensory neuroscience and also an application to neural implants.
All my research projects are multi-disciplinary. Interested students and postdocs can pick any one approach they like (e.g., experimental neuroscience) or take a multi-disciplinary approach (e.g., a mix of experiments, theory, and computations). In addition to biologists, I encourage students and postdocs in psychology, cognitive neuroscience, physics, engineering, and computer science, to join my group.
1. The neural coding of signals
Collaborators: Professor Douglas L. Jones, William L. Everitt Distinguished Professor, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Dr. Erik Johnson, Cognitive and Experimental Neuroscientist, Applied Physics Laboratory, The Johns Hopkins University, Baltimore, MD, USA.
Sensory stimuli can take many different forms. Light is an electromagnetic wave, sound is a mechanical pressure wave, smell is mediated by chemicals, and so on. Nevertheless, when these stimuli impinge on sensory receptors, they are all converted (transduced) into sequences of electrical impulses (or spikes), which form a common currency for information processing in the brain. This is analogous to the use of binary digits (0s and 1s) in a computer for storing all sorts of information. One of the accepted ideas about the conversion is that information about the features of the stimulus is contained solely in the timing of these spikes (spike-times). If we know the spike-times then we can recover the stimulus. A major line of my current research is determining how good this internal representation (the spike-times) is when compared to the external stimulus, and indeed, which aspects of the coding and spike-timing govern the fidelity of the stimulus representation. My collaborator Douglas L. Jones and I argue that if the energy expenditure of the neuron is fixed, then natural selection has led to physiological mechanisms which must generate an optimal code in the sense of being the best possible representation of the stimulus (see Jones et al., 2015; Johnson et al., 2015, 2016). This optimality is the product of selective pressure that trades off metabolic energy expenditure against fidelity of coding. We refer to this optimal code as a source code, and the neuron as a source coding neuron. The key testable prediction of this optimal code is that it generates spike-times which can be directly compared to experimentally obtained spike-times. We have partial experimental support for this hypothesis, and a good match between predicted and experimental spike-times. We will continue to investigate the code further in the electrosensory system of weakly electric fish. This is a rich problem that intersects with the coding of digital signals and information theory.
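The encode-decode logic can be sketched in a few lines (a minimal integrate-and-fire style sketch with assumed threshold and stimulus values, not our published source-coding model): a neuron-like encoder converts a positive stimulus into spike-times, and the stimulus is then recovered from the inter-spike intervals alone.

```python
import numpy as np

def encode_spike_times(stimulus, dt=1e-3, threshold=0.05):
    """Emit a spike whenever the running integral of the (positive)
    stimulus accumulates one threshold quantum. A simplified sketch,
    not the published source-coding model."""
    spikes, accum = [], 0.0
    for i, s in enumerate(stimulus):
        accum += s * dt
        if accum >= threshold:
            spikes.append(i * dt)      # record the spike time
            accum -= threshold         # reset by one quantum
    return np.array(spikes)

def decode_from_intervals(spike_times, t_grid, threshold=0.05):
    """Recover the stimulus as threshold / inter-spike interval,
    i.e., the average stimulus between consecutive spikes."""
    est = np.zeros_like(t_grid)
    for t0, t1 in zip(spike_times[:-1], spike_times[1:]):
        est[(t_grid >= t0) & (t_grid < t1)] = threshold / (t1 - t0)
    return est

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
stim = 2.0 + np.sin(2 * np.pi * 2 * t)           # always positive
spikes = encode_spike_times(stim, dt)
est = decode_from_intervals(spikes, t)
mid = slice(200, 800)                             # ignore edge effects
rel_err = np.mean(np.abs(est[mid] - stim[mid])) / np.mean(stim[mid])
print(f"{len(spikes)} spikes, mean relative error {rel_err:.1%}")
```

Shorter inter-spike intervals signal a stronger stimulus, so in this sketch the spike-times alone carry the waveform.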
This project is an active, ongoing collaboration with Dr. Jones. It is open to students of biology, engineering, and computer science.
2. Neural detection and estimation of signals in noise
Collaborators: Professor Douglas L. Jones, William L. Everitt Distinguished Professor, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Dr. Erik Johnson, Cognitive and Experimental Neuroscientist, Applied Physics Laboratory, The Johns Hopkins University, Baltimore, MD, USA; Mr. Robin Singh Sidhu, Graduate Student, University of Illinois at Urbana-Champaign, Urbana, IL, USA.
Former collaborators: Professor Mark E. Nelson, Molecular and Integrative Physiology, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Dr. Jozien Goense, Senior Research Fellow, Department of Psychology, University of Glasgow, Glasgow, UK.
It is often believed that a single neuron by itself is unreliable and noisy, and not capable of representing stimuli with any degree of fidelity. The argument goes that fidelity is guaranteed only when we take many neurons and sum their representations so as to cancel noise. Further, given that the nervous system has "billions and billions" of neurons (a la Carl Sagan), a single or even a few neurons do not matter. Our argument based on the optimal code contradicts this belief and states that a neuron is a remarkably precise device that is capable of coding information with great precision and fidelity. Thus, every neuron matters. Our earlier results on the source coding neuron establish the spike-timing code as the best possible representation, i.e., one providing the best possible estimate of the sensory stimulus. The question is whether it is also capable of detecting sensory stimuli that are close to threshold (i.e., extremely weak or faint signals buried in background noise). We encounter this problem often: did the phone ring (or did the baby cry) while we were taking a shower? Was that faint rustling sound in the grass a tiger stalking its prey? These important signals are so close to the limits of hearing that we can easily miss them and pay a heavy price in terms of survival. Some years ago, I showed that electrosensory neurons in weakly electric fish appear to be noisy in terms of their spike-times, but the statistical structure of the spike-times suppresses noise by damping random fluctuations, and makes weak signals pop out of the noise (in comparison with Poisson-like spike-times, where the noise is undamped and capable of large excursions from the mean, thereby completely obscuring the signal) (Ratnam and Nelson, 2000; Goense and Ratnam, 2003; Ratnam et al., 2003a; Goense et al., 2003).
We argued that the statistical properties of the spike-times are fundamental to optimal detection (i.e., they maximize signal detection probability for a fixed probability of false alarms). As an aside, such processes are found in the field of economics and finance, where they are referred to as mean-reverting processes. In recent extensions of the source coding neuron, Doug Jones and I showed that the statistical structure of the experimentally observed spike-times is a consequence of optimal source coding, and thus, this optimal code also facilitates optimal detection. Using theory and experiments in the electrosensory system of weakly electric fish, we will show that the statistical structure of spike-times from electrosensory primary neurons is shaped by endogenous and exogenous influences. This project is an active, ongoing collaboration with Dr. Jones. It is open to students of biology, engineering, and computer science.
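The noise-damping role of the interval statistics can be illustrated with a toy simulation (synthetic spike trains with assumed parameters, not electrosensory recordings): when each inter-spike interval corrects most of the previous interval's deviation from the mean, the spike count in a fixed counting window fluctuates far less than in a Poisson train of the same mean rate, which is what lets a weak, count-changing signal stand out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
mean_isi = 0.01                       # mean interval (s): 100 spikes/s

# Poisson-like (renewal) train: independent exponential intervals
isis_poisson = rng.exponential(mean_isi, n)

# Anti-correlated (non-renewal) train: each interval corrects most of
# the previous interval's deviation from the mean. This AR(1) process
# is illustrative, not the fitted model of the electrosensory afferents.
isis_anti = np.empty(n)
dev = 0.0
for k in range(n):
    dev = -0.8 * dev + rng.normal(0, 0.3 * mean_isi)
    isis_anti[k] = max(mean_isi + dev, 1e-4)

def fano_factor(isis, window=0.5):
    """Variance-to-mean ratio of spike counts in fixed windows
    (Fano factor = 1 for a Poisson process)."""
    spike_times = np.cumsum(isis)
    edges = np.arange(0.0, spike_times[-1], window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var() / counts.mean()

fano_poisson = fano_factor(isis_poisson)
fano_anti = fano_factor(isis_anti)
print(f"Poisson Fano factor: {fano_poisson:.3f}")
print(f"Anti-correlated Fano factor: {fano_anti:.3f}")
```

The anti-correlated train's count variance is an order of magnitude below the Poisson train's, so a small signal-induced change in spike count is far less likely to be masked by intrinsic fluctuations.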
3. Biophysical conductance based models of optimal neural coding
Collaborators: Mr. Alexander Asilador, Doctoral student, The Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA.
The central ideas underlying the optimal neural code (the source code) are axiomatic, and rely on an energy-fidelity trade-off. What are the biophysical substrates that are responsible for the code? The seminal work of Hodgkin and Huxley (physiologists at Cambridge University) in the 1940s and 1950s showed that two opposing ionic currents across the cell membrane of neurons, an inward sodium (Na+) current and an outward potassium (K+) current, generate the impulsive or spiking activity. This is the so-called spike generator. However, while this work showed how a single impulse or spike is generated, it did not specify how spike-timing is established. The current consensus is that numerous other outward K+ currents (mediated by voltage- and calcium-dependent ion channels) are responsible for regulating the timing of spikes. However, this is a qualitative idea, and so far there has been no principled understanding of its relevance to coding. We argue that a single K+ current (the so-called muscarinic or M-current, regulated by a voltage-dependent potassium channel) is the key regulator of spike-timing, and further, that it establishes the baseline energy constraint against which fidelity is traded off (Jones et al., 2015). Further, the M-current is the major ionic current that regulates the quality or fidelity of coding. The idea is parsimonious because it requires only a single K+ channel (the M-channel) for optimal coding. Crucially, this channel is ubiquitous in the region of the neuron called the spike-initiation region, where spikes are generated. Using a computational approach that extends Hodgkin and Huxley's coupled nonlinear differential equations to incorporate the M-current, we are able to estimate the parameters of the nonlinear system, and predict the time-dependent membrane potential and spike-times of cortical pyramidal neurons (using recordings obtained from the public domain) with good accuracy.
As a first step we have shown that the M-current is the most significant outward current responsible for spike-timing, and other K+ currents are not necessary. Our ongoing and future work will show that this biophysical model is a source coding neuron, i.e., it is the biophysical basis for an optimal neural code. This ongoing project is open to students of biology, engineering, computer science, and physics.
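A minimal conductance-based sketch of this mechanism (textbook Hodgkin-Huxley squid-axon parameters with an added slow M-type K+ current; the M-current conductance and time constant below are illustrative assumptions, not our fitted values): switching the M-conductance on converts a regular spike train into an adapting one, showing how this single outward current reshapes spike-timing.

```python
import math
import numpy as np

def simulate(i_ext=15.0, g_m=0.0, t_max=500.0, dt=0.01):
    """Hodgkin-Huxley membrane plus a slow M-type K+ current.
    Textbook squid-axon parameters; the M-current values are
    illustrative. Units: mV, ms, uA/cm^2, mS/cm^2."""
    c_m = 1.0
    g_na, e_na = 120.0, 50.0
    g_k, e_k = 36.0, -77.0
    g_l, e_l = 0.3, -54.4
    tau_p = 100.0                      # slow M-current time constant (ms)

    v, m, h, n, p = -65.0, 0.05, 0.6, 0.32, 0.0
    spikes, above = [], False
    for step in range(int(t_max / dt)):
        # Hodgkin-Huxley gating rates (ms^-1)
        a_m = 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
        b_m = 4.0 * math.exp(-(v + 65) / 18)
        a_h = 0.07 * math.exp(-(v + 65) / 20)
        b_h = 1.0 / (1 + math.exp(-(v + 35) / 10))
        a_n = 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
        b_n = 0.125 * math.exp(-(v + 65) / 80)
        p_inf = 1.0 / (1 + math.exp(-(v + 35) / 10))  # M activation

        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_m * p * (v - e_k) + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        p += dt * (p_inf - p) / tau_p  # slow build-up across spikes

        if v > 0 and not above:        # upward threshold crossing
            spikes.append(step * dt)
        above = v > 0
    return np.array(spikes)

# With g_M on, inter-spike intervals lengthen over the train
# (spike-frequency adaptation); without it the train stays regular.
isis_plain = np.diff(simulate(g_m=0.0))
isis_adapt = np.diff(simulate(g_m=2.0))
print(f"no M-current:   first/last ISI {isis_plain[0]:.1f}/{isis_plain[-1]:.1f} ms")
print(f"with M-current: first/last ISI {isis_adapt[0]:.1f}/{isis_adapt[-1]:.1f} ms")
```

The slow activation variable p integrates spiking history, so the M-current grows with activity and stretches later inter-spike intervals; this is the sense in which a single K+ conductance regulates spike-timing.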
4. Spike-timing based speech vocoding for cochlear implants
Collaborators: Professor Douglas L. Jones, William L. Everitt Distinguished Professor, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Professor Justin Aronoff, Speech and Hearing Science, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Dr. Erik Johnson, Cognitive and Experimental Neuroscientist, Applied Physics Laboratory, The Johns Hopkins University, Baltimore, MD, USA; Mr. Alexander Asilador, Doctoral student, The Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Ms. Hannah Staisloff, Doctoral student, Speech and Hearing Science, University of Illinois at Urbana-Champaign, Urbana, IL, USA.
The single most important testable idea of an optimal neural code is that it will accurately predict the spike-times obtained experimentally. We have data supporting this idea and are carrying out further tests. From a biomedical perspective, if we can predict spike-times, then we have captured the neural code. In principle, we should then be able to replace sensory neurons (such as those that convey sound information from the inner ear) with prosthetic neurons that mimic biological neurons, and the nervous system should not be able to tell the difference. The most successful neural prosthetic to date is the cochlear implant, traditionally offered to children who have severely impaired hearing. The cochlear implant is an electrode in the shape of a wire that is inserted into the cochlea (the sound organ in the inner ear). This wire is in close proximity to the sensory neurons that are responsible for the transmission of sound information through the auditory nerve. An incoming sound pattern is converted into a spatial pattern of electrical activity and delivered to the wire, which stimulates the sensory neurons in a manner that is presumed to be similar to that of the intact cochlea. This device is immensely successful and has benefited many children with hearing loss. However, it does not deliver stimulation that is similar to real neurons because it does not use a spike-timing code. Additionally, from a behavioral perspective, listeners are unable to appreciate music or determine the spatial location of a sound (called directional hearing). So, while speech perception is greatly improved, there is scope for improving the performance of the device. We believe that an optimal neural code that captures the electrical activity of real auditory neurons using a spike-timing code is more likely to provide improved pitch and music perception, and improved directional hearing.
In collaboration with Professors Doug Jones and Justin Aronoff, we are carrying out speech vocoder tests (i.e., speech that has been analyzed using the spike-timing code and then resynthesized) on normal-hearing listeners to prove the concept. Our preliminary results show improved binaural performance and improved directional hearing. Further tests are being conducted, along with a planned extension of the study to music perception. Eventually we plan to partner with a cochlear implant manufacturer and test the speech vocoder on cochlear implant users. The penetration of cochlear implants in India is low in comparison with other countries. This is cause for concern because India has a large population of young children, and thus a correspondingly large number of children likely to be suffering from hearing loss. The proposed approach is attractive because it requires no change in the surgical procedures for implantation or in the device itself: it can be implemented in software in the digital speech processor built into the implant. This enables significant cost-to-performance benefits. This ongoing project is open to students interested in biomedical engineering, audiology, and the physiology and psychology of human hearing.
B. Vocal-communication behavior in songbirds and anurans
1. Song variation in the Golden-cheeked warbler (Setophaga chrysoparia)
Collaborators: Ms. Wendy Leonard, Nature Preserve Officer, City of San Antonio Parks and Recreation Natural Areas, San Antonio, TX, USA; Ms. Jewell Cozort, Park Naturalist, San Antonio Natural Areas, San Antonio, TX, USA.
The Golden-cheeked warbler (Setophaga chrysoparia, formerly Dendroica chrysoparia), or GCW, is a critically endangered parulid warbler endemic to the Edwards Plateau in central Texas. It over-winters in Central America and returns to breed in the Edwards Plateau in spring and summer. Like all parulids, the GCW has a two-song system: Song A, used for advertisement and courtship, and Song B, used for territorial and male-male interactions. In 2009, Jayne Neale and Wendy Leonard observed that Song B had undergone variation in one of its syllables in several birds observed in the San Antonio region (Leonard et al., 2010). The variation included a significant reduction in the syllable bandwidth and modal frequency. To fully determine the extent of song variation in time and across the region, we conducted a field study with Wendy Leonard and Jewell Cozort, and recorded GCW songs across several counties in the south Edwards Plateau region. Our data indicate that, overall, Song B has undergone a significant broadening and shift of the spectral bandwidth to lower frequencies, with extensive modifications (including frequency modulation) to one syllable. Further, while this is true across the counties we studied, our data suggest that birds in neighboring counties exhibit more similar songs than birds from geographically distant counties. The variation in song has implications for the conservation of this critically endangered parulid because song is a key identifier in population surveys, and can further provide important clues on habitat degradation and anthropogenic influences. This collaborative project with Wendy Leonard is ongoing, and is open to students of biology (with an interest in the ecology and ethology of vocal communication behaviour), engineering, and computer science.
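The acoustic measurements behind such comparisons can be sketched as follows (a synthetic frequency sweep stands in for a recorded syllable, and the analysis is an illustrative NumPy version, not the exact pipeline used in the field study): estimate a syllable's modal (peak) frequency and its -20 dB bandwidth from the windowed power spectrum.

```python
import numpy as np

def syllable_spectrum(wave, fs):
    """Modal (peak) frequency and -20 dB bandwidth of one syllable,
    from its Hann-windowed power spectrum. Illustrative measurement,
    not the exact analysis pipeline used in the field study."""
    spec = np.abs(np.fft.rfft(wave * np.hanning(len(wave)))) ** 2
    freqs = np.fft.rfftfreq(len(wave), 1 / fs)
    peak = spec.argmax()
    in_band = freqs[spec >= spec[peak] / 100]   # within 20 dB of peak
    return freqs[peak], in_band.max() - in_band.min()

# Synthetic warbler-like syllable: a 5 -> 7 kHz sweep, 200 ms at 44.1 kHz
fs = 44100
t = np.arange(0, 0.2, 1 / fs)
wave = np.sin(2 * np.pi * (5000 * t + 0.5 * 10000 * t ** 2))
modal_f, bandwidth = syllable_spectrum(wave, fs)
print(f"modal frequency {modal_f:.0f} Hz, -20 dB bandwidth {bandwidth:.0f} Hz")
```

Tracking these two numbers per syllable, per bird, and per county is what allows quantitative comparisons of song variation across the region.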
2. Male-male interactions in frog and toad choruses
Collaborators: Professor Douglas L. Jones, William L. Everitt Distinguished Professor, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Mr. Russell L. Jones, Design Engineer, Sprite Robotics, Research Park, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
In many species of chorusing frogs, callers can rapidly adjust their call timing with reference to neighboring callers so as to maintain call rate while minimizing acoustic interference. The rules governing these interactions, in particular who is listening to whom, are largely unknown; they are presumably influenced by the distance between callers, caller density, and the intensities of interfering calls. In collaboration with Doug Jones, we developed a microphone array technique that: 1) localizes individual frogs in the dark by voice alone, and 2) employs an adaptive (spatial) beamforming technique to selectively extract the voice of each individual frog while suppressing the calls of others (Jones and Ratnam, 2009). In further collaboration with Russell Jones (Jones et al., 2014), we recorded vocal interactions in a unison bout caller, the American green tree frog (Hyla cinerea), focusing on a local group of six callers embedded in a larger chorus. Callers within this group were localized, and their voices were separated using the localizer and adaptive beamformer for analysis of spatio-temporal interactions. We showed that callers in this group: (1) synchronize with one another, (2) prefer to time their calls antiphonally, almost exactly at one-third and two-thirds of the call intervals of their neighbors, (3) tolerate call collision when antiphonal calling is not possible, and (4) perform discrete phase-hopping between three preferred phases when tracking other callers. Further, call collision increases, and phase-locking decreases, with increasing inter-caller spacing. We conclude that the precise phase-positioning, phase-tracking, and phase-hopping minimize acoustic jamming while maintaining chorus synchrony. This ongoing project is open to students of biology (with an interest in the ecology and ethology of vocal communication behaviour), engineering, and computer science.
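The phase analysis underlying these results can be sketched with synthetic call times (illustrative numbers, not the recorded chorus data): each follower call is assigned a phase within the neighbor's concurrent call interval, and clustering near one-third and two-thirds indicates antiphonal calling.

```python
import numpy as np

def relative_phases(calls_a, calls_b):
    """Phase of each of B's calls within A's concurrent call interval
    (0 = coincident with A's call, 0.5 = exactly antiphonal)."""
    phases = []
    for t in calls_b:
        k = np.searchsorted(calls_a, t) - 1
        if 0 <= k < len(calls_a) - 1:
            period = calls_a[k + 1] - calls_a[k]
            phases.append((t - calls_a[k]) / period)
    return np.array(phases)

# Synthetic example (illustrative timings, not field data): A calls
# every 2 s; B places its calls near one-third and two-thirds of A's
# call interval, with a little jitter
rng = np.random.default_rng(1)
calls_a = np.arange(0.0, 60.0, 2.0)
onsets = rng.choice(calls_a[:-1], size=25, replace=False)
thirds = rng.choice([1 / 3, 2 / 3], size=25)
calls_b = np.sort(onsets + thirds * 2.0 + rng.normal(0, 0.05, 25))

phases = relative_phases(calls_a, calls_b)
near_thirds = np.mean((np.abs(phases - 1 / 3) < 0.1) |
                      (np.abs(phases - 2 / 3) < 0.1))
print(f"fraction of B's calls near 1/3 or 2/3 phase: {near_thirds:.2f}")
```

Applied pairwise to the separated voices of all six callers, this kind of phase histogram is what reveals the preferred phases and the discrete phase-hopping.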
C. Cognitive neuroscience and human performance
1. Multimodal analysis of human task-load performance
Collaborators: Professor Stefan Winkler, Associate Professor, School of Computing, National University of Singapore; Deputy Director AI Technology, AI Singapore, Singapore; Mr. Jannis Born, Institute of Neuroinformatics, ETH Zurich and University of Zurich, Switzerland; Mr. Babu R. N. Ramachandran, Advanced Digital Sciences Center, Illinois at Singapore, Singapore; Ms. Sandra A. Romero, Advanced Digital Sciences Center, Illinois at Singapore, Singapore.
Assessment of workload is an important factor in many office situations, especially when task demand and speed of execution (i.e., task time) are loading factors. For example, in a visual search task involving the sorting of numbers, such as postal codes, sorting accuracy can be compromised when the loading factors are either systematically manipulated or manipulated in a random manner. The event-related potential (ERP), which is based on recordings of scalp potentials (the electroencephalogram or EEG), has been widely used to study the impact of task loading on performance. Here we determine whether prediction performance can be improved when we simultaneously measure multiple physiological variables. These measures include ERPs, eye-tracking (ET), and galvanic skin response (GSR). We developed a visual search paradigm mimicking postal code sorting, where a random 5-digit number that appeared on the computer monitor had to be sorted into one of six bins displayed on the same monitor. These bins were delimited by a numerical range, one of which included the number. The color coding of the bins, the layout of the bins, and the time allotted for sorting a number were either kept constant or varied (giving rise to 2^3 = 8 blocks of trials). Task load was lowest when all three variables were unchanged, and greatest when all three were randomly varied. The ERP, ET, and GSR measurements were made simultaneously, and the NASA-TLX questionnaire was administered at different time-points. The NASA-TLX results were in broad agreement with our hypothesized task difficulty. We employed a multimodal linear regression model that adds independent variables from the pool of features (ERP as a function of scalp position and spectral bandwidth, tonic and phasic GSR responses, eye-blink duration, eye-fixation duration, and eye-fixation position).
While a full model incorporating all features had the best performance, the combination of ERP and GSR features had nearly the same prediction performance as the full model. ET in combination with either ERP or GSR was less effective as a predictor. We conclude that multimodal physiological measurements such as GSR, which are relatively easy to set up and acquire, can augment ERP measurements and provide improvements in task-load classification performance. This ongoing project is open to students of psychology and cognitive neuroscience, and those interested in neural engineering.
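The modality-comparison logic can be sketched with synthetic data (the feature counts and effect sizes below are assumptions for illustration, not the study's measurements): fit an ordinary least squares model on each feature subset and compare the variance in task load explained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the three modalities: 200 blocks, task load
# 1-8, each feature = coefficient * load + unit-variance noise
n = 200
load = rng.integers(1, 9, n).astype(float)
erp = 0.8 * load + rng.normal(0, 1.0, (3, n))   # ERP features (strong)
gsr = 0.6 * load + rng.normal(0, 1.0, (2, n))   # tonic + phasic GSR
et = 0.2 * load + rng.normal(0, 1.0, (2, n))    # eye-tracking (weak)

def r_squared(features, target):
    """Ordinary least squares fit with intercept; in-sample R^2."""
    x = np.vstack([features, np.ones(len(target))]).T
    coef, *_ = np.linalg.lstsq(x, target, rcond=None)
    resid = target - x @ coef
    return 1.0 - resid.var() / target.var()

r2 = {name: r_squared(feats, load)
      for name, feats in [("ERP+GSR+ET", np.vstack([erp, gsr, et])),
                          ("ERP+GSR", np.vstack([erp, gsr])),
                          ("ET+GSR", np.vstack([et, gsr]))]}
for name, value in r2.items():
    print(f"{name}: R^2 = {value:.3f}")
```

With these assumed effect sizes, the ERP+GSR subset recovers nearly all of the full model's fit while the ET+GSR subset lags, mirroring the pattern reported above.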
2. Fall risk assessment in the elderly using a camera-based depth sensor
Collaborators: Professor Jacob Sosnoff, Associate Professor, Kinesiology, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Professor Stefan Winkler, Associate Professor, School of Computing, National University of Singapore; Deputy Director AI Technology, AI Singapore, Singapore; Mr. Vignesh Paramathayalan, Research Engineer, Advanced Digital Sciences Center, Illinois at Singapore, Singapore; Dr. Ruopeng Sun, Postdoctoral Research Associate, Kinesiology, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Dr. Sanjiv Jain, MD., Carle Foundation Hospital, Urbana, IL, USA.
Falls are a leading cause of death and injury in older adults. The assessment of fall risk is currently based on self-reporting or subjective judgement and tends to be unreliable. While there are many reliable tests for fall risk assessment, they are performed in clinical settings, require expensive instrumentation, and require the presence of clinical practitioners to carry out the assessment. The availability of low-cost camera-based depth sensors that can accurately report joint positions has the potential to provide a useful tool for fall risk assessment. We used a Kinect (Microsoft Inc.) camera that can capture 20 major joints of the human body at a frame-rate of 30 frames per second. Elderly participants performed standard tests such as static balance tests with eyes open and closed, standing on the floor or on a block of foam, the five times sit-to-stand test (5STS), and the four-square step test (4ST). Motion analysis was performed by custom-written software. Participants were additionally assessed in the conventional manner by using a force platform and a sway meter to measure postural sway in the anterior-posterior (AP) and medial-lateral (ML) axes, the 95% ellipse, and the center of pressure. Our results demonstrate a high degree of correlation between the hip position as reported by the Kinect camera and the AP sway reported by the force platform. ML sway was more prone to error. Automated measurements of the time taken for the 5STS and 4ST were in agreement with conventional measurements. We conclude that a Kinect camera-based system for assessing fall risk provides results that are comparable to conventional techniques, is inexpensive, touch-free, and fully automated, and can be readily installed at home, in community centers, or in other public places for regular testing of people at risk of falling. The system is currently in use at an orthopedic facility in a hospital in Urbana, IL (USA).
This project is open to students interested in kinesiology, biomechanics, and computer vision (for motion capture).
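The sway summaries involved can be sketched as follows (a synthetic hip trajectory stands in for Kinect output; the 95% ellipse formula is the standard bivariate confidence-ellipse area):

```python
import numpy as np

def sway_metrics(hip_xy):
    """Postural-sway summaries from a tracked hip trajectory
    (n_frames x 2 array of [ML, AP] positions, cm): sway range per
    axis and the standard 95% confidence-ellipse area."""
    centered = hip_xy - hip_xy.mean(axis=0)
    ml_range = np.ptp(centered[:, 0])
    ap_range = np.ptp(centered[:, 1])
    # 95% ellipse area = pi * chi2_{0.95, df=2} * sqrt(det(covariance))
    cov = np.cov(centered.T)
    area95 = np.pi * 5.991 * np.sqrt(np.linalg.det(cov))
    return ml_range, ap_range, area95

# Synthetic 30 s of 30 fps "hip" data with larger AP than ML sway
# (an illustrative trajectory, not Kinect output)
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / 30)
hip = np.column_stack([
    0.3 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 0.05, t.size),  # ML
    1.0 * np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 0.05, t.size),  # AP
])
ml, ap, area = sway_metrics(hip)
print(f"ML range {ml:.2f} cm, AP range {ap:.2f} cm, "
      f"95% ellipse area {area:.2f} cm^2")
```

Computing these same summaries from the depth camera's hip trajectory and from the force platform is what allows the two instruments to be compared directly.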
Hsieh KL, Moon Y, Ramkrishnan V, Ratnam R, Sosnoff JJ (2019). Validating virtual time to contact using home based technology in young and older adults. J. Appl. Biomech. 35:61-67.
Sun R, Aldunate R, Paramathayalan VR, Ratnam R, Jain S, Morrow DG, Sosnoff JJ (2019). Preliminary evaluation of a self-guided automated fall risk assessment tool for older adults. Arch. Gerontol. Geriatrics. 82: 94-99.
Johnson EC, Jones DL, Ratnam R (2016). A minimum-error, energy-constrained neural code is an instantaneous rate-code. J. Comput. Neurosci. 40:193-206.
Watson PD, Horecka K, Cohen NJ, Ratnam R (2016). A phase-locked loop epilepsy network emulator. Neurocomputing 173:1245-1249.
Jones DL, Johnson EC, Ratnam R (2015). A stimulus-dependent spike threshold is an optimal neural coder. Front. Comput. Neurosci. 9:61.
Jones DL, Jones RL, Ratnam R (2014). Calling dynamics and call synchronization in a local group of unison bout callers. J. Comp. Physiol. A. 200:93-107.
Petacchi A, Kaernbach C, Ratnam R, Bower JM (2011). Increased activation of the human cerebellum during pitch discrimination: A Positron Emission Tomography (PET) study. Hear. Res. 282: 35-48.
Valero MD, Ratnam R (2011). Reliability of distortion product otoacoustic emissions in the common marmoset (Callithrix jacchus). Hear. Res. 282: 265-271.
Tardif SD, Mansfield K, Ratnam R, Ross C, Ziegler TE (2011). The marmoset as a model of aging and age-related diseases. ILAR J. 52(1): 54-65.
Leonard WJ, Neal J, Ratnam R (2010). Variation of Type B song in the endangered Golden-cheeked warbler (Dendroica chrysoparia). The Wilson J. Ornithol. 122(4): 777-780.
Jones DL, Ratnam R (2009). Blind location and separation of callers in a natural chorus using a microphone array. J. Acoust. Soc. Am. 126(2): 895-910.
Valero MD, Pasanen EG, McFadden D, Ratnam R (2008). Distortion product otoacoustic emissions in the common marmoset (Callithrix jacchus): Parameter optimization. Hear. Res. 243: 57-68.
Phatak SA, Ratnam R, Wheeler BC, O’Brien Jr. WD, Feng AS (2006). Effect of reflectors on sound source localization with two microphones. J. Audio Eng. Soc. 54(6): 512-524.
Ratnam R, Jones DL, O’Brien Jr. WD (2004). Fast algorithms for blind estimation of reverberation time. IEEE Sig. Proc. Lett. 11(6): 537-540.
Goense JBM, Ratnam R (2003). Continuous detection of weak sensory signals in afferent spike trains: The role of anti-correlated interspike intervals in detection performance. J. Comp. Physiol. A. 189(10): 741-759.
Ratnam R, Jones DL, Wheeler BC, Lansing CL, O'Brien Jr. WD, Feng AS (2003b). Blind estimation of reverberation time. J. Acoust. Soc. Am. 114(5): 2877-2892.
Ratnam R, Goense JBM, Nelson ME (2003a). Change-point detection in neuronal spike train activity. Neurocomputing 52-54: 849-855.
Goense JBM, Ratnam R, Nelson ME (2003). Burst firing improves the detection of weak signals in spike trains. Neurocomputing 52-54: 103-108.
Ratnam R, Nelson ME (2000). Non-renewal statistics of electrosensory afferent spike trains: implications for the detection of weak sensory signals. J. Neurosci. 20(17): 6672-6683.
Feng AS, Ratnam R (2000). Neural basis of hearing in real-world situations. Ann. Rev. Psychol. 51: 699-725.
Dhingra AK, Zhang M, Ratnam R, Suri D (1999). A coarse-grained parallel homotopy for mechanism design. Int. J. High. Perform. Comput. Appl. 13: 303-319.
Ratnam R, Feng AS (1998). Detection of auditory signals by frog inferior collicular neurons in the presence of spatially separated noise. J. Neurophysiol. 80: 2848-2859.
Ratnam R, Condon CJ, Feng AS (1996). Neural ensemble coding of target identity in echolocating bats. Biol. Cybern. 75: 153-162.
Weissenstein L, Ratnam R, Anastasio TJ (1996). Vestibular compensation in the horizontal vestibulo-ocular reflex of the goldfish. Behav. Brain Res. 75: 127-137.
Ratnam R, Anastasio TJ (1995). Evidence for a cooperative learning mechanism in the vestibulo-ocular reflex. NeuroReport 6(16): 2129-2133.
Ratnam R, Patwardhan VS (1993). Analyze single and multipass heat exchangers. Chem. Eng. Prog. 89: 85-91.
Ratnam R, Patwardhan VS (1991). Sensitivity analysis for heat exchanger networks. Chem. Eng. Sci. 46: 451-458.
Ratnam R, Viswanathan K, Mani BP (1985). Use of Andreasen apparatus for size determination of milk powder. Dairy J. (Published while an undergraduate).
Ratnam R, Viswanathan K, Mani BP (1984). Studies on attrition in fluidized beds. J. Pow. & Bulk Sol. Tech. 8: 1-9. (Published while an undergraduate).
Viswanathan K, Ratnam R (1983). An equation to predict the relative velocity between liquid and solid in a hydraulic transportation pipeline. Ind. Chem. Engr. 25(3 & 4): 49-50. (Published while an undergraduate).
Ramachandran BRN, Pinto SAR, Born J, Winkler S, Ratnam R (2017). Measuring Neural, Physiological and Behavioral Effects of Frustration. In: Goh J, Lim C, Leo H. (Eds.) The 17th International Conference on Biomedical Engineering. IFMBE Proceedings, Vol 61. Springer, Singapore.
Winkler S, Zhu Y, Subramanian R, Ng T-T, Ratnam R (2016). Comparison of human and machine performance for copy-move image forgery detection involving similar but genuine objects. IEEE Region 10 Technical Conference: Technologies for Smart Nation (TENCON 2016).
Johnson EC, Jones DL, Ratnam R (2015a). Minimum squared-error, energy-constrained encoding by adaptive threshold models of neurons. Proceedings IEEE International Symposium on Information Theory (ISIT 2015), pp 1337-1341.
Jun DM, Jones DL, Coleman TP, Leonard W, Ratnam R (2012). Practical sensor management for an energy-limited detection system. Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing. (ICASSP 2012), pp. 1641-1644.
Sun R, Paramathayalan V, Ratnam R, Jain S, Morrow D, Sosnoff JJ (2018). Design and development of an automated fall risk assessment system for older adults. In Pak R and McLaughlin AC (Eds.), Aging, Technology and Health. New York, Elsevier.
Ratnam R, Jones DL, Wheeler B, Lansing CL, O’Brien W, Bilger R, Feng AS. Determining Reverberation Time. United States Patent 20040213415 (October 28, 2004). International Patent WO/2004/097351 (November 11, 2004).
Johnson EC, Lee DH, Jones DL, Aronoff JM, Ratnam R (2017). A neural timing code improves speech perception in vocoder simulations of cochlear implant sound coding. Conference on Implantable Auditory Prostheses (CIAP). Lake Tahoe, CA.
Asilador AR, Johnson EC, Ratnam R (2017). An outward potassium current (M-current) is an estimate of the neuron's input. Soc. Neurosci. Abstr. Washington DC.
Valero MD, Ratnam R (2017). The common marmoset (Callithrix jacchus) as a model for age-related hearing loss. The 40th Meeting of the American Society of Primatologists, Washington DC. (Oral presentation by MDV).
Johnson EC, Lee DH, Jones DL, Aronoff J, Ratnam R (2017). A Neural Timing Code Improves Speech Perception in Vocoder Simulations of Cochlear Implant Sound Coding. 40th Annual Mid-Winter Meeting, Association for Research in Otolaryngology, Baltimore, MD.
Paramathayalan VR, Kinnett-Hopkins D, Winkler S, Motl R, Ratnam R (2016). A real-time system to ensure compliance in home-based tele-rehabilitation. The 16th International Conference on Biomedical Engineering (ICBME), Singapore.
Ramachandran BRN, Pinto SAR, Born J, Ratnam R, Winkler S (2016). Measuring neural, physiological, and behavioral effects of frustration. The 16th International Conference on Biomedical Engineering (ICBME), Singapore.
Paramathayalan VR, Moon Y, Winkler S, Sosnoff J, Ratnam R (2016). Fall risk assessment using Microsoft Kinect. 6th International Symposium on InfoComm & Mechatronics Technology in Bio-Medical & Healthcare Application (IS 3T-in-3A), Singapore.
Moon Y, Wittery N, Roeing K, Amouyal Y, Ramkrishnan V, Khromenkov A, Winkler S, Ratnam R, Sosnoff JJ (2016). Virtual time to contact assessment with home based technology, 40th Annual Meeting of the American Society of Biomechanics (ASB40), Raleigh, NC.
Moon Y, Roeing K, Amouyal Y, Ramkrishnan V, Ratnam R, Sosnoff JJ (2016). Fall risk assessment: The potential for home-based technology. 93rd Annual Conference of the American Congress of Rehabilitation Medicine, Chicago, IL.
Ratnam R, Johnson EC, Jones DL (2016). Optimal sensory coding: Lessons from the active electroreceptor afferents of weakly electric fish. Symposium on Weakly Electric Fish, XII International Congress of Neuroethology (ICN 2016), Montevideo, Uruguay.
Jones DL, Jones RL, Ratnam R (2015). The spatio-temporal analysis of male-male vocal interactions in chorusing frogs. 170th Meeting of the Acoustical Society of America, Jacksonville, FL. (Invited talk by RR at the symposium in honor of Albert Feng)
Johnson EC, Jones DL, Ratnam R (2015b). An Optimal Neural Encoding Model Predicts Anti-correlated Spike-Trains in the P-type Afferents of a Weakly Electric Fish. Soc. Neurosci. Abstr., Chicago.
Horecka K, Watson P, Ratnam R, Cohen NJ (2015). A phase-locked loop oscillatory memory model for characterizing canine seizure electrocorticography data. Soc. Neurosci. Abstr., Chicago.
Johnson EC, Jones DL, Ratnam R (2015a). Minimum squared-error, energy-constrained encoding by adaptive threshold models of neurons. IEEE International Symposium on Information Theory (ISIT 2015), Hong Kong. (Oral presentation by ECJ).
Johnson EC, Jones DL, Ratnam R (2015c) A minimum-error, energy-constrained neural encoder predicts an instantaneous spike-rate code. 24th Annual Computational Neuroscience Meeting (CNS 2015), Prague, Czech Republic.
Watson PD, Horecka P, Ratnam R, Cohen NJ (2015). A phase-locked loop epilepsy network emulator for localizing, forecasting, and controlling ictal activity. 24th Annual Computational Neuroscience Meeting (CNS 2015), Prague, Czech Republic.
Horecka K, Watson P, Ratnam R, Cohen NJ (2015). Capturing and driving neural oscillations with a phase-locked loop model-driven oscillator. Cog. Neurosci. Soc., San Francisco.
Johnson EC, Jones DL, Ratnam R (2014). Encoding of sensory signals by an energy-constrained neural source encoding mechanism. Soc. Neurosci. Abstr., Washington DC.
Watson P, Horecka K, Ratnam R, Cohen NJ (2014). Capturing and driving neural oscillations with a phase-locked loop controller. IEEE EMBS BRAIN Grand Challenges Conf., Washington DC.
Valero MD, Ratnam R (2013). The Aging Auditory System of the Common Marmoset (Callithrix jacchus). Assoc. Res. Otolaryngol. Mid-Winter Meeting, Baltimore, MD.
Jones DL, Jones AL, Ratnam R (2012). Accurate localization over large areas with minimal arrays. J. Acoust. Soc. Am. 132(3): 2035.
Valero MD, Ratnam R (2012). The auditory system of the aging common marmoset (Callithrix jacchus). 35th Meeting of the American Society of Primatologists, Sacramento, CA.
Lam K, Cortez N, Ratnam R (2011). Antiphonal calling in single-housed free-behaving marmosets. The 34th Meeting of the American Society of Primatologists, Austin, TX. (Oral presentation by KL).
Valero MD, Pasanen EG, McFadden D, Ratnam R (2011). Distortion product otoacoustic emissions in the common marmoset: Parameter optimization and reliability. The 34th Meeting of the American Society of Primatologists, Austin, TX. (Oral presentation by MDV).
Valero MD, Ratnam R (2011). Reliability of distortion-product otoacoustic emissions in the common marmoset (Callithrix jacchus). Assoc. Res. Otolaryngol. Mid-Winter Meeting, Baltimore, MD.
Petacchi A, Kaernbach C, Ratnam R, Robin D, Bower J (2010). Enhanced activation of cerebellar regions during pitch discrimination in humans: A PET study. Nineteenth Annual Computational Neuroscience Meeting CNS*2010, San Antonio, TX. BMC Neuroscience 2010, 11 (Suppl. 1): P84.
Petacchi A, Kaernbach C, Ratnam R, Robin D, Bower J (2010). A Positron Emission Tomography (PET) test for an enhanced role for the cerebellum during pitch discrimination. Assoc. Res. Otolaryngol. Mid-Winter Meeting, Anaheim, CA.
Jones DL, Ratnam R (2009). Dynamical interactions in a green treefrog chorus. J. Acoust. Soc. Am. 126(4): 2270.
Ratnam R, Jones DL (2008). Localizing and extracting individual calls in an anuran chorus using a microphone array. The 2nd International Conference on Acoustic Communication by Animals, Corvallis, OR.
Jones DL, Jones ML, Ratnam R (2008). Localization and extraction of frog calls from a chorus using an acoustic beamformer. Assoc. Res. Otolaryngol. Mid-Winter Meeting, Phoenix, AZ.
Valero MD, Ratnam R (2008). Distortion product otoacoustic emissions in the common marmoset (Callithrix jacchus): Parameter optimization, normative and comparative findings. Assoc. Res. Otolaryngol. Mid-Winter Meeting, Phoenix, AZ.
Jones DL, Walsh KA, Jones ML, Ratnam R (2007). Localizing chorusing frogs and toads using a microphone array. Satellite symposium on Frog Hearing and Acoustic Communication at the 8th International Congress of Neuroethology, Vancouver BC, Canada.
Valero M, Pasanen E, Layne D, Tardif S, McFadden D, Ratnam R (2007). Otoacoustic emissions in the common marmoset. Assoc. Res. Otolaryngol. Mid-Winter Meeting, Denver, CO.
Walsh KA, Ratnam R (2007). Localizing bioacoustic sources with a microphone array. Assoc. Res. Otolaryngol. Mid-Winter Meeting, Denver, CO.
Ratnam R, Goense JBM (2004). Variance stabilization of spike trains via non-renewal mechanisms: the impact on the speed and reliability of signal detection. Computational Neuroscience Meeting (CNS*04), Baltimore, MD, USA.
Ratnam R, Iyer N, Goense JBM, Feng AS. (2004) Effect of reverberation on neural response in the auditory midbrain. Assoc. Res. Otolaryngol. Mid-Winter Meeting, Daytona Beach, FL.
Ratnam R, Jones DL, Wheeler BC, Feng AS (2003). Online estimation of room reverberation time. J. Acoust. Soc. Am. 113: 2269.
Iyer N, Ratnam R, Phatak S, Lansing CR, Feng AS (2003). Speech perception in tight acoustic spaces. J. Acoust. Soc. Am. 113: 2286.
Goense J, Ratnam R. (2002) The segregation of frog calls in a natural auditory scene. Gordon Research Conference on "Sensory Coding and the Natural Environment", South Hadley, MA, USA.
Goense JBM, Ratnam R, Nelson ME (2001) Interspike interval correlations can be exploited by neurons to detect weak signals in noisy spike trains. 6th International Congress of Neuroethology, Bonn, Germany. 6:199.
Ratnam R, Goense JBM, Nelson ME (2001) The response of P-type electrosensory afferents to weak prey-like stimuli. 6th International Congress of Neuroethology, Bonn, Germany. 6:194.
Ratnam R, Goense JBM, Nelson ME (2000) The detection of weak sensory signals in electrosensory afferent spike trains. Society for Neuroscience Abstracts, 26.
Ratnam R, Nelson ME (1999) Impact of afferent spike train irregularity on the detection of weak sensory signals. Computational Neuroscience Meeting (CNS*99), Pittsburgh, PA, USA.
Ratnam R, Feng AS (1998) Spatially mediated release from masking in midbrain auditory neurons. 5th International Congress of Neuroethology, San Diego, CA, USA.
Gooler DM, Ratnam R, Lin W-Y, Feng AS (1997) Sound direction influences the ability of midbrain auditory neurons to detect signals embedded in noise. Soc. Neurosci. Abstr. 23: 733.
Feng AS, Gooler DM, Lin W-Y, Ratnam R (1996) Separation of sound sources facilitates extraction of temporal features of sounds embedded in noise by neurons in the frog auditory midbrain. Sensorimotor Coordination Workshop, Flagstaff, AZ, USA.
Ratnam R, Anastasio TJ (1994) Evidence for a stochastic mechanism in compensation for VIIIth nerve section in the vestibulo-ocular reflex of the goldfish. Soc. Neurosci. Abstr. 20: 1191.
Weissenstein L, Ratnam R, Anastasio TJ (1994) Systems analysis and neural network modeling of vestibular compensation. Whitaker Conference Abstracts.
Ratnam R, Viswanathan K, Mani BP (1984) Attrition of silica gel and calcite in fluidized beds. The 37th Indian Inst. Chem. Engr. Conference, New Delhi, India.
Behavioural Neurobiology (BIO 555): Graduate level (integrated MSc and PhD), instructor. School of Arts and Sciences, Biological and Life Sciences, Ahmedabad University, Ahmedabad. Offered in Monsoon 2019.
Brain and Behavior (BIO 4813): Upper division, undergraduate, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Fall 2012, Spring 2013.
Sensory Physiology (BIO 5503): Graduate level, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Fall 2007, 2008, 2009, 2010, 2011, 2012.
Advanced Physiology (BIO 3413): Upper division, undergraduate, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Spring 2011, 2012, 2013 (Honors).
General Physiology (BIO 3413): Upper division, undergraduate, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Fall 2005, Spring 2005, 2006, 2007, 2009, 2010.
Electrical Resonance in Neurons (BIO 7041): Graduate level, Special Topics, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Spring 2011.
Ion Channels in the Cochlea (BIO 7041): Graduate level, Special Topics, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Spring 2010.
The Physiology of Hair Cells (BIO 7041): Graduate level, Special Topics, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Spring 2009.
The Physiology of Sound Localization (BIO 7041): Graduate level, Special Topics, Instructor. Department of Biology, University of Texas at San Antonio. Offered in Spring 2008.
Basic Statistics (STA 1053): Freshmen level, undergraduate, Instructor. Department of Management Science & Statistics, University of Texas at San Antonio. Offered in Spring 2008.
Introductory Human Physiology (MCB 104): Freshmen level, undergraduate, Laboratory instructor (Course director, Professor E. Meisami). Department of Molecular & Integrative Physiology, University of Illinois, 1996, 1997.
Guest lectures and other teaching
Machine Learning for Signal Processing (CS 598PS): Graduate level. Guest lectures on neural coding. Department of Computer Science, University of Illinois at Urbana-Champaign. Fall 2013, 2014, 2015.
Evolution (BIO 3323): Upper-division, undergraduate. Guest lectures on the evolution of photo-sensing. Department of Biology, University of Texas at San Antonio. Spring 2009.
Neuroanatomy (BIO 5423): Graduate level. Guest lectures on the mammalian auditory system. Department of Biology, University of Texas at San Antonio. Fall 2004, 2005, 2006.
Systems Neuroscience (INTD 5043): Graduate level. Guest lectures on the mammalian auditory system. Neuroscience Program, University of Texas Health Science Center at San Antonio. Spring 2006, 2007; Fall 2007, 2008, 2009.
Modeling of Biological Systems (ECE 475): Graduate level. Guest lectures on auditory processing. Department of Electrical & Computer Engineering, University of Illinois, 2003.
Advanced Quantum Mechanics I: Graduate level, Teaching Assistant. Department of Physics, University of Illinois at Urbana-Champaign, 1992.
Physical Chemistry: Senior/Graduate level, Teaching Assistant. Department of Chemistry, University of Illinois at Urbana-Champaign, 1991.
Department / Division
Ethics and Compliance Officer, Advanced Digital Sciences Center, College of Engineering, University of Illinois at Urbana-Champaign and Singapore (2014 – 2017).
Neuroengineering Program (NSF-IGERT), Member, Executive committee, University of Illinois at Urbana-Champaign (2013 – 2015).
Neuroscience Program. Member, Doctoral studies committee, University of Texas at San Antonio (2006 – 2008).
Search Committee for Integrative Biologist at University of Texas at San Antonio. Member (2009).
Search Committee for Computational Biologist, University of Texas at San Antonio. Member (2006).
Specialized Neuroscience Research Program (SNRP). Member, Scientific Advisory Council, University of Texas at San Antonio (2008 - 2013).
Neurosciences Institute. Member, Steering committee, University of Texas at San Antonio (2009 - 2013).
San Antonio Neuroscience Alliance. Member of steering committee (2006 – 2008).
School / College
MS Biotechnology Program, University of Texas at San Antonio. Voluntary service (2005 – 2007).
Workshop on Brain-Computer Interfaces and Neuroengineering, Organizer and Session Chair, Advanced Digital Sciences Center, College of Engineering, Singapore (2017).
Healthy Aging Community, Member, Coordination Committee, College of Engineering, University of Illinois at Urbana-Champaign (2015 – 2017).
Workshop on EEG. Organizer, Coordinated Science Laboratory, College of Engineering. This widely-known workshop, entitled "Mini ERP Bootcamp", was conducted by Dr. Steven J. Luck, Professor of Psychology, Center for Mind & Brain, University of California at Davis. Held at the University of Illinois at Urbana-Champaign (November 4-6, 2014).
Feasibility of a Multi-disciplinary Services Organization. Member, College of Science, University of Texas at San Antonio (2010 – 2011).
Undergraduate recruitment. Faculty coordinator, Ahmedabad University (2019 – date).
Ethics Committee. Member, Ahmedabad University (2019 – date).
Institutional Animal Care and Use Committee (IACUC). Member, University of Texas at San Antonio (2008 – 2013).
San Antonio Comparative Biology of Aging Center. Member from University of Texas at San Antonio (2008 – 2013).
UTSA - University of Illinois NSF (IGERT) Program in Neuroengineering. Established a bridge between UTSA and the University of Illinois program, through which UTSA students receive training in the IGERT program at Illinois (2009 – 2013).
External service / National service
Computational Neuroscience Meeting, 2016 (CNS* 2016), Workshop on Methods of Information Theory in Computational Neuroscience. Session Chair for session on “Testing Coding Hypothesis” (2016).
National Science Foundation. Ad hoc proposal reviewer for Integrative Organismal Biology.
National Science Foundation. Member, review panels (Neural Systems Cluster) (2013 - 2014).
Acoustical Society of America. Member, Technical Program Organizing Committee (2009).
Acoustical Society of America. Organizer and Session Chair for “Animal bioacoustics: Natural Soundscapes (Session I)” and “Animal bioacoustics: Auditory Scene Analysis by Animals (Session II)” (2009).
Journal reviewing activities
Ad hoc reviewer for: 1) Journal of Neuroscience, 2) Journal of Theoretical Biology, 3) Brain, Behavior and Evolution, 4) Scientific Reports, 5) European Journal of Neuroscience, 6) Neurocomputing, 7) Journal of Comparative Psychology, 8) Journal of Comparative Physiology A, 9) Naturwissenschaften, 10) Journal of the Acoustical Society of America, 11) Behavioral Ecology and Sociobiology, 12) Bioacoustics, 13) IEEE Signal Processing Letters, 14) Lab Animal, 15) Computational Neuroscience Meeting, 16) BioMed Research International, 17) Ear and Hearing.
2019 – date Professor, School of Arts and Sciences, Ahmedabad University, Ahmedabad, India.
2019 – date Adjunct Professor, Coordinated Science Laboratory, College of Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA.
2019 – date Member, Board of Advisors, SigSenz Technologies Pvt. Ltd., India.
2018 – date Member, Board of Advisors, AIFARM Ltd., India.
2013 – 2019 Senior Scientist, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, USA.
2013 – 2017 Principal Scientist, Advanced Digital Sciences Center, Illinois at Singapore, Singapore (joint appointment with College of Engineering, University of Illinois at Urbana-Champaign).
2012 – 2014 Consultant, Sonistic LLC, Champaign, IL, USA.
2011 – 2013 Core Faculty, Biomedical Engineering Program, University of Texas at San Antonio and University of Texas Health Science Center at San Antonio, USA.
2010 – 2013 Adjunct Scientist, Texas Biomedical Research Institute, San Antonio, USA.
2004 – 2013 Assistant Professor, Department of Biology and The Neurosciences Institute, University of Texas at San Antonio, San Antonio, USA.
2001 – 2004 Research Scientist, The Intelligent Hearing Aid Project, Beckman Institute, University of Illinois, Urbana-Champaign, USA.
1998 – 2001 Postdoctoral Research Associate, Department of Molecular & Integrative Physiology and Beckman Institute, University of Illinois, Urbana-Champaign, USA.
1990 – 1998 Graduate Research Assistant and doctoral candidate, Center for Biophysics & Computational Biology, University of Illinois, Urbana-Champaign, USA.
1988 – 1990 Scientist B, Process Design Group, Division of Chemical Engineering, National Chemical Laboratory, Pune, India.
1985 – 1987 Process Design Engineer, Process Heat Division, Thermax Ltd., Pune, India.
BLS, Division of Biological & Life Sciences,