Neuromorphic Auditory Sensors, Processing, Networks
I am interested in understanding the computational principles that allow nervous systems to compute robustly despite noise and variability among their elements, and in applying these principles to hardware VLSI models of visual, auditory, and cortical processing. Our group designs neuromorphic mixed-signal VLSI auditory sensors (AER-EAR) that output asynchronous spike trains, and we develop event-driven algorithms that compute directly on these sensor outputs. We also study computation as carried out by cortical neurons, such as the processing in a neuron's dendritic tree. These computational models are implemented on multi-chip digital-analog VLSI systems and digital platforms that receive input from front-end sensors such as silicon retinas and cochleas, and whose outputs are processed by spike-based neuronal networks. In the process, we develop an understanding of some of the principles our brains use to process information.
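To make the event-driven idea concrete: an AER sensor emits a stream of timestamped address events rather than sampled frames, and algorithms update their state per event. The sketch below is purely illustrative (the event format, function name, and rate computation are assumptions, not the group's actual pipeline); it treats each event as a `(timestamp_us, channel)` pair from a cochlea-like sensor and accumulates per-channel firing rates.

```python
from collections import defaultdict

def channel_rates(events, duration_s):
    """Mean firing rate (events/s) per channel from a stream of AER
    events given as (timestamp_us, channel) tuples.

    Illustrative sketch only: real AER-EAR addresses also encode the
    ear and other filter-bank metadata, which are ignored here."""
    counts = defaultdict(int)
    for _, channel in events:  # event-driven: one update per spike
        counts[channel] += 1
    return {ch: n / duration_s for ch, n in counts.items()}

# Hypothetical stream: three spikes on channel 0, one on channel 5,
# observed over a 0.5 s window.
events = [(100, 0), (250, 0), (400, 5), (900, 0)]
print(channel_rates(events, 0.5))  # {0: 6.0, 5: 2.0}
```

A rate map like this is one of the simplest event-driven statistics; more elaborate algorithms (e.g. spike-timing-based feature extraction) keep the same per-event update structure.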