Spike-based computation and learning in distributed neuromorphic systems
For many practical tasks, modern computers cannot match the performance of biological systems. One reason is that the architecture of nervous systems, in which billions of nerve cells communicate in parallel through action potentials (so-called “spikes”), differs fundamentally from that of today's computers. In this project we will investigate the properties of these neural architectures and model their computational strategies in order to develop alternative spike-based computing technologies.
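To make the notion of spike-based communication concrete, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, a standard simplified model of spiking dynamics. It is an illustration only, not a model used by the project; all parameter values (`tau`, `v_thresh`, the input current) are arbitrary assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values are illustrative assumptions.

def lif_spike_times(input_current, dt=1e-3, tau=0.02,
                    v_thresh=1.0, v_reset=0.0):
    """Simulate an LIF neuron driven by a list of input-current samples.

    The membrane potential follows dv/dt = (-v + I) / tau (Euler
    integration with step dt); a spike is emitted whenever v crosses
    v_thresh, after which v resets. Returns spike times in seconds.
    """
    v = v_reset
    spikes = []
    for step, current in enumerate(input_current):
        v += dt * (-current * 0 + (current - v)) / tau  # dv = dt*(I - v)/tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant supra-threshold input produces a regular spike train;
# a sub-threshold input produces none.
spikes = lif_spike_times([1.5] * 200)
print(len(spikes), "spikes in 0.2 s")
```

Information is thus carried by the timing of discrete events rather than by continuously sampled values, which is what makes this style of computation event-driven and sparse.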
Recently developed brain-inspired hardware architectures that emulate the biophysics of neurons and synapses in silicon are a promising technology for implementing alternative computing paradigms, but methods that make programming such platforms as intuitive as programming a traditional computer are still lacking. Our central goal in this project is to study and develop methods that allow a desired functionality to be specified in a simple mathematical form, and to create tools that automatically transfer these programs onto neural networks running on distributed hardware systems. We will develop systems that interact with the world in real time and therefore have to cope with sources of unreliability similar to those faced by real nervous systems. This requires spike-based learning mechanisms that adapt the network to the desired tasks, compensate for irregularities in the hardware, and improve the performance of the system over time. Our project will investigate how this can be achieved under the constraints imposed by neural mechanisms, and how it relates to the mathematical learning principles used in artificial intelligence.
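One well-known example of the kind of spike-based learning mechanism referred to above is pair-based spike-timing-dependent plasticity (STDP), in which a synapse is strengthened when presynaptic spikes precede postsynaptic ones and weakened otherwise. The toy sketch below is illustrative only; the learning rates and time constants are assumed values, and the project's actual learning rules may differ.

```python
# Toy pair-based STDP (spike-timing-dependent plasticity) rule.
# Learning rates and time constants are illustrative assumptions.
import math

def stdp_weight_change(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                       tau_plus=0.02, tau_minus=0.02):
    """Accumulate the weight update over all pre/post spike pairs.

    Pre-before-post pairs potentiate the synapse; post-before-pre
    pairs depress it, with exponentially decaying influence of the
    pair's timing difference.
    """
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:       # pre leads post: potentiation
                dw += a_plus * math.exp(-dt / tau_plus)
            elif dt < 0:     # post leads pre: depression
                dw -= a_minus * math.exp(dt / tau_minus)
    return dw

# A synapse whose presynaptic spikes consistently precede the
# postsynaptic ones is strengthened; the reverse ordering weakens it.
print(stdp_weight_change([0.010], [0.015]))  # positive change
print(stdp_weight_change([0.015], [0.010]))  # negative change
```

Because such rules depend only on locally observable spike times, they can in principle run on-chip and compensate for device-to-device variability, which is why learning of this kind is central to making unreliable neuromorphic hardware usable.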
One of the great challenges in computing is to make systems smarter, more scalable, more reliable, and yet more energy-efficient. Prime candidates for achieving this are electronic systems that employ the distributed, asynchronous, event-driven, and adaptive mode of computation characteristic of nervous systems. By developing configuration and learning tools, and by deepening our understanding of biologically inspired learning and computation, our project will make this technology accessible for future applications in intelligent robots and mobile devices.