


Spike-based neuromorphic hardware holds promise for more energy-efficient implementations of deep neural networks (DNNs) than standard hardware such as GPUs. But this requires us to understand how DNNs can be emulated in an event-based sparse firing regime, as otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ long short-term memory units that are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing currents after each spike, provides an efficient solution. After-hyperpolarizing currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel's Loihi chip. Filter approximation theory explains why after-hyperpolarizing neurons can emulate the function of long short-term memory units. This yields a highly energy-efficient approach to time-series classification. Furthermore, it provides the basis for an energy-efficient implementation of an important class of large DNNs that extract relations between words and sentences in order to answer questions about the text.
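To make the mechanism concrete, the sketch below simulates a leaky integrate-and-fire neuron extended with a slow after-hyperpolarizing (AHP) current. This is not code from the paper: the function name `simulate_ahp_lif`, the discrete-time update rule, and all parameter values are illustrative assumptions; it only shows the qualitative idea that each spike adds a slowly decaying hyperpolarizing current, so the neuron's recent spiking history suppresses its future firing.

```python
import numpy as np

def simulate_ahp_lif(inputs, dt=1e-3, tau_mem=20e-3, tau_ahp=2.0,
                     v_thresh=1.0, ahp_jump=0.2):
    """Leaky integrate-and-fire neuron with a slow after-hyperpolarizing
    (AHP) current w. Each spike increments w, which decays with the long
    time constant tau_ahp and opposes the input drive, so past spikes
    are remembered far beyond the membrane time constant."""
    inputs = np.asarray(inputs, dtype=float)
    alpha = np.exp(-dt / tau_mem)   # fast membrane decay
    rho = np.exp(-dt / tau_ahp)     # slow AHP decay (tau_ahp >> tau_mem)
    v = 0.0                         # membrane potential
    w = 0.0                         # after-hyperpolarizing current
    spikes = np.zeros(inputs.shape)
    for t, i_in in enumerate(inputs):
        v = alpha * v + i_in - w    # AHP current hyperpolarizes the membrane
        w = rho * w                 # AHP current decays slowly
        if v >= v_thresh:
            spikes[t] = 1.0
            v = 0.0                 # reset after the spike
            w += ahp_jump           # each spike deepens the AHP current
    return spikes

# Spike-frequency adaptation under constant drive: firing slows down as
# the AHP current accumulates, storing a trace of past activity.
out = simulate_ahp_lif(np.full(5000, 0.08))
print(out[:1000].sum(), out[-1000:].sum())  # early rate > late rate
```

The key design choice is the separation of timescales: tau_ahp (seconds) is roughly two orders of magnitude longer than tau_mem (tens of milliseconds), so the slow variable w can play a role analogous to the cell state of a long short-term memory unit. On hardware that supports multi-compartment neuron models, such as Loihi, the AHP current can plausibly be realized as the state of an additional compartment, though the exact mapping depends on the chip's neuron model.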
