
Thomas Nowotny

Training Spiking Neural Networks for keyword recognition with Eventprop in GeNN

Abstract

Inspired by the superior energy efficiency of biological brains, spiking neural networks (SNNs) are the target computational paradigm of most emerging neuromorphic hardware. However, training SNNs remains difficult because the non-differentiable nature of spikes makes gradient descent problematic. Recently, the Eventprop algorithm [1] was developed to calculate the exact gradient of a loss function on an SNN in a fully event-based manner. We have implemented Eventprop in the GeNN [2,3] framework and here we discuss our insights from applying it to learning two speech recognition benchmarks. GeNN was originally developed for Computational Neuroscience research, but its focus on flexibility and its efficient implementation of event-based communication on GPUs allowed us to also implement the Eventprop algorithm. The implementation is highly efficient and up to 12× faster than Back-Propagation-Through-Time using Norse [4]. With respect to speech recognition accuracy, we found that, after a minor extension of the formalism, Eventprop reliably calculates the gradient of many loss functions, but learning success depends strongly on the choice of loss function and on additional mechanisms from the machine learning toolbox. In the best case, we achieved a close to state-of-the-art test accuracy of 93.5% ± 0.7% on the “Spiking Heidelberg Digits” benchmark [5] and 74.1% ± 0.9% on “Spiking Speech Commands” [6]. Taken together, our results confirm the promise of Eventprop but also demonstrate that additional aspects need to be optimised, including the loss function, regularisation, augmentation and network architecture.
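The "fully event-based" character of the gradient can be made concrete. The central result of [1] (sketched here in the paper's notation; signs and symbols should be checked against the original) is that the derivative of the loss with respect to a synaptic weight reduces to a sum of an adjoint variable sampled only at presynaptic spike times:

```latex
% Eventprop weight gradient (cf. [1]): \lambda_{I,j} is the adjoint of the
% synaptic current of postsynaptic neuron j, \tau_s the synaptic time
% constant, and the sum runs over the spike times t_k of presynaptic neuron i.
\frac{\partial \mathcal{L}}{\partial w_{ji}}
  = -\tau_s \sum_{t_k \,\in\, \text{spikes of } i} \lambda_{I,j}(t_k)
```

Because the gradient is accumulated only at spike events, both the forward and the backward (adjoint) pass can be driven by event-based communication, which is exactly what GeNN's sparse spike propagation on GPUs is designed to exploit.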

References

[1] T.C. Wunderlich, C. Pehle, Sci. Rep. 11(1), p.12829, 2021
[2] E. Yavuz et al., Sci. Rep., 6(1), 18854, 2016
[3] J.C. Knight et al., Front. Neuroinf. 15, 2021
[4] C. Pehle, J.E. Pedersen, https://github.com/norse/norse, 2021
[5] B. Cramer et al., IEEE TNNLS, 33(7), p.2744, 2020
[6] P. Warden, arXiv:1804.03209, 2018

Short Bio

Thomas Nowotny has a background in theoretical and mathematical physics, having completed a Diplom (MSc) in theoretical physics at Georg-August-Universität Göttingen and a PhD in theoretical physics at Universität Leipzig. After his PhD he moved to the Institute for Nonlinear Science at the University of California San Diego, where he began research in Computational Neuroscience and bio-inspired AI. In 2007 he moved to the University of Sussex, where he is now a Professor of Informatics in the School of Engineering and Informatics, Head of the AI research group and co-director of the Sussex AI Centre. His research focuses on chemical sensing in both animals and machines, GPU acceleration of spiking neural network models, bio-mimetic robot controllers, hybrid computer-brain experimentation, bio-inspired AI and neuromorphic computing.