Auto-adjoint method for gradient descent in spiking neural networks
Neuromorphic computing promises to make neural networks more efficient by incorporating more biological principles. For this to succeed, it is important to be able to train spiking neural networks (SNNs) efficiently for machine learning. This can be achieved with stochastic gradient descent using exact gradients based on the adjoint method [2]. In 2021, Wunderlich and Pehle published the EventProp algorithm [1] for doing so in leaky integrate-and-fire (LIF) neurons with exponential synapses.
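To make the idea concrete, here is a minimal sketch using a common parameterisation of LIF neurons with exponential synapses (the exact conventions in [1] may differ). Between spikes, the membrane potential V and synaptic current I of each neuron evolve as
\[
\tau_\mathrm{mem} \frac{\mathrm{d}V}{\mathrm{d}t} = -V + I, \qquad
\tau_\mathrm{syn} \frac{\mathrm{d}I}{\mathrm{d}t} = -I ,
\]
and a spike is emitted, with V reset, when V crosses the threshold \(\vartheta\). In the EventProp picture, adjoint variables \(\lambda_V\) and \(\lambda_I\) are integrated backwards in time between the recorded spikes, with jump conditions applied at the spike times, and the gradient of the loss L with respect to a weight \(w_{ji}\) accumulates only at the spike times of the presynaptic neuron i, schematically
\[
\frac{\partial L}{\partial w_{ji}} = -\tau_\mathrm{syn} \sum_{t_k \,\in\, \text{spikes of } i} \lambda_{I,j}(t_k) ,
\]
where \(\lambda_{I,j}\) is the adjoint of the synaptic current of the postsynaptic neuron j.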
In this talk I will give an introduction to this exciting area of bio-inspired AI and neuromorphic computing and discuss how the method can be generalised to a large class of neuron models. I will then show results obtained with different neuron models on popular machine learning benchmarks for SNNs, using the GeNN and mlGeNN frameworks [3,4].
The capability to support a wide class of models is akin to the auto-diff functionality of PyTorch, which has been instrumental in the recent AI revolution.
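As a rough indication of what such a generalisation involves, here is the textbook adjoint sensitivity result for the smooth case (not the event-based treatment presented in the talk): for neuron state x obeying \(\dot{x} = f(x, w)\) and a loss \(L(x(T))\), the adjoint \(\lambda\) satisfies
\[
\frac{\mathrm{d}\lambda}{\mathrm{d}t} = -\left(\frac{\partial f}{\partial x}\right)^{\!\top}\!\lambda ,
\qquad \lambda(T) = \frac{\partial L}{\partial x(T)} ,
\qquad \frac{\mathrm{d}L}{\mathrm{d}w} = \int_0^T \lambda^{\top}\,\frac{\partial f}{\partial w}\,\mathrm{d}t .
\]
Spikes introduce discontinuities in the state, so on top of this smooth picture one needs the jump conditions of the hybrid-system adjoint treatment [2], which is what EventProp and its generalisations provide for specific neuron models.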
References
[1] Wunderlich, T. C., & Pehle, C. (2021). Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports, 11(1), 12829.
[2] Galán, S., Feehery, W. F., & Barton, P. I. (1999). Parametric sensitivity functions for hybrid discrete/continuous systems. Applied Numerical Mathematics, 31(1), 17-47.
[3] Yavuz, E., Turner, J., & Nowotny, T. (2016). GeNN: a code generation framework for accelerated brain simulations. Scientific Reports, 6(1), 18854.
[4] Turner, J. P., Knight, J. C., Subramanian, A., & Nowotny, T. (2022). mlGeNN: accelerating SNN inference using GPU-enabled neural networks. Neuromorphic Computing and Engineering, 2(2), 024002.