Luca Manneschi
Efficient reservoir computing architectures
Abstract
Recurrent neural networks (RNNs) are powerful tools for processing temporal dynamics. However, the standard approach to training RNNs is backpropagation through time, which unrolls the chain of dependencies backward in time and can be computationally demanding. Reservoir computing (RC) offers an alternative by exploiting the dynamics of a fixed dynamical system for computation: in traditional RC systems, only a single layer of connections (the read-out) undergoes optimization. A well-known example of RC is the echo state network, which is considered faster to train but less powerful than recurrent neural networks optimized with backpropagation through time. In this talk, I will explore how we can enhance the computational abilities of echo state networks by taking inspiration from features observed in biological networks. In particular, the concepts of sparse representations and multiple timescales will be exploited to improve the ability of the considered systems to tackle tasks with complex temporal dependencies and to learn multiple tasks sequentially.
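To make the RC idea concrete, below is a minimal echo state network sketch in Python/NumPy. It assumes the standard leaky-integrator ESN update; the dimensions, leak rate, spectral-radius scaling, and the toy one-step-ahead prediction task are illustrative choices, not details from the talk. The point it demonstrates is the one made in the abstract: the recurrent weights stay fixed, and only the linear read-out is trained (here by ridge regression, solved in closed form).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the talk)
n_in, n_res = 1, 200
T = 1000  # number of time steps

# Fixed random input and recurrent weights: these are never trained
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u, leak=0.3):
    """Drive the reservoir with an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        # Leaky-integrator update: the leak rate sets the reservoir's timescale
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u_t)
        states[t] = x
    return states

# Toy task: predict the input one step ahead
u = np.sin(0.2 * np.arange(T))[:, None]
X = run_reservoir(u[:-1])
y = u[1:]

# The read-out is the ONLY trained part: ridge regression in closed form
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Note that the leak rate in the sketch fixes a single timescale for the whole reservoir; one simple reading of the "multiple timescales" idea mentioned above is to combine reservoir units or sub-reservoirs with different leak rates, so that the state carries information over several temporal horizons at once.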
Short Bio
Luca studied Physics at the University of Padua and earned a master's in Biophysics at the University of Rome. He then completed his PhD in Machine Learning at the University of Sheffield, where he is now an academic fellow (tenure-track lecturer). His work focuses mainly on reservoir computing and reinforcement learning.