
Abstract: Pawel Herman, Naresh Balaji Ravichandran, Anders Lansner

(contributed talk)

Cortex-like neural network architecture with local Bayesian-Hebbian learning for holistic pattern recognition

Pattern recognition capabilities of artificial neural networks have attracted considerable attention in the machine learning and AI community. This attention has spurred intense development in deep learning, with a growing focus on increasingly specialised architectures for concrete applications, often at the cost of flexibility and with a strong dependence on labelled data and extensive batch processing. To address the growing need for more holistic, multiple-purpose pattern recognition with largely unsupervised and incremental learning, we advocate a brain-like neural network approach that accounts for biologically inspired neural, synaptic and network mechanisms. In particular, we propose a cortex-like modular (columnar) network architecture with feedforward and recurrent processing pathways (Fig.1a), which can be mapped onto the laminar structure of the brain’s cortical columns, equipped with Bayesian-Hebbian learning (Bayesian Confidence Propagation Neural Network, BCPNN) as well as structural plasticity that facilitates self-organisation of feedforward projections [1,2]. The recurrent connectivity in the superficial layer of our cortical model implements an attractor network circuit that enables associative memory functionality, which has been proposed as an explanation for many cognitive phenomena. To increase the capacity of the attractor network and allow it to process more realistic, highly correlated input (unlike the random or orthogonal patterns typically used in associative memory simulations), we exploit the representation-learning capabilities of the sparse feedforward circuit in the granular layer. Importantly, the incremental process of extracting sparse, distributed, neural-like representations from structured input stimuli is guided by the aforementioned unsupervised learning mechanisms: self-organising structural plasticity and local Bayesian-Hebbian synaptic tuning [1,2]. As a result, the synergistic combination of a recurrent attractor network with a sparse feedforward network that learns distributed representations in a non-spiking model shows promising potential for robustly handling challenging recognition tasks such as prototype extraction, clustering and classification (with an extra read-out layer trained separately from the core networks described above) under varying distortion and noise scenarios (Fig.1b). The recurrent attractor memory network with BCPNN has also been shown to extract temporal correlations and process sequential input (Fig.1c). It is worth emphasising that the locality and incremental nature of the adopted learning rule, together with the sparse connectivity and sparse hidden-layer representations, render the proposed cortex-like architecture particularly suitable for scaling, hardware implementation [3,4] and applications with multi-modal data streams as input.
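To make the locality of the learning rule concrete, the following is a minimal, rate-based sketch of a BCPNN-style layer: exponential moving-average probability traces, log-odds weights, and softmax competition within each hypercolumn. The class name and parameters (e.g. tau, units_per_hc) are illustrative assumptions rather than the exact formulation used in [1,2], and structural plasticity is omitted.

```python
# Illustrative sketch of a local Bayesian-Hebbian (BCPNN-style) layer.
# Assumptions: rate-based (non-spiking) units, exponential moving-average
# probability traces; names and constants are hypothetical, not from [1,2].
import numpy as np

class BCPNNLayer:
    def __init__(self, n_pre, n_hypercolumns, units_per_hc, tau=0.01, eps=1e-6):
        self.n_pre = n_pre
        self.n_hc = n_hypercolumns
        self.u_hc = units_per_hc
        self.n_post = n_hypercolumns * units_per_hc
        self.tau = tau          # trace time constant (learning rate)
        self.eps = eps          # keeps probability estimates away from zero
        # Running estimates of unit and pairwise activation probabilities
        self.p_i = np.full(n_pre, eps)
        self.p_j = np.full(self.n_post, eps)
        self.p_ij = np.full((n_pre, self.n_post), eps)

    def weights(self):
        # Bayesian-Hebbian weights: log-odds of co-activation vs. independence
        w = np.log(self.p_ij / np.outer(self.p_i, self.p_j))
        b = np.log(self.p_j)    # bias from each unit's prior activation probability
        return w, b

    def forward(self, x):
        # Support (net input) followed by a softmax within each hypercolumn,
        # i.e. soft winner-take-all competition among its minicolumn units
        w, b = self.weights()
        s = b + x @ w
        y = np.empty_like(s)
        for h in range(self.n_hc):
            sl = slice(h * self.u_hc, (h + 1) * self.u_hc)
            e = np.exp(s[sl] - s[sl].max())
            y[sl] = e / e.sum()
        return y

    def update(self, x, y):
        # Local, incremental Hebbian update of the probability traces:
        # only pre- and postsynaptic activity is used, no global error signal
        self.p_i += self.tau * (x - self.p_i)
        self.p_j += self.tau * (y - self.p_j)
        self.p_ij += self.tau * (np.outer(x, y) - self.p_ij)
```

Because the update relies only on pre- and postsynaptic activity traces, each synapse can be adapted independently and incrementally, which is what makes the rule attractive for scaling and parallel hardware implementations [3,4].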

Fig.1 A cortex-like modular neural network architecture (a) with Bayesian-Hebbian learning and self-organising structural plasticity for holistic pattern recognition, e.g. reconstruction and identification of noisy, distorted or occluded patterns, prototype extraction (b), and overlapping sequence disambiguation (c).

References

[1] Ravichandran NB, Lansner A, Herman P. Learning representations in Bayesian Confidence Propagation neural networks. In Proc. 20th IEEE IJCNN, 2020, Glasgow, UK. DOI: 10.1109/IJCNN48605.2020.9207061

[2] Ravichandran NB, Lansner A, Herman P. Semi-supervised learning with Bayesian Confidence Propagation Neural Network. In Proc. ESANN 2021, Bruges, Belgium. DOI: 10.14428/esann/2021.ES2021-156

[3] Podobas A, Svedin M, Chien SW, Peng IB, Ravichandran NB, Herman P, Lansner A, Markidis S. StreamBrain: an HPC framework for brain-like neural networks on CPUs, GPUs and FPGAs. In Proc. 11th International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies, 2021.

[4] Wang D, Xu J, Stathis D, Zhang L, Li F, Lansner A, Hemani A, Yang Y, Herman P, Zou Z. Mapping the BCPNN Learning Rule to a Memristor Model. Frontiers in Neuroscience. 2021;15.
