In this talk, I will first propose a tentative operational definition that makes it possible to determine whether any cognitive agent, artificial or biological, can formally be considered capable of carrying out concept combination. I will then present results of recent computational simulations illustrating how deep, brain-constrained networks trained with biologically grounded (Hebb-like) continual learning mechanisms exhibit the spontaneous emergence of internal circuits (cell assemblies) that naturally support superposition. Finally, I will try to identify some of the functional and architectural characteristics of such networks that facilitate the natural emergence of this feature and which, in contrast, modern/classical deep NNs generally lack, concluding by suggesting possible directions for the development of future, better cognitive AI systems.