Concept superposition and learning in standard and brain-constrained deep neural networks
Garagnani, M. 2025. 'Concept superposition and learning in standard and brain-constrained deep neural networks'. In: 34th Annual Computational Neuroscience Meeting (CNS 2025), Workshop on "Brains and AI", Florence, Italy, 9 July 2025. [Conference or Workshop Item] (Forthcoming)
Abstract
The ability to combine (or "superpose") multiple internal conceptual representations is a fundamental skill we constantly rely upon, crucial in complex tasks such as mental arithmetic, abstract reasoning, and language comprehension. As such, any artificial system aspiring to implement these aspects of general intelligence should be able to support this operation.
In this talk, I will first propose a tentative operational definition that makes it possible to determine whether any cognitive agent, artificial or biological, can formally be considered capable of carrying out concept combination. I will then present results of recent computational simulations showing how deep, brain-constrained networks trained with biologically grounded (Hebb-like) continual learning mechanisms exhibit the spontaneous emergence of internal circuits (cell assemblies) that naturally support superposition. Finally, I will identify some of the functional and architectural characteristics of such networks that facilitate the natural emergence of this feature and that standard deep neural networks generally lack, concluding by suggesting possible directions for the development of future, better cognitive AI systems.
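The abstract does not give the model's details, but the following minimal Python/NumPy sketch illustrates the general idea of Hebb-like learning and superposition, on assumptions of my own: two sparse binary patterns are stored as mutually strengthened "cell assemblies" via outer-product Hebbian updates in a single recurrent layer, and a cue containing fragments of both patterns reactivates the two assemblies jointly. The network size n, learning rate eta, and threshold theta are illustrative values, not parameters from the talk.

    # Hypothetical sketch of Hebb-like assembly formation and superposition.
    # Not the author's model; parameters below are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100      # number of units (assumption)
    eta = 0.1    # Hebbian learning rate (assumption)

    # Two sparse binary "concept" patterns over the same population of units
    p1 = (rng.random(n) < 0.1).astype(float)
    p2 = (rng.random(n) < 0.1).astype(float)

    # Hebb-like outer-product learning: strengthen links between co-active units
    W = np.zeros((n, n))
    for p in (p1, p2):
        W += eta * np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections

    def recall(cue, steps=5, theta=0.05):
        """Iteratively threshold the recurrent input to complete a pattern."""
        x = cue.copy()
        for _ in range(steps):
            x = (W @ x > theta).astype(float)
        return x

    # A superposed cue containing random fragments of both concepts
    cue = np.where(rng.random(n) < 0.5, p1, 0) + np.where(rng.random(n) < 0.5, p2, 0)
    out = recall(np.clip(cue, 0, 1))

    # Both assemblies are largely reactivated from the single combined cue
    print("overlap with p1:", (out * p1).sum() / p1.sum())
    print("overlap with p2:", (out * p2).sum() / p2.sum())

In this toy setting, pattern completion through the learned recurrent weights recovers both assemblies at once from the mixed cue, which is the sense of "superposition" the abstract refers to; the brain-constrained networks discussed in the talk are, of course, far richer than this sketch.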
Item Type: Conference or Workshop Item (Talk)
Departments, Centres and Research Units:
Dates:
Event Location: Florence, Italy
Date range: 9 July 2025
Item ID: 38970
Date Deposited: 06 Jun 2025 09:00
Last Modified: 06 Jun 2025 09:00
URI: