
CNS*2022 Workshop on Bio-inspired active AI

Introduction

Recent advances in AI, and notably in deep learning, have proven incredibly successful at solving specific complex problems (e.g. beating the best human players at Go, and driving cars through cities). But as we learn more about these approaches, their limitations are becoming more apparent: high energy consumption, fragility, and a lack of transferability between tasks.

These limitations are particularly apparent when contrasted with naturally evolved intelligence. While no animals can play Go or drive cars, they are remarkably good at doing what they have evolved to do. For instance, ants learn to forage effectively despite their tiny brains and minimal exploration of their world. We argue that this difference arises because natural intelligence is a property of closed-loop brain-body-environment interactions. Evolved innate behaviours, in concert with specialised sensors and neural circuits, extract and encode task-relevant information with maximal efficiency, aided by mechanisms of selective attention that focus learning on task-relevant features.

In this workshop, we will explore how we can learn from mini-brains and computational models to better understand intelligence and pave the way to building better AI.

Schedule

Wednesday, 20 July 2022, Melbourne Convention Centre
Room: TBD

Time (Melbourne local time)   Speaker   Title
9:00 - 9:10   James Knight & Thomas Nowotny   Welcome
9:10 - 9:45   Trevor Murray   Quantifying Australian bull ants’ navigational behaviour in complex environments
9:45 - 10:20   James Knight   Insect-inspired robot navigation
10:20 - 10:40   Coffee Break
10:40 - 11:15   Katja Sporar Klinge   Vision in dynamically changing environments
11:15 - 11:50   Yuri Ogawa   Neural responses of hoverfly Target Selective Descending Neurons to reconstructed target pursuits
11:50 - 13:30   Lunch Break
13:30 - 14:05   Karin Nordstrom   Target detection in visual clutter
14:05 - 14:40   Andre van Schaik   Neuromorphic Engineering Needs Closed-Loop Benchmarks
14:40 - 15:15   Dinis Gokaydin   How do flies (and humans) detect temporal patterns?
15:15 - 15:35   Coffee Break
15:35 - 16:00   Paul Haider   Bio-inspired AI and "AI-inspired biology"
16:00 - 16:25   Pawel Herman   Cortex-like neural network architecture with local Bayesian-Hebbian learning for holistic pattern recognition
16:25 - 16:50   Srikanth Ramaswamy   What can deep neural networks learn from neuromodulatory systems?
16:50 - 17:25   Luca Manneschi   Efficient reservoir computing architectures
17:25 - 17:30   James Knight & Thomas Nowotny   Closing remarks