Workshop on Synthetic Neuroethology            

 

University of Sussex, Brighton, UK, 9-10 September 2010

 

 

Poster Abstracts

 

Reconstructing the visual input experienced by honeybees during flight
W. Stuerzl, L. Dittmar, N. Boeddeker and M. Egelhaaf
Department of Neurobiology and Center of Excellence "Cognitive Interaction Technology"
Bielefeld University, Germany


We employ our recently developed model of honeybee eyes [1] for reconstructing the visual
input bees experience during learning and return flights in a cylindrical arena [2,3]. The
model, which extends the work of Seidl [4] and Giger [5], describes viewing directions and
acceptance angles of ommatidia over the full field of view of both eyes. The sensitivity
functions of individual ommatidia are modeled by radially symmetrical 2D-Gaussians with
spatially varying width. The model output is visualized in a planar arrangement of ommatidia
that resembles the hexagonal lattice of the real eyes.
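As an illustration only, a minimal numerical sketch of such a Gaussian acceptance function (the FWHM-to-sigma conversion and all names are assumptions, not the authors' code):

    import numpy as np

    def ommatidium_response(pixel_dirs, pixel_vals, axis, acceptance_angle):
        # pixel_dirs: (N, 3) unit viewing directions of the scene samples
        # pixel_vals: (N,) intensities; axis: (3,) unit ommatidial viewing direction
        # acceptance_angle: full width at half maximum of sensitivity, in radians
        ang = np.arccos(np.clip(pixel_dirs @ axis, -1.0, 1.0))
        sigma = acceptance_angle / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
        w = np.exp(-0.5 * (ang / sigma) ** 2)  # radially symmetric Gaussian weight
        return np.sum(w * pixel_vals) / np.sum(w)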
We reconstruct the visual input to the eyes of a bee using the following steps: (1) Bees are
filmed with calibrated high-speed stereo cameras, and their 3D-trajectories and body or head
yaw angles are estimated from the resulting stereo sequences. (2) Six virtual perspective
cameras are moved along the estimated trajectories in a computer model of the experimental
environment. In order to capture the full viewing sphere, the virtual cameras have their optical
axes oriented orthogonal to the six faces of a cube while their nodal points coincide. (3) The
resulting six image sequences are then remapped according to our bee eye model.
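For reference, selecting the cube face (and face coordinates) for a given viewing direction during such a remapping can be done with a generic cube-map lookup; this is not the authors' code, and the face ordering is an arbitrary assumption:

    import numpy as np

    def cube_face_lookup(d):
        # d: (3,) unit viewing direction; returns the cube face index and
        # face coordinates (u, v) in [-1, 1] for remapping into the eye model.
        ax = np.abs(d)
        if ax[0] >= ax[1] and ax[0] >= ax[2]:
            face = 0 if d[0] > 0 else 1          # +x / -x faces
            u, v = d[1] / ax[0], d[2] / ax[0]
        elif ax[1] >= ax[2]:
            face = 2 if d[1] > 0 else 3          # +y / -y faces
            u, v = d[0] / ax[1], d[2] / ax[1]
        else:
            face = 4 if d[2] > 0 else 5          # +z / -z faces
            u, v = d[0] / ax[2], d[1] / ax[2]
        return face, u, v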
References
[1] W. Stuerzl, N. Boeddeker, L. Dittmar, M. Egelhaaf. Mimicking honeybee eyes with a 280 degree FOV
catadioptric imaging system. Bioinspiration & Biomimetics 5 (2010).
[2] L. Dittmar, W. Stuerzl, E. Baird, N. Boeddeker, M. Egelhaaf. Goal seeking in honeybees: matching of
optic flow snapshots? J. Exp. Biol. 213 (2010).
[3] N. Boeddeker, L. Dittmar, W. Stuerzl, M. Egelhaaf. The fine structure of honeybee head and body yaw
movements in a homing task. Proc. R. Soc. B 277 (2010).
[4] R. Seidl. Die Sehfelder und Ommatidien-Divergenzwinkel von Arbeiterin, Koenigin und Drohn der
Honigbiene (Apis mellifica). PhD Thesis Technische Hochschule Darmstadt (1982).
[5] A.D. Giger. Honeybee vision: analysis of pattern orientation. PhD Thesis Australian National
University (1996).

----------------------------------------------------------------------

 

 

Applying a Swarm of Robots to a Garbage Collecting and Recycling Problem
Amani M. Benhalem and Patricia A. Vargas
ab354@hw.ac.uk, p.a.vargas@hw.ac.uk
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, UK


This work focuses on the application of Particle Swarm Optimization (PSO) (Clerc, 2006) to
a problem of garbage and recycling collection (Watanabe et al., 1989) using a swarm of
robots. Computational algorithms inspired by nature, such as Swarm Intelligence (Denby and
Hegarat-Mascleb, 2003; Kennedy and Eberhart, 2001), have been successfully applied to a
range of optimization problems. PSO is a technique derived from Swarm Intelligence and is
inspired by the collective behaviour of animals. Our aim is to train a group of robots to
behave and interact with one another so as to mimic the way a collective of animals
behaves, i.e. as a single cognitive entity. What we have achieved is a swarm of robots that
interacts like a swarm of insects, cooperating accurately and efficiently.
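The abstract does not reproduce the update rule; for reference, the canonical PSO velocity and position update (Clerc's constriction-style coefficients are a common default, assumed here) is:

    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
        # x, v: particle positions and velocities, shape (n_particles, dim)
        # pbest: per-particle best positions; gbest: neighbourhood/global best
        r1 = np.random.rand(*x.shape)
        r2 = np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v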
This poster gives an overview of the Particle Swarm Optimization algorithm and of how we
used it to solve the problem of garbage and recycling collection (Watanabe et al., 1989). It
shows the four prototypes that we used. We also describe the two different swarm topologies
that we have implemented, showing the results obtained, a comparative evaluation, and an
explanation of how and why we expected that combining them would enhance the original
PSO algorithm. Finally, we highlight our achievements and describe possible future
work.
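The two topologies are not named in the abstract; a commonly compared pair is the global (gbest) and ring (lbest) neighbourhoods. A sketch of how the best position visible to a particle would be selected under each (purely illustrative; names and parameters are assumptions):

    import numpy as np

    def neighbourhood_best(pbest, fitness, i, topology='ring', k=1):
        # pbest: (N, dim) personal-best positions; fitness: (N,) their fitnesses
        # Returns the best position visible to particle i under the topology.
        n = len(fitness)
        if topology == 'global':
            idx = np.arange(n)                     # every particle is a neighbour
        else:
            idx = np.arange(i - k, i + k + 1) % n  # ring of 2k+1 neighbours
        best = idx[np.argmin(fitness[idx])]        # assumes minimization
        return pbest[best]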
Keywords: Biologically Inspired Computation, Swarm Intelligence, Particle Swarm
Optimization, Evolutionary Robotics, Optimisation Problems.

 

----------------------------------------------------------------------

 

A comparison between flocking behaviour in evolved agents and sheep flocks
Andrew Taylor, Ruth Aylett
School of Mathematics and Computer Science,

Patrick Green
School of Life Sciences,
Heriot-Watt University
Edinburgh


This research presents a new approach to the study of flocking behaviour within the natural world. Flocking behaviour has been modelled within software simulation and upon robotic platforms; examples include the schooling of fish (Tu et al, 1990) and the flocking of birds (Reynolds, 1987). There are several benefits to flocking behaviour as a strategy, including increases in foraging and hunting efficiency and reduced exposure to predation.
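For context, Reynolds-style flocking is typically built from three local steering rules: separation, alignment and cohesion. A minimal sketch, with perception radius and weights as free assumptions:

    import numpy as np

    def boid_steering(pos, vel, i, radius=5.0, w_sep=1.5, w_ali=1.0, w_coh=1.0):
        # pos, vel: (N, 2) arrays; returns a steering vector for boid i
        d = np.linalg.norm(pos - pos[i], axis=1)
        nb = (d > 0) & (d < radius)               # neighbours within perception radius
        if not nb.any():
            return np.zeros(2)
        sep = np.sum((pos[i] - pos[nb]) / d[nb, None] ** 2, axis=0)  # move apart
        ali = vel[nb].mean(axis=0) - vel[i]                          # match velocity
        coh = pos[nb].mean(axis=0) - pos[i]                          # move to centre
        return w_sep * sep + w_ali * ali + w_coh * coh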

Within this research we contend that these models are inadequate to the task of validly reproducing flocking behaviour for three primary reasons: a) they impose a global homogeneity of behaviour across all agents; b) they are not evaluated against real-world instances of flocking behaviour; c) they ignore the evolution of a population's behaviour over time.

Within this research we attempt: a) to produce agents that engage in flocking behaviour and predator-prey interaction; b) to implement this model both within simulation and upon a robotics platform; c) to compare this behaviour with real instances of flocking in the form of sheep flocks; d) to model small groups of agents engaged in predator-prey interactions and their behavioural evolution over time.


As the basis of the agent architecture we use asynchronous spiking neural networks evolved with a genetic algorithm, using time to capture as the basis of the fitness function.


To validate this model we will compare its behaviour with sheep flocks using a variety of metrics such as complexity plots, mean social distance and the aggregation metrics proposed by Garnier et al (2004).
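As an example of such a metric, mean social distance can be computed as the average pairwise distance between flock members (the precise definition used in this work is not given; this is an assumption):

    import numpy as np

    def mean_social_distance(pos):
        # pos: (N, 2) positions; mean over all unordered pairs of flock members
        diff = pos[:, None, :] - pos[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        n = len(pos)
        return d[np.triu_indices(n, k=1)].mean()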
 

----------------------------------------------------------------------

 

Episodic memory for social interaction.
Paul Baxter, Rachel Wood and Tony Belpaeme
Centre for Robotics and Neural Systems, University of Plymouth.


ALIZ-E (Adaptive Strategies for Sustainable Long-term Social Interaction) is a
project aiming to explore the implementation of meaningful, temporally extended
social interactions between humans and robots. A core component of the systems
devised to supply this functionality will be episodic memory allowing the robot to
recall prior encounters with a user and to adapt its behaviour on the basis of previous
events. Episodic memory is generally implemented as a discrete component in current
artificial cognitive systems. In biological systems the functions of memory are
evidently more complex, and episodic memory can provide a useful example of the
benefits of a neuroethological perspective, where a behavioural definition has been
advanced (dispensing with the conscious recall requirement from human-based
research). A commitment to a developmental, memory-centred approach will allow
episodic memory to be considered as part of an integrated system. Episodic memory
can thus be fundamentally linked to the cognitive system as a whole, enabling it to
play a central part in the on-going development of interactive behaviour.

 

----------------------------------------------------------------------

 

Whisking with robots: The computational neuroethology of active vibrissal touch.

Tony Prescott, Martin Pearson, Ben Mitchinson, Sean Anderson, Jason Welsby, Tony Pipe
Sheffield University

UWE

Computational neuroethology seeks to provide insights into the
organization of behaviour by embedding brain models within robotic
systems that emulate key aspects of animal physiology. This approach is
particularly useful for investigating active sensing since artificial
transducers and actuators can place modelled neural circuits in a tight
coupling with the real world. The SCRATCHbot (Spatial Cognition and
Representation through Active TouCH) robot platform [1-3] is being
developed as an embodied model of the rat whisker system, and to serve
as a test-bed for hypotheses about whisking pattern generation and
vibrissal signal processing.

To provide an effective model we emulate key physical elements of the
rat whisker system at a scale approximately 4x larger. The robot 'snout'
supports left and right 3x3 arrays of macro-vibrissae, with each
vibrissal column actuated using a separate, miniature motor. An array of
short, non-actuated whiskers at the snout tip emulates the
micro-vibrissae. Joints in the neck provide pitch, yaw and elevation
control, whilst the body is supported and moved by three independent
motor drive units. This configuration provides similar degrees of
freedom for head and whisker positioning as are available to the rat.
The vibrissal shafts are made from a synthetic polymer with shape and
material properties similar to those of rat whiskers; each whisker base
is mounted in a Hall-effect transducer that accurately measures
deflection in two directions.

The robot control system incorporates models of sensorimotor loops at
multiple levels of the neuraxis. The behaviours we currently emulate
include anticipatory and feedback control of bilateral whisking pattern
generation [4, 5], cerebellar filtering of whisker signals to remove
movement-related artefacts [6] and an orienting response mediated by a
model superior colliculus using action arbitration by a model basal
ganglia [1, 2]. Sensory data obtained by exploring target surfaces with
the vibrissae is analyzed using biologically-inspired algorithms for
texture and shape recognition [3, 7].
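Reference [6] describes adaptive cancellation of self-generated whisker signals. One generic way to realize such cancellation, shown here only as an illustration and not as the authors' cerebellar model, is a least-mean-squares (LMS) filter that predicts the reafferent component from recent motor commands and subtracts it:

    import numpy as np

    def lms_cancel(motor_history, sensor_sample, weights, mu=0.01):
        # motor_history: recent motor commands (efference copy), shape (M,)
        # Predict the self-generated component of the whisker signal,
        # subtract it, and adapt the filter weights online.
        prediction = weights @ motor_history
        residual = sensor_sample - prediction      # externally caused signal
        weights += mu * residual * motor_history   # LMS weight update
        return residual, weights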

[1] Pearson, M. J., Mitchinson, B., Welsby, J., Pipe, T., & Prescott, T.
J. (2010). SCRATCHbot: Active tactile sensing in a whiskered mobile
robot. Paper presented at the The 11th International Conference on
Simulation of Adaptive Behavior.

[2] Mitchinson, B., Pearson, M., Pipe, T., & Prescott, T. J. (In press).
Biomimetic robots as scientific models: A view from the whisker tip. In
J. Krichmar (Ed.), Neuromorphic and Brain-based Robots. Boston, MA: MIT
Press.

[3] Prescott, T. J., Pearson, M. J., Mitchinson, B., Sullivan, J. C. W.,
& Pipe, A. G. (2009). Whisking with robots: From rat vibrissae to
biomimetic technology for active touch. IEEE Robotics & Automation
Magazine, 16(3), 42-50.

[4] Grant, R. A., Mitchinson, B., Fox, C., & Prescott, T. J. (2009).
Active touch sensing in the rat: Anticipatory and regulatory control of
whisker movements during surface exploration. Journal of
Neurophysiology, 101(2), 862-874.

[5] Mitchinson, B., Martin, C. J., Grant, R. A., & Prescott, T. J.
(2007). Feedback control in active sensing: rat exploratory whisking is
modulated by environmental contact. Proc Biol Sci, 274(1613), 1035-1041.

[6] Anderson, S., Pearson, M., Pipe, T., Prescott, T. J., Dean, P., and
Porrill, J. (In Press). Adaptive cancellation of self-generated sensory
signals in a whisking robot. IEEE Transactions on Robotics.

[7] Fox, C., Mitchinson, B., Pearson, M. J., Pipe, A. G., & Prescott, T.
J. (2009). Contact type dependency of texture classification in a
whiskered mobile robot. Autonomous Robots, 26(4), 223-239.

 

----------------------------------------------------------------------

 

Implementing a data mining approach to episodic memory modelling for artificial companions
Matthias Keysermann
School of Mathematical and Computer Sciences
Heriot-Watt University
mk231@hw.ac.uk
Supervisor: Dr. Patricia A. Vargas
P.A.Vargas@hw.ac.uk


In the context of living with robots, or with artificial companions in general, it
becomes increasingly important to make human-robot interaction more natural. As
human memory is involved in many neurological and cognitive processes, the
integration of an artificial memory model is a central aspect of constructing
believable companions.
An artificial companion can be thought of as an everyday helper which not only
remembers specific dates and appointments, but is also able to make intelligent
suggestions according to the current situation. In order to do this the
companion has to create a record of the events happening to a certain person or
in a limited environment.
We set out to model the part of memory which neuroscientific researchers
describe as episodic memory (Baddeley 2004, Parkin 2006). Events are made up of
single attributes which are arranged in a hierarchical structure. Apart from
time and location, these attributes include basic information such as the
subject and object of the current event, as well as the action performed by the
subject. A privacy attribute allows information to be restricted to specific
people. Furthermore, emotions are included. As capturing all this information
was not part of the project, sample data were created manually.
The task was the prediction of missing values, which is required for the
companion to make suggestions. Three variants of probabilistic classifiers were
built and evaluated on the created events: a Naive Bayes classifier (treating
the attribute structure as flat), a Bayesian network (overcoming some
shortcomings of the first classifier), and a modified version of the Naive
Bayes classifier (capable of dealing with the hierarchical structure) (Witten &
Frank 2005, Silla & Freitas 2009).
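To illustrate the prediction task, a minimal Laplace-smoothed Naive Bayes completion of a missing attribute value from the observed attributes of an event might look like this (attribute names and data structures are hypothetical, not the project's code):

    import math

    def predict_missing(event, candidates, class_counts, cond_counts, n_values, alpha=1.0):
        # event: observed attributes of the incomplete event, e.g. {'location': 'kitchen'}
        # class_counts[c]: count of candidate target value c in past events
        # cond_counts[(c, attr, val)]: co-occurrence counts; n_values[attr]: domain size
        total = sum(class_counts.values())
        best, best_lp = None, -math.inf
        for c in candidates:
            lp = math.log((class_counts.get(c, 0) + alpha)
                          / (total + alpha * len(candidates)))
            for attr, val in event.items():
                lp += math.log((cond_counts.get((c, attr, val), 0) + alpha)
                               / (class_counts.get(c, 0) + alpha * n_values[attr]))
            if lp > best_lp:
                best, best_lp = c, lp
        return best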
Incremental updates allow the models to be extended with new knowledge. Not all
events have to be stored; old events already incorporated in the model can be
deleted, i.e. only probabilities are kept. As probabilities change over time,
this method models a kind of forgetting process whose impact can be varied
by changing the weighting of old and new knowledge.
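The forgetting mechanism could be realized, for instance, as a weighted blend of old and new probability estimates (the specific weighting scheme is an assumption):

    def update_probability(old_p, new_p, weight_new=0.1):
        # Blend the probability estimated from new events into the stored one;
        # a larger weight_new makes the model forget old knowledge faster.
        return (1.0 - weight_new) * old_p + weight_new * new_p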
The poster will include information about the event structure and the sample
data, a brief description of the classifiers used and an overview of the different
evaluation scenarios and the corresponding results.

References
Baddeley, A. (2004), Your Memory: A User's Guide, new illustrated edn, Carlton Books.
Parkin, A. J. (2006), Essential Cognitive Psychology, Psychology Press.
Silla, C. N. & Freitas, A. A. (2009), A global-model naive Bayes approach to
the hierarchical prediction of protein functions, in W. Wang, H. Kargupta,
S. Ranka, P. Yu & X. Wu, eds, 'Proceedings of the 9th IEEE International
Conference on Data Mining (ICDM-2009)', IEEE Press, pp. 992-997.
URL: http://www.cs.kent.ac.uk/pubs/2009/2996
Witten, I. H. & Frank, E. (2005), Data Mining: Practical Machine Learning
Tools and Techniques, second edn, Morgan Kaufmann Publishers.
 

----------------------------------------------------------------------------------

 

Evolution of foraging behaviours of GRN controlled unicellular organisms in a simulated 2D and 3D environment
Authors: Michal Joachimczak (1), Joachim Erdei (2), Borys Wrobel (1,3)

We employ evolving artificial gene regulatory networks (GRNs) to
control simulated unicellular organisms (animats). GRNs in our model
are encoded in linear genomes which consist of elements corresponding
to transcription factors (TFs) and regulatory regions (promoters). The
affinity between TFs and promoters determines the topology of the
network. Sensory information is provided to an animat as externally
driven concentrations of selected TFs. Concentrations of selected
internally produced TFs are interpreted as signals for actuators.
Animat behaviour is evaluated for the ability to obtain resources
('food') in an environment in which chemical gradients can be sensed
locally. Animat motion obeys simplified Newtonian rules in a fluid. We
demonstrate high evolvability of GRN-based controllers using a genetic
algorithm. We have evolved GRNs that control foraging behaviours in
2D and 3D with a single food source. We have also obtained GRNs that
allow the animats to forage for food and avoid poisonous substances.
We investigate the evolved behaviours and the properties of evolved
GRNs. Our work is a step towards creating a simulated ecosystem in
which multiple unicellular and multicellular individuals can compete
for limited resources.
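The regulatory dynamics are not given in the abstract; purely as an illustration, a common form in GRN-based animat models updates each TF concentration from a saturating function of its weighted regulatory inputs (all names and constants are assumptions):

    import numpy as np

    def grn_step(conc, W, external, decay=0.1, dt=0.1):
        # conc: TF concentrations; W[i, j]: regulatory effect of TF j on TF i
        # (sign and magnitude derived from TF-promoter affinities)
        # external: sensor-driven input added to selected TF concentrations
        activation = 1.0 / (1.0 + np.exp(-(W @ conc)))   # saturating regulation
        dconc = activation - decay * conc + external
        return np.clip(conc + dt * dconc, 0.0, None)     # concentrations stay >= 0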

(1) Evolutionary Systems Laboratory, Institute of Oceanology, Polish
Academy of Sciences
(2) Department of Algorithms and System Modelling, Technical
University of Gdansk, Poland
(3) Laboratory of Bioinformatics, Adam Mickiewicz University in Poznan, Poland

 

---------------------------------------------------------------------------------------------

 

Blowfly brain-machine interface: Performance of a proportional controller in a closed-loop visual stabilization task

Naveed Ejaz, Kristopher Peterson, Holger G Krapp
Department of Bioengineering, Imperial College London
nejaz@imperial.ac.uk


While walking or flying, flies depend heavily on visual feedback for efficient motor control. Stabilizing gaze is an important strategy employed by the fly to maintain a stable flight attitude and avoid collisions. As a fly navigates around its environment, it experiences wide-field image motion - optic flow - across its visual field. Flies achieve gaze stabilization by analyzing optic flow information to minimize rotation-induced panoramic retinal image shifts. This closed-loop optomotor stabilization behavior has previously been shown to be highly adaptable, with flies being able to close the sensory-motor loop using novel configurations, i.e. using their front legs differentially to stabilize the motion of a panoramic pattern across their visual field. In such optomotor tasks, it has also been shown that the motor system receives visual input from the lobula plate tangential cells (LPTCs), which process optic flow parameters related to the self-motion of the fly.

We have developed a closed-loop brain-machine interface between a blowfly LPTC and a mobile robot platform that allows us to study the performance of different control laws that could account for optomotor control in the blowfly. In the first instance, an immobilized fly was placed in front of two computer monitors positioned at plus/minus 45 degrees azimuth relative to the fly's longitudinal body axis, with each monitor subtending an area of 50 degrees azimuth and 38 degrees elevation. An H1 LPTC was identified in the blowfly visual system by its preference for back-to-front motion across its ipsilateral visual field. Extracellular spiking activity from the left H1 cell was recorded and filtered to obtain a smooth estimate of the spiking rate. A proportional controller (P-controller) used this spiking rate to compute updated robot speed values at every time interval dt = 50 ms. The mobile robot platform was mounted on a turntable that was surrounded by a vertically oriented square-wave grating pattern (contrast ~ 100%, spatial wavelength = 11 degrees). A DC-shifted sinusoidal signal was used to control the speed of the turntable, which was free to rotate in one direction along the horizontal plane. Optic flow generated as a result of the motion of the turntable and robot was captured via high-speed cameras mounted on the robot and transmitted to the monitors at 200 fps.
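A minimal sketch of such a proportional control step, assuming the spike rate is exponentially smoothed and that the speed command is proportional to the deviation from a baseline rate (both are assumptions; the exact control law is not reproduced here):

    def smooth_rate(prev_rate, spikes_in_bin, dt=0.05, tau=0.2):
        # Exponential smoothing of binned spike counts into a rate estimate.
        alpha = dt / tau
        return (1.0 - alpha) * prev_rate + alpha * (spikes_in_bin / dt)

    def p_control_step(spike_rate, baseline_rate, kp):
        # Map the smoothed H1 firing rate to a robot speed command every dt = 50 ms.
        return kp * (spike_rate - baseline_rate)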

The turntable was tested at rotation frequencies from f = 0.03-3 Hz and the resulting robot speeds were recorded. Three values of the static feedback gain Kp (0.6, 1.0, 1.4) were probed for each set of turntable rotation frequencies. Bode gain and phase plots of the P-controller show a low-pass filter trend over the rotation frequencies tested, with the system being stable for f < 3 Hz and approaching instability at f = 3 Hz. The performance of the P-controller with static gain was tested against an adaptive controller, a P-controller with adaptive gain (integration time constant = 500 ms). The Bode gain plots for the P-controller (all static gains) are higher than those for the adaptive controller, while the Bode phase plots for both controllers were similar. Current work involves paired recordings of ipsilateral and contralateral H1 LPTCs and assessing the performance of both proportional and adaptive algorithms under bi-directional control.
 

-----------------------------------------------------------------------------------------------------------

 

An efficient and robust model of the precedence effect.
Tom Goeckel, Gerhard Lakemeyer, Hermann Wagner

University of Aachen


Binaural sound source localization systems based on simple interaural time difference (ITD) models
have significant problems when confronted with echoes. These algorithms are unable to distinguish
between the actual sources and their reflection sites. Humans resolve this problem with mechanisms
that are subsumed under the term precedence effect, which masks the directional information of the
reflection sites. Our goal was to find a model of the precedence effect that is both efficient and reliable
in reverberant environments. We based our algorithm on the interaural coherence (IC) value
as a reliability measure. IC-weighted ITD cues are accumulated over a time window of up to
100 ms, and a peak detection and time integration process determines the most reliable source directions.
Tests in simulated environments and on a mobile robot showed good localization performance, even
under the influence of reverberation and down to a signal-to-noise ratio of -15 dB for noise signals.
Echoes were, in nearly all of the test cases, not detected as distinct sound sources.
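A sketch of the core idea, accumulating IC-weighted ITD votes over a short window (frame length, window size and the normalization are assumptions, not the authors' implementation):

    import numpy as np

    def ic_weighted_itd(left, right, fs, frame=512, max_lag=32, n_frames=8):
        # Accumulate IC-weighted ITD votes over a window of n_frames frames
        # (sized so that the window covers up to ~100 ms, depending on fs).
        hist = np.zeros(2 * max_lag + 1)
        lags = np.arange(-max_lag, max_lag + 1)
        for k in range(n_frames):
            l = left[k * frame:(k + 1) * frame]
            r = right[k * frame:(k + 1) * frame]
            # circular shifts used for brevity; a real system would window properly
            xc = np.array([np.dot(l, np.roll(r, lag)) for lag in lags])
            xc /= (np.linalg.norm(l) * np.linalg.norm(r) + 1e-12)
            ic = xc.max()                 # interaural coherence = peak correlation
            hist[xc.argmax()] += ic       # reliability-weighted ITD vote
        return (hist.argmax() - max_lag) / fs   # most reliable ITD in seconds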

 

 

-------------------------------------------------------------------------------------------------------------

 

Competence Comparison of Collision Sensitive Visual Neural Systems in Dynamic Environments
Shigang YUE (1) and F. Claire RIND (2)

(1) Department of Computing and Informatics
University of Lincoln, Brayford Pool, Lincoln, LN6 7TS United Kingdom
(2) Ridley Building, School of Biology and Psychology
University of Newcastle upon Tyne, Newcastle upon Tyne, NE1 7RU United Kingdom
 

Abstract
The lobula giant movement detector (LGMD) and directional selective neurons (DSNs) are two types
of identified neurons found in the visual systems of insects such as locusts. A recent modelling
study showed that the LGMD and a combination of DSNs could each be trained for collision
recognition in complex environments. However, the way the two types of specialized visual
neurons could be used together has not yet been investigated. To design reliable and robust
collision avoidance vision systems for autonomous robots or cars, it may be best to combine both
systems but what does each do best? To answer this it is necessary to compare the competence of
the LGMD and DSNs for collision recognition. Fortunately, evolutionary computation provides a
useful tool not only to compare the competence of the two collision recognition neural systems
but also to investigate the development of possible cooperation between the two different types of
neural system in specific environments. In this paper, we use a genetic algorithm to compare an
LGMD system with DSNs; both the LGMD and the DSNs are embodied in the complex visual
system of an agent. Three different collision recognition neural subsystems exist within each
agent, each with a different neural organization: an LGMD system, a DSN system, and a hybrid
system combining the two. Within each agent, a switch gene determines which of the three
neural subsystems plays the collision recognition role.
The agents with different functioning neural subsystems exist and evolve simultaneously in a
robotic environment. During the evolution, agents with each type of functioning neural subsystem
are evaluated according to their performance on collision recognition tasks. We found that,
after around 100 generations of evolution, the LGMD subsystem showed strong competence and
numerically dominated the whole population under our experimental conditions; although when
evolved in isolation, without competition from other types of neural networks, hybrid agents
could perform very well. Similar results were also obtained when these complex visual neural
systems (agents) evolved in driving scenarios. The experiments suggest that, in these specific
environments, the LGMD is able to build up its ability for collision prediction quickly and
robustly, thereby reducing the chance of other types of neural network playing the same role. The
results also provide useful information for the design of novel artificial vision systems.
Keywords: visual motion, collision recognition, LGMD, direction selective, evolution, visual
neural systems, comparison
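For illustration, the switch-gene mechanism described above might be encoded as follows (a sketch under assumed encodings, not the authors' implementation):

    import random

    SUBSYSTEMS = ('LGMD', 'DSN', 'hybrid')

    def active_subsystem(genome):
        # A single 'switch gene' selects which subsystem controls the agent.
        return SUBSYSTEMS[genome['switch'] % len(SUBSYSTEMS)]

    def mutate(genome, rate):
        g = dict(genome)
        if random.random() < rate:
            g['switch'] = random.randrange(len(SUBSYSTEMS))
        return g

    def evolve(population, fitness_fn, n_generations=100, mut_rate=0.05):
        # Agents carrying all three subsystem types compete in one population;
        # selection acts on the collision recognition performance of whichever
        # subsystem the switch gene makes active.
        for _ in range(n_generations):
            scored = sorted(population, key=fitness_fn, reverse=True)
            parents = scored[:len(scored) // 2]
            population = [mutate(random.choice(parents), mut_rate) for _ in population]
        return population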

 

------------------------------------------------------------------------------------------------------------------------------------

Neurodynamics of a network of coupled phase oscillators with synaptic plasticity in minimally cognitive evolutionary robotics tasks

Renan Moioli

CCNR, University of Sussex

 

This work explores the neuronal synchronisation and phase information dynamics of an enhanced version of the widely used Kuramoto model of coupled phase oscillators. The framework is applied to a simulated robotic agent engaged in minimally cognitive tasks. The first experiment is an active categorical perception task in which the robot has to discriminate between moving circles and squares. In the second task, the robotic agent has to approach moving circles with both normal and inverted vision, adapting to both scenarios. Concluding the work, the first experiment is extended by incorporating a mechanism inspired by the Hebbian theory of synaptic plasticity, with two aspects being studied: the impact of different neural network temporal dynamics, and the effect of different phase sensitivity functions on the synchronisation patterns and phase dynamics observed in the neural network. The outcomes of this research contribute not only to uncovering the role of neuronal synchronisation and phase information in the generation of cognitive behaviours, but also to the understanding of oscillatory properties in neural networks.
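For reference, the standard Kuramoto phase update together with one generic Hebbian-style rule for adapting the couplings (the plasticity rule shown is a common choice from the literature, not necessarily the one used in this work):

    import numpy as np

    def kuramoto_step(theta, omega, K, dt=0.01, eps=0.1):
        # theta: oscillator phases; omega: natural frequencies
        # K[i, j]: coupling strength from oscillator j to oscillator i
        phase_diff = theta[None, :] - theta[:, None]       # theta_j - theta_i
        dtheta = omega + (K * np.sin(phase_diff)).sum(axis=1)
        # Hebbian-like plasticity: couplings grow between synchronized pairs
        dK = eps * (np.cos(phase_diff) - K)
        return theta + dt * dtheta, K + dt * dK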

 

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------