Attractor Basins of Discrete Networks
Implications on self-organisation and memory
by
Andrew Wuensche
andy AT ddlab DOT org
for the degree of D.Phil at
THE UNIVERSITY OF SUSSEX
School of Cognitive and Computing Sciences
October 1996, Revised Feb 1997
The thesis consists of 239 pages with 71 figures, plus appendix figures.
You can download a scanned PDF (38.9MB).
I believe the university charges about 10 pounds plus postage
for a hard copy of the complete thesis; please contact:
Celia McInnes, Librarian
School of Cognitive and Computing Sciences
University of Sussex, Falmer, Brighton BN1 9QH, UK
tel: +44 1273 606755, ext. 2405
FAX: +44 1273 671320
email: celiam@cogs.susx.ac.uk
Only the abstract, contents, overview and selected references
are included below:
See also
Discrete Dynamics Lab
Abstract
New tools are available for reconstructing the attractor basins of discrete
dynamical networks where state-space is linked according to the network's dynamics.
In this thesis the computer software
"Discrete Dynamics Lab"
is applied to
examine simple networks ranging from cellular automata (CA) to random Boolean
networks (RBN), which have been widely applied as idealised models of physical
and biological systems, to search for general principles underlying their
dynamics. The algorithms and methods for generating pre-images for both CA
and RBN, and reconstructing and representing attractor basins are described,
and also considered in the mathematical context of random directed graphs.
RBN and CA provide contrasting notions of self-organisation. RBN
provide models of hierarchical categorisation in biology, for example memory
in neural and genomic networks. CA provide models at the lower level of
emergent complex pattern. New measures and results are presented on CA
attractor basins and how they relate to measures on local dynamics and
the Z parameter, characterising ordered to "complex" to chaotic behaviour.
A method is described for classifying CA rules by an entropy-variance measure,
which allows glider rules and related complex rules to be found automatically,
giving a virtually unlimited sample for further study.
The dynamics of RBN and intermediate network architectures are examined
in the context of memory, where categorisation occurs at the roots of subtrees
as well as at attractors. Learning algorithms are proposed for "sculpting" the
basin of attraction field. RBN are proposed as a possible neural network model,
and also discussed as a model of genomic regulatory networks, where cell types
have been explained as attractors.
Contents
1. Introduction
1.1. Discrete Dynamical Networks
1.2. Random Boolean Networks
1.3. Cellular Automata
1.4. Symmetries in Cellular Automata
1.5. Emergence in Cellular Automata
1.6. Cellular Automata Local and Global Measures
1.7. Cellular Automata parameters
1.8. Emergence in Random Boolean Networks
1.9. Random Boolean Networks Local and Global measures
1.10. Random Boolean Network Parameters
1.11. Learning and the inverse problem
2. Basins of Attraction
2.1. Introduction
2.2. Computing Pre-Images and Attractor Basins
2.3. Constructing Attractor Basins
2.4. Portraying Attractor Basins
3. Attractor Basins of Random Maps
3.1. Introduction
3.2. Random Graphs
3.3. Random Maps
3.4. Basins of Attraction of Random Maps
3.5. RBN and CA in the context of Random Maps
3.6. The Random Map Reverse Algorithm applied to CA and RBN
3.7. Conclusion
4. Self-Organisation in One-D Cellular Automata
4.1. Introduction
4.2. One-Dimensional CA architecture
4.3. Space-time patterns and basins of attraction
4.4. Computing pre-images
4.5. Constructing and portraying CA attractor basins
4.6. Complex rules and gliders
4.7. Glider interactions and basins of attraction
4.8. Parameters and measures
4.9. Neighbourhood lookup frequency and entropy
4.9.1. Ordered Dynamics
4.9.2. Chaotic Dynamics
4.9.3. Complex Dynamics
4.10. Classifying rules by entropy-density
4.11. Automatically classifying rule-space by input-entropy variance
4.12. The Z-parameter
4.13. Calculating the Z-parameter
4.14. Relating the lambda parameter and Z
4.15. Attractor Basin Topology; G-density and In-Degree frequency
4.15.1. G-density
4.15.2. In-Degree frequency
4.16. G-density - variation with system size
4.17. G-density and the Z-parameter
4.18. Summary and Discussion
5. Memory in Random Boolean Networks
5.1. Introduction
5.2. Random Boolean Networks
5.3. Random Boolean Network Architecture
5.4. Space-time patterns and basins of attraction
5.5. Computing RBN pre-images
5.6. Constructing and Portraying RBN Attractor Basins
5.7. Intermediate architecture between CA and RBN
5.7.1. Biased random wiring
5.7.2. Biased random rule mix
5.8. Comparing CA and RBN
5.9. RBN Learning Algorithms
5.9.1. Learning by re-wiring
5.9.2. Learning by mutating the rule scheme
5.9.3. Sculpting the basin of attraction field and the inverse problem
5.10. The Emergence of Memory
5.10.1. Memory, far from equilibrium
5.10.2. RBN as a neural model
5.11. Genomic regulatory networks
5.11.1. RBN as models of Genomic regulatory networks
5.11.2. Global and Local Measures
5.11.3. Attractor basins
5.11.4. Frozen Regions and Frozen Skeletons
5.11.5. The Derrida plot
5.11.6. Damage spread
5.11.7. Input entropy
5.11.8. RBN parameters as applied to genomic regulatory networks
5.11.9. Network wiring k, and size n
5.11.10. The P parameter
5.11.11. Canalising inputs
5.11.12. Preliminary results
5.12. Memory in RBN - Conclusion
6. Conclusions and open questions
References
Appendices
1. Garden-of-Eden density / system size graphs
k=3 rules and k=5 totalistic rules
2. Sample of glider rules, k=5.
Space-time patterns and the basin of attraction field
3. Sample of glider rules, k=5, 6, 7.
Space-time patterns, neighbourhood lookup frequency and entropy
4. Examples of subtrees for large lattice sizes n=20 to 150
for a k=5 rule glider
5. Automatic Rule Sample
5.1. k=5 rules
5.2. k=6 rules
5.3. k=7 rules
6. Reverse Algorithm and Z parameter Code
6.1. Random Maps
6.2. Cellular Automata
6.3. Random Boolean Networks
6.4. Z parameter
7. Discrete Dynamics Lab
7.2. DDLab Manual, introductory sections only
7.3. Network size limitations
Overview
Discrete Dynamical Networks are central to our current perception of a wide
range of natural and artificial phenomena drawn from many areas of science:
from physics to biology to cognition, to social and economic organisation,
to parallel computation and Artificial Life.
It can be argued that the dynamics of non-equilibrium parallel
processing systems composed of discrete interacting components create
and maintain much of the complex biological phenomena that dominate our
world. Various types of discrete dynamical networks are increasingly being
applied as idealised models of physical and biological phenomena. For this
reason, and also because of their intrinsic interest as mathematical/physical
systems, finding general principles underlying the dynamics of the idealised
networks themselves has become an important task. Their behaviour is difficult
to describe by classical mathematics using its tools of partial differential
equations and continuous dynamics.
This thesis applies computer simulation tools developed by the author,
known as
"Discrete Dynamics Lab" (DDLab) and available on the Internet
(Wuensche 1996), to examine a range of simple discrete dynamical networks
ranging from cellular automata (CA) to random Boolean networks (RBN). In
particular, local dynamics is placed in the context of the global dynamics
of the network, represented by its basin of attraction field or a fragment
thereof. The terminology is borrowed from the notion of the "phase portrait"
which proved so powerful in continuous dynamics. Constructing and visualising
sub-trees, basins of attraction and the entire basin of attraction field
of discrete dynamical networks (referred to collectively as attractor basins),
and relating their characteristics to local dynamical trajectories was
introduced in the author's book
"The Global Dynamics of Cellular Automata"
(Wuensche and Lesser 1992a). This thesis extends the investigation into
CA and embarks on new areas involving RBN. New results of computer
experiments are presented, some of which are preliminary based on work
in progress, and lines of future enquiry are described. The thesis also
allows itself some interpretations, conjectures and speculations relating
to the results, and how these might provide insights into self-organisation
and memory. It is hoped that the distinction between hard results and more
general discussion will be clear.
The capability of constructing attractor basins depends on reverse
algorithms invented by the author for directly computing the
predecessors (known as pre-images) of network states. Different reverse
algorithms apply to CA and RBN, though the RBN method encompasses
CA.
These methods are in general orders of magnitude faster than the brute-force
method of constructing an exhaustive map of the network's dynamics. The
exhaustive method rapidly becomes computationally intractable with increasing
network size, so it is limited to small systems. It applies to all network
types and also allows the attractor
basins of random maps to be constructed. These three independent methods
together form a valuable reality check on the correctness of the
computed pre-images and attractor basins.
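To make the distinction concrete, the sketch below (plain Python, not taken from DDLab; all function names are illustrative) shows the brute-force exhaustive approach for a small 1D CA: every state's successor is computed, the map is inverted to obtain pre-images, and states are grouped into basins. It is this enumeration over all 2^n states that the reverse algorithms avoid.

```python
# Brute-force "exhaustive map" sketch for a small binary 1D CA: enumerate all
# 2**n states, record each state's successor, invert the map for pre-images,
# and group states into basins. Illustrative only, not DDLab code.
from collections import defaultdict

def step_ca(state, rule, k=3):
    """One synchronous update of a 1D binary CA with periodic boundaries;
    `rule` is a lookup table indexed by the neighbourhood value."""
    n, r, out = len(state), k // 2, []
    for i in range(n):
        idx = 0
        for j in range(-r, r + 1):
            idx = (idx << 1) | state[(i + j) % n]
        out.append(rule[idx])
    return tuple(out)

def exhaustive_map(n, rule, k=3):
    """Map every one of the 2**n states to its successor."""
    states = [tuple((s >> b) & 1 for b in reversed(range(n))) for s in range(2 ** n)]
    return {s: step_ca(s, rule, k) for s in states}

def basins(successor):
    """Invert the map to get pre-images, then follow each state to its attractor."""
    pre = defaultdict(list)
    for s, t in successor.items():
        pre[t].append(s)
    garden_of_eden = [s for s in successor if s not in pre]   # states with no pre-image

    def attractor_of(s):
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = successor[s]
        cycle = sorted(seen, key=seen.get)[seen[s]:]          # states on the attractor cycle
        return min(cycle)                                     # canonical basin label

    by_basin = defaultdict(list)
    for s in successor:
        by_basin[attractor_of(s)].append(s)
    return pre, garden_of_eden, by_basin

# Example: elementary rule 110 on 8 cells (Wolfram convention: bit v of the
# rule number is the output for neighbourhood value v).
rule110 = [(110 >> v) & 1 for v in range(8)]
succ = exhaustive_map(8, rule110)
pre, goe, fields = basins(succ)
print(len(fields), "basins;", len(goe), "garden-of-Eden states out of", len(succ))
```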
The CA reverse algorithm forms the basis for a trajectory convergence
parameter, named the Z parameter (Wuensche and Lesser 1992a, Wuensche
1994a), a sort of inverse Lyapunov exponent, which predicts the
bushiness of sub-trees in attractor basins. Measures of bushiness are
captured by "garden-of-Eden" density and more generally by the
distribution of in-degree. It is shown that these measures correlate
with the quality of dynamics, from ordered to chaotic, where order turns out
to be the most bushy and chaos the least. A histogram of the in-degree
distribution shows contrasting profiles for order and chaos; for complex
rules the distribution seems to follow a power law.
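As a concrete illustration (a minimal Python sketch, not DDLab code; the rule choices and function names are mine), garden-of-Eden density and the in-degree histogram can be measured exhaustively for small elementary CA, contrasting an ordered, a complex and a chaotic rule:

```python
# Garden-of-Eden density and in-degree histogram of a small 1D CA, obtained by
# exhaustively counting how many states map onto each state. Per the thesis,
# ordered rules tend towards high G-density, chaotic rules towards low.
from collections import Counter

def successor(s, n, rule):
    """One step of an elementary (k=3) CA on n cells; the state is packed
    into an int, with bit i holding cell i."""
    t = 0
    for i in range(n):
        left   = (s >> ((i + 1) % n)) & 1
        centre = (s >> i) & 1
        right  = (s >> ((i - 1) % n)) & 1
        v = (left << 2) | (centre << 1) | right
        t |= ((rule >> v) & 1) << i
    return t

def in_degree_profile(rule, n):
    counts = Counter(successor(s, n, rule) for s in range(2 ** n))
    in_deg = [counts.get(s, 0) for s in range(2 ** n)]
    g_density = in_deg.count(0) / 2 ** n       # fraction of garden-of-Eden states
    return g_density, Counter(in_deg)          # in-degree -> number of states

for rule in (250, 110, 30):                    # ordered, complex, chaotic examples
    g, hist = in_degree_profile(rule, n=12)
    print(f"rule {rule:3}: G-density {g:.2f}, histogram {dict(sorted(hist.items()))}")
```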
Whereas RBN provide models in biology because of their non-local
connections and heterogeneous rules, CA provide models in physics with
its notions of continuous space and universal laws. Contrasting notions
of self-organisation are apparent between the two systems. RBN provide
models of hierarchical categorisation in biology, for example memory in
neural, genomic, and immune networks. CA provide models of
self-organisation at the lower level of emergent interacting space-time
configurations, particles or "gliders", metaphors for the emergence of
life from the stuff of inanimate physics by a process of
self-organisation.
In general, coherent space-time configurations cannot emerge in RBN
because of their non-locality. On the other hand, a CA's ability to
flexibly categorise distributed patterns is subject to severe symmetry
constraints. Different and contrasting "edge-of-chaos", or phase
transition, notions are seen to apply to CA and RBN. In the case of CA,
this is the phase in rule-space where glider behaviour might emerge,
quantified by rule parameters such as the lambda parameter (Langton
1990) and the author's Z parameter, measures on the dynamics such as the
"input-entropy" and its variance, and by measures on the topology of
attractor basins and sub-trees, such as the distribution of in-degree
(Wuensche 1994a).
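For reference, Langton's lambda parameter has a simple form for binary rules: the fraction of lookup-table entries whose output is the non-quiescent state. The sketch below (Python, illustrative only; the Z parameter itself derives from the reverse algorithm and is not reproduced here) computes lambda for elementary rules:

```python
# Langton's lambda for a binary rule table: the fraction of the 2**k lookup
# entries whose output is the non-quiescent state 1 (taking 0 as quiescent).
def langton_lambda(rule_number, k=3):
    entries = 2 ** k
    ones = bin(rule_number & ((1 << entries) - 1)).count("1")
    return ones / entries

print(langton_lambda(110))   # rule 110: 5 of 8 entries output 1 -> 0.625
print(langton_lambda(30))    # rule 30:  4 of 8 entries output 1 -> 0.5
```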
In RBN, as applied to gene regulatory networks (Kauffman 1969, Somogyi
and Sniegoski 1996), the phase transition
represents the phase in the wiring/rule setup where categorisation of
patterns of gene activation, the network's memory, is sufficiently
versatile for adaptive behaviour, but stops short of chaos to ensure reliable
behaviour. Tuning the degree of "canalisation" at the level of rules
(Kauffman 1984, Harris et al 1997) moves behaviour across the
transition, though the characteristics of network connections also play
a role. A measure at the level of dynamics is given by the Derrida plot
(Derrida and Stauffer 1986), analogous to the Lyapunov exponent in
continuous dynamics. There are further measures such as damage
spreading, the percolation of frozen regions, input-entropy over time
relating to single genes, and the transient/frozen skeleton/attractor
characteristics.
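A minimal Python sketch of the Derrida measure is given below (the network construction and function names are assumptions for illustration, not DDLab's): pairs of states at a given Hamming distance are advanced one step and the distance after the step is averaged; a slope above 1 near the origin signals divergence (chaos), below 1 convergence (order).

```python
# Derrida-plot sketch for a random Boolean network: perturb a random state in
# h positions, advance both copies one step, and record the Hamming distance
# after the step, averaged over many samples.
import random

def random_rbn(n, k, rng):
    """Each node gets k random input nodes and a random Boolean lookup table."""
    wiring = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    rules = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return wiring, rules

def step(state, wiring, rules):
    nxt = []
    for inputs, table in zip(wiring, rules):
        idx = 0
        for j in inputs:
            idx = (idx << 1) | state[j]
        nxt.append(table[idx])
    return nxt

def derrida_point(h, wiring, rules, rng, samples=200):
    n, total = len(wiring), 0
    for _ in range(samples):
        a = [rng.randint(0, 1) for _ in range(n)]
        b = a[:]
        for i in rng.sample(range(n), h):        # flip h distinct positions
            b[i] ^= 1
        a1, b1 = step(a, wiring, rules), step(b, wiring, rules)
        total += sum(x != y for x, y in zip(a1, b1))
    return total / samples

rng = random.Random(0)
wiring, rules = random_rbn(n=100, k=3, rng=rng)
for h in (1, 5, 10, 20, 40):
    print(h, "->", round(derrida_point(h, wiring, rules, rng), 1))
```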
The body of the thesis goes on to discuss these ideas in depth, based on
published work by the author and work in progress. The introductory
chapter describes and defines simple discrete dynamical networks ranging
from CA to RBN and explains how network dynamics can be understood
globally in terms of attractor basins. Comparisons are made with
attractor basins in classical continuous dynamics. Previous work and
applications in the field are discussed. Ideas of emergence, phase
transitions and various measures and parameters relating to CA and RBN
dynamics are introduced.
Chapter 2 looks more closely at the methods for constructing and
portraying attractor basins.
Chapter 3 considers discrete
dynamical networks and their attractor basins in the mathematical
context of directed maps in random graph theory. Discrete dynamical
networks are a subclass of random maps. Attractor basins of random maps,
possibly incorporating some bias in the random mapping, are amenable to
probabilistic analysis, which can be checked against a numerical
reconstruction using
DDLab. This is work in progress in collaboration
with Christian Reidys (Reidys and Wuensche 1997).
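As an example of the kind of probabilistic check involved, the sketch below (illustrative Python, not the collaboration's code) builds an unbiased random map over N states, compares its measured garden-of-Eden density with the large-N expectation of approximately 1/e, and counts its attractor basins:

```python
# Random map sketch: each state gets a successor chosen uniformly at random.
# For a uniform random map, the expected fraction of states with no pre-image
# (garden-of-Eden states) tends to 1/e for large N.
import math
import random

def random_map(N, rng):
    return [rng.randrange(N) for _ in range(N)]

def g_density(succ):
    has_pre = set(succ)
    return sum(1 for s in range(len(succ)) if s not in has_pre) / len(succ)

def count_basins(succ):
    """Number of distinct attractor cycles in the functional graph."""
    label = {}
    def attractor(s):
        seen = {}
        while s not in seen and s not in label:
            seen[s] = True
            s = succ[s]
        root = label[s] if s in label else s   # cycle closure or known label
        for t in seen:
            label[t] = root
        return root
    return len({attractor(s) for s in range(len(succ))})

rng = random.Random(1)
succ = random_map(4096, rng)
print("G-density:", round(g_density(succ), 3), "(expected ~", round(1 / math.e, 3), ")")
print("basins:", count_basins(succ))
```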
Chapter 4 focuses on CA and develops ideas first presented in the paper
"Complexity in One-D Cellular Automata" (Wuensche 1994a) and subsequent
work. The reverse algorithm for generating pre-images, and the Z
parameter which derives from this procedure, are reviewed. The chapter
looks in particular at new measures and results on attractor basins and
how they relate to measures on local dynamics, characterising ordered to
"complex" to chaotic behaviour, where complex behaviour corresponds to
Wolfram's class 4 (Wolfram 1984a). A method is described for
classifying CA rules by a measure of the variance of input-entropy over
time. The method allows glider rules that support coherent interacting
configurations and related complex rules to be found automatically
giving a virtually unlimited sample for further study. Glider dynamics
in CA is of prime interest as an example of self-organisation and
emergence in simple systems, where all aspects of the system can be
fully defined.
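A minimal sketch of the input-entropy measure itself is shown below (Python, illustrative; the rule choices are mine, not the thesis's): at each time step the frequency with which each neighbourhood is looked up across the lattice is tallied, and the Shannon entropy of that distribution is recorded; low mean entropy suggests order, high mean with low variance suggests chaos, and high variance over time flags candidate complex, glider-supporting rules.

```python
# Input-entropy sketch for an elementary (k=3) CA: per-step Shannon entropy of
# the neighbourhood lookup frequency across the lattice, with its mean and
# variance over a run of the dynamics.
import math
import random

def input_entropy_stats(rule, n, steps, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    entropies = []
    for _ in range(steps):
        counts = [0] * 8
        nxt = []
        for i in range(n):
            v = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
            counts[v] += 1
            nxt.append((rule >> v) & 1)
        h = -sum(c / n * math.log2(c / n) for c in counts if c)
        entropies.append(h / 3)                # normalise by log2(8) = 3
        state = nxt
    mean = sum(entropies) / steps
    var = sum((h - mean) ** 2 for h in entropies) / steps
    return mean, var

for rule in (250, 110, 30):                    # ordered, complex, chaotic examples
    m, v = input_entropy_stats(rule, n=150, steps=300)
    print(f"rule {rule:3}: mean entropy {m:.3f}, variance {v:.4f}")
```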
Chapter 5 focuses on RBN, and develops ideas first presented in the
papers "The Ghost in the Machine" (Wuensche 1993a) and "The Emergence of
Memory" (Wuensche 1994b). Recent collaborative work on RBN as models of
genomic regulatory networks is also described.
The reverse algorithm for generating pre-images in RBN is explained.
The dynamics of RBN and intermediate network architectures are examined, in
particular in the context of memory. The notion of memory far from equilibrium,
where categorisation occurs at the roots of subtrees as well as at attractors,
is proposed.
Algorithms for learning by "sculpting" the basin of attraction field are
described. The inverse problem of finding the set of minimal RBN that will
satisfy a predetermined set of transitions is discussed. Alternative methods
for solving the inverse problem have recently been formulated independently
by Manor Askenazi
(1996) and John Myers (1996). The biological implications
of memory far from equilibrium, and of RBN or networks of RBN as the basis of a
neural network model, are discussed.
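One way such a learning step can work, sketched below in Python purely as an illustration (a hedged reconstruction, not the thesis's algorithm verbatim), is to mutate the rule scheme so that a chosen aspirant state becomes a pre-image of a target state: each node's lookup entry addressed by the aspirant is overwritten with that node's value in the target. Side effects on other transitions sharing those entries are what make learning a matter of sculpting the whole basin of attraction field rather than adding isolated links.

```python
# Hedged sketch of learning a transition by mutating the rule scheme: for each
# node, overwrite the lookup entry addressed by the aspirant state's input
# pattern with that node's value in the target state, so aspirant -> target.
def learn_transition(wiring, rules, aspirant, target):
    """wiring[i] lists node i's k input nodes; rules[i] is its 2**k lookup table."""
    for i, (inputs, table) in enumerate(zip(wiring, rules)):
        idx = 0
        for j in inputs:
            idx = (idx << 1) | aspirant[j]
        table[idx] = target[i]            # mutate only the addressed entry

# Example with a tiny 4-node, k=2 network (wiring and rules chosen arbitrarily).
wiring = [[1, 2], [2, 3], [3, 0], [0, 1]]
rules = [[0, 1, 1, 0], [1, 0, 0, 1], [0, 0, 1, 1], [1, 1, 0, 0]]
aspirant, target = [1, 0, 1, 1], [0, 1, 1, 0]
learn_transition(wiring, rules, aspirant, target)

def step(state):
    return [rules[i][(state[w[0]] << 1) | state[w[1]]] for i, w in enumerate(wiring)]

print(step(aspirant) == target)           # True: aspirant now maps to target
```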
Chapter 5 goes on to discuss RBN as applied to model genomic
regulatory networks, where cell types are explained as attractors or
"frozen skeletons" in the dynamics of the network.
DDLab
is being used
as a simulation platform for these models, and to extract various measures
distinguishing dynamics between order and chaos. The results and methods
are described. This is work in progress in collaboration with Stuart Kauffman
and others (Harris et al 1997, Somogyi
and Sniegoski 1996).
The thesis concludes with an overview of future work and open questions.
Selected References
Askenazi,M., (1996) GeneTool software.
Derrida,B., and D.Stauffer, (1986) "Phase Transitions in
Two-Dimensional Kauffman Cellular Automata", Europhys.Lett. 2, 739.
Harris,S., A.Wuensche, S.Kauffman and B.Sawhill, (1997, paper in
progress, provisional title) "Are Eukaryotic Cells at the Edge
of Chaos".
Kauffman,S.A., (1969) "Metabolic stability and epigenesis in
randomly constructed genetic nets", Journal of Theoretical
Biology, 22, 437-467.
Kauffman,S.A., (1984) "Emergent properties in random complex
systems", Physica 10D, 146-156.
Langton,C.G., (1990) "Computation at the Edge of Chaos: Phase
Transitions and Emergent Computation", Physica D 42, 12-37.
Myers,J.E., (1997) "Random Boolean Networks - Three Recent
Results", to be appear in Complexity.
Somogyi,R., and C.Sniegoski, (1996) "Modeling the Complexity of
Genetic Networks: Understanding Multigenetic and Pleiotropic
Regulation", Complexity,Vol.1/No.6,45-63.
Wolfram,S., (1984a) "Universality and complexity in cellular
automata", Physica D, vol 10D, 1-35.
Wuensche,A., and M.J.Lesser. (1992a)
"The Global Dynamics of Cellular Automata:
An Atlas of Basin of Attraction Fields of
One-Dimensional Cellular Automata", Santa Fe Institute Studies
in the Sciences of Complexity, Addison-Wesley.
Wuensche,A. (1993a) "The Ghost in the Machine; Basin of Attraction
Fields of Random Boolean Networks", CSRP 281, University of
Sussex, published in Artificial Life III (1994), ed C.G.Langton,
Santa Fe Institute Studies in the Sciences of Complexity,
Addison-Wesley.
Wuensche,A., (1994a) "Complexity in One-D Cellular Automata;
Gliders, Basins of Attraction and the Z Parameter", Cognitive
Science Research Paper 321, Univ. of Sussex.
Wuensche,A., (1994b) "The Emergence of Memory", Cognitive
Science Research Paper 346, Univ. of Sussex, published in
Towards a Science of Consciousness, (1996), eds. S.R.Hameroff,
A.W.Kaszniak, A.C.Scott, MIT Press.
Wuensche,A.,(1996)
"Discrete Dynamics Lab", interactive
graphics software for investigating discrete dynamical networks
including their attractor basins.
DDLab home page: www.santafe.edu/~wuensch/ddlab.html