Discrete Dynamics Lab

About DDLab
- from chapters 1 and 2 of the new manual -

Introduction

DDLab is an interactive graphics program for researching the dynamics of finite binary networks, from cellular automata to random Boolean networks and beyond, including their attractor basins. The program is relevant to the study of complexity, emergent phenomena, neural computation, and aspects of theoretical biology such as modeling gene regulatory networks, and to the study of discrete dynamical networks in general in a variety of fields.

Networks can be created with any architecture, from cellular automata to random Boolean networks. Cellular automata have a homogeneous rule and nearest-neighbor connections, whereas random Boolean networks have arbitrary connections, and rules which may differ at each site. The results in [Wuensche 1992-2000] may be reproduced with DDLab.

Source code and platforms

The DDLab source code is written in C. The program is maintained for the following platforms: DOS/PC, UNIX/XWindows-Sun OS 4.1 and Solaris, Linux, and Irix SGI. DDLab has been ported to hpux/X11/HP by users. This manual covers all versions, which function in essentially the same way, though aspects of the graphics presentation might be slightly different.

Discrete dynamical networks

A discrete dynamical network in DDLab can be imagined as a software simulation of a collection of light bulbs which transmit information to each other about their on/off state, and turn on or off according to the arriving signals. In more abstract terminology, the network is made up of elements or ``cells'', connected to each other by directed links or ``wires'', where a wire has an input and an output terminal. A cell takes on a value of either 0 or 1, and transmits this value down its output wires. Its value is updated as a function of the values on its input wires. Updating is usually done in parallel, in discrete ``time-steps'', but may also be sequential in a predetermined order.
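
As a rough illustration of this scheme, here is a minimal C sketch of one synchronous time-step. The names and array layout (state, wiring, ruletab) are hypothetical, not DDLab's actual internals:

    #define N 8                               /* network size, for illustration */
    #define K 3                               /* inputs per cell */

    /* One synchronous time-step: every cell reads the values on its K
       input wires, packs them into an index, and looks up its next
       value in its own rule table. */
    void update(int state[N], int wiring[N][K], int ruletab[N][1 << K])
    {
        int next[N];
        for (int i = 0; i < N; i++) {
            int index = 0;
            for (int j = 0; j < K; j++)       /* pack the input values into an index */
                index = (index << 1) | state[wiring[i][j]];
            next[i] = ruletab[i][index];      /* the cell's rule decides its next value */
        }
        for (int i = 0; i < N; i++)           /* all cells change together, in parallel */
            state[i] = next[i];
    }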

This is the system in a nutshell. It remains to set up the network according to the various parameters described in the sections below.

DDLab is able to create these networks, and graphically represent and analyze both the networks themselves and the dynamics resulting from the changing patterns of 0s and 1s as the complex feedback web unfolds.

The networks, whether cellular automata or random Boolean networks, may be set up in one, two or three dimensions, with periodic boundary conditions. The neighborhood (or ``pseudo-neighborhood'') size, k, may vary from 0 to 13, and the network may have a mix of k sizes. Network wiring can also be set up to correspond to a regular n-dimensional hypercube, where k=log2(n). Network updating may be sequential as well as parallel, noisy as well as deterministic.
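
For instance, hypercube wiring follows directly from the cell indices: each cell connects to the cells whose index differs from its own in exactly one bit. A sketch, with illustrative names rather than DDLab's own:

    #include <stdio.h>

    /* Hypercube wiring: with n a power of two, cell i receives one
       input from each of the k = log2(n) cells whose index differs
       from i in exactly one bit. */
    void hypercube_wiring(int n, int k, int wiring[][13])
    {
        for (int i = 0; i < n; i++)
            for (int b = 0; b < k; b++)
                wiring[i][b] = i ^ (1 << b);  /* flip bit b of the cell's index */
    }

    int main(void)
    {
        int wiring[8][13];                    /* n=8 cells, k=log2(8)=3 inputs each */
        hypercube_wiring(8, 3, wiring);
        for (int b = 0; b < 3; b++)
            printf("cell 5, input %d <- cell %d\n", b, wiring[5][b]);
        return 0;
    }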

``Cellular automata'' is henceforth abbreviated to ``CA'', and ``random Boolean networks'' to ``RBN''.

Space-time patterns and attractor basins

DDLab has two alternative ways of looking at network dynamics: local dynamics, running the network forwards, and global dynamics, which entails running the network backwards.

Running forwards (local dynamics) generates the network's space-time patterns from a given initial state. Many alternative graphical representations (and methods for gathering/analyzing data) of space-time patterns are available to illustrate different aspects of local network dynamics, including ``filtering'' to show up emergent structures more clearly.

Running ``backwards'' (global dynamics) generates multiple predecessors rather than a trajectory of unique successors. This procedure reconstructs the branching sub-tree of ancestor patterns rooted on a particular state. States without predecessors may be disclosed, the so-called ``garden-of-Eden'' states, the ``leaves'' of the sub-tree. Sub-trees, basins of attraction (with a topology of trees rooted on attractor cycles), or the entire basin of attraction field (referred to collectively as ``attractor basins'') can be displayed as directed graphs in real time, with many presentation options, and methods for gathering/analyzing data. The attractor basins of ``random maps'' may be generated, with or without some bias in the mapping.

Categorization

It can be argued that attractor basins represent the network's ``memory'' by their hierarchical categorization of state-space. Each basin is categorized by its attractor and each sub-tree by its root. Learning/forgetting algorithms allow attaching/detaching sets of states as predecessors of a given state by automatically mutating rules or changing connections. This allows sculpting the basin of attraction field to approach a desired scheme of hierarchical categorization. More generally, preliminary ``inverse problem'' or ``reverse engineering'' algorithms are included to find the network that satisfies a given set of transitions.

Network size limitations

Whereas large networks may be run forward to look at space-time patterns, or backward to look at subtrees, the size is limited when generating the entire basin of attraction field, given that state-space grows exponentially with system size $n$. The program's upper limit for basin of attraction fields is n=31 (lower in practice). Otherwise the limit is n=65025, based on the maximum size of a square 2d network, 255x255, where $n$ can be represented by an unsigned short int. This applies to space-time patterns (in 1d, 2d and 3d), and to single basins or sub-trees, though in practice much smaller sizes are appropriate for attractor basins. Larger sizes may be tried, but may impose unacceptable time, memory or display constraints.
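
To make the limits concrete, here is a small sketch of one possible packed state encoding for n <= 31, not necessarily the encoding DDLab uses internally:

    #include <stdio.h>
    #include <stdint.h>

    /* Why n <= 31 for basin of attraction fields: a whole network
       state packs into a single 32-bit word (bit i holding cell i's
       value), and the field must visit all 2^n states. */
    int main(void)
    {
        int n = 20;
        uint32_t state = 0;
        state |= (uint32_t)1 << 3;            /* set cell 3 to 1 */
        int cell3 = (state >> 3) & 1;         /* read cell 3 back */
        printf("cell 3 = %d, state-space = %lu states\n",
               cell3, (unsigned long)1 << n);
        return 0;
    }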

Networks with disordered or ``chaotic'' dynamics, with low in-degree or branchiness in their subtrees, allow backwards computation of much larger networks than is possible for ordered dynamics, which have high in-degree. For CA, rules giving chaotic dynamics have a high Z parameter; rules giving ordered dynamics have a low Z parameter [Wuensche 1999].

For RBN, running backwards generally imposes a greater computational load than for CA.

Parameters and options

DDLab is an applications program; it does not require writing code. The network's parameters, and the graphics display and presentation, can be very flexibly set, reviewed and altered from DDLab's graphical user interface. Changes can be made on-the-fly, including changes to rules, connections, current state, scale, and alternative presentations highlighting different properties of space-time patterns. Networks of whatever dimension can be interchangeably represented in 1d, 2d, and 3d, as well as in a 3d isometric projection. The latter is especially useful for showing the space-time patterns of 2d networks such as Conway's ``game-of-Life'' [Conway 1982]. The network architecture, states, data, and the screen image can be saved and loaded in a variety of tailor-made file formats.

Measures and data

Various quantitative, statistical and analytical measures and data on both forward dynamics and attractor basin topology are available in DDLab, as well as various global parameters for rules and network architecture. The measures and data can be shown graphically as well as numerically.

Summary of DDLab functions

The following is a descriptive summary of DDLab's functions. For detailed information refer to the new DDLab manual.

DDLab's graphical user interface

DDLab's graphical user interface allows setting, viewing and amending network parameters by responding to prompts or accepting defaults. The prompts present themselves in a main sequence and also in a number of context-dependent prompt windows. You can backtrack to previous prompts in the sequence, and in some cases skip forward. A flashing cursor indicates the current prompt. ``Return'' (or the left mouse button) steps forward through the prompts, ``q'' (or the right mouse button) backtracks, or interrupts a run.

The neighborhood k or k-mix

The size of the ``neighborhood'', the number of inputs each cell receives, k, can vary from 0 to 13. k can be homogeneous, or there can be a mix of k-values in the network. The k-mix may be set and modified in a variety of ways, including defining the proportions of different k's to be allocated at random in the network. A k-mix may be saved/loaded from a file, but is also implicit in the wiring scheme.
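
A sketch of one way such a proportional allocation could work, with hypothetical names and far simpler options than DDLab actually offers:

    #include <stdlib.h>

    /* Allocate a k-mix at random according to given proportions, e.g.
       props = {0.5, 0.3, 0.2} over kvals = {3, 5, 9}. */
    void allocate_kmix(int n, const int kvals[], const double props[],
                       int nk, int k_of[])
    {
        for (int i = 0; i < n; i++) {
            double r = (double)rand() / RAND_MAX;
            double acc = 0.0;
            k_of[i] = kvals[nk - 1];          /* fallback if rounding leaves a gap */
            for (int j = 0; j < nk; j++) {
                acc += props[j];
                if (r <= acc) { k_of[i] = kvals[j]; break; }
            }
        }
    }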

Wiring

The network's wiring scheme (i.e. its connections) has default settings for regular CA (for 1d, 2d and 3d) for each neighborhood size, k=0 to 13 (see the manual chapter on the local neighborhood and network geometry). Wiring can also be set at random (non-local wiring), with a wide variety of constraints and biases, or by hand. A wiring scheme can be set and amended just for a predefined sub-network within the network, and may be saved/loaded from a file.

In most cases regular 2d wiring defines a square grid on the torus, and includes the von Neumann and Moore neighborhoods of 5 and 9 cells. However, the 6- and 7-cell regular 2d neighborhoods are wired to define a triangular grid. Regular 3d wiring defines a cubic grid with periodic boundary conditions.
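
As an illustration, the 9-cell Moore neighborhood on the torus can be indexed as in the following sketch (the ordering is illustrative; DDLab's own layout may differ):

    /* The 9-cell Moore neighborhood of cell (x,y) on a width*height
       torus (periodic boundaries), as in regular 2d wiring. */
    void moore_neighborhood(int x, int y, int width, int height, int nbr[9])
    {
        int m = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                int nx = (x + dx + width) % width;    /* wrap around the torus */
                int ny = (y + dy + height) % height;
                nbr[m++] = ny * width + nx;           /* row-major cell index */
            }
    }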

Non-local wiring can be constrained in various ways, including confinement within a local patch of cells with a set diameter in 1d, 2d and 3d. Just part of the network can be designated to accept a particular type of wiring scheme, for example rows in 2d or layers in 3d. The wiring can be biased to connect designated rows or layers.

The network parameters can be displayed and amended in a 1d, 2d or 3d graphic format, in a ``spread sheet'', or as a network graph which can be rearranged in various ways, including dragging nodes with the mouse.

Rules

A network may have one homogeneous rule as for CA, or a rule-mix as for RBN. The rule-mix can be confined to a subset of (selected) rules. Rules (and totalistic codes) may be set and modified in a wide variety of ways: in decimal, in hex, as a rule-table bit pattern (using the mouse), at random, or loaded from a file. The ``game-of-Life'', ``majority'', and other predefined rules or rule biases can be selected.

A rule scheme can be set and amended just for a predefined sub-network within the network, and may be saved/loaded from a file.

Rules may be changed into their equivalents (by reflection and negative transformations), and transformed into equivalent rules with larger or smaller neighborhoods. Rules transformed to larger neighborhoods are useful to achieve finer mutations. Rule parameters lambda and Z, and the frequency of canalizing inputs in a network can be set to any arbitrary level.
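
Two of these equivalence transformations are easy to sketch for a k-input rule table; the code below is illustrative rather than DDLab's implementation:

    /* Equivalence transformations on a rule table tab[] of length 2^k:
       the negative transform (complement all inputs and the output)
       and reflection (reverse the neighborhood order). */
    void negate_rule(int k, const int tab[], int out[])
    {
        int mask = (1 << k) - 1;
        for (int x = 0; x <= mask; x++)
            out[x] = 1 - tab[mask & ~x];      /* T'(x) = NOT T(NOT x) */
    }

    void reflect_rule(int k, const int tab[], int out[])
    {
        for (int x = 0; x < (1 << k); x++) {
            int r = 0;
            for (int b = 0; b < k; b++)       /* reverse the k neighborhood bits */
                r |= ((x >> b) & 1) << (k - 1 - b);
            out[x] = tab[r];
        }
    }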

Initial network state, seed

An initial network state (the seed) is required to run the network forward and generate space-time patterns. A seed is also required to run backwards to generate a sub-tree or single basin. A basin of attraction field does not require a seed. As in setting a rule, there is a wide variety of methods for defining the seed: in decimal or hex, as a bit pattern in 1d, 2d or 3d, at random (with various constraints or biases), or loaded from a file. The bit pattern method is a mini drawing program, using the mouse (or keyboard) to set cell values (0,1), particularly useful for 2d and 3d.

Networks of sub-networks

It's possible to create a system of independent or weakly coupled sub-networks within the base network, either directly, or by saving smaller networks to a file, then loading them at appropriate positions in the base network. Thus a 2d network can be tiled with sub-networks, and 1d, 2d or 3d sub-networks can be inserted into a 3d base network. The parameters of the sub-networks can be totally different from the base network, provided the base network is set up appropriately, with sufficient ``memory'' to accommodate the sub-network. For example, to load an RBN into a CA, the CA may need to be set up as if it were an RBN. To load a mixed-k sub-network into a single-k base network, k in the base network needs to be at least as big as the biggest k in the sub-network. Options are available to easily set up networks in this way. Once loaded, the wiring can be fine-tuned to interconnect the sub-networks. A network can be automatically duplicated to create a total network made up of two identical sub-networks. This is useful to see the difference pattern (or damage spread) between two networks from similar initial states.

Presentation options

Many options are provided for the presentation of attractor basins and space-time patterns. Again, many of these settings can be changed ``on the fly''.

Graphics

In DOS the graphics will start up at a resolution of 640x480 (VGA) but can be started at a higher resolution with a program parameter, or the resolution can be changed later.

In the Unix/Linux version the DDLab window starts up at 1024x768, or a size that is automatically set to comfortably fit on the monitor, and can be resized, moved and iconised in the usual way.

The default background color is black, but can be reset to white either from within the program or with a program parameter.

The text size and spacing is set automatically according to the screen resolution, but can be resized.

Filing and Printing

DDLab allows filing a wide range of filetypes, including network parameters, data, and the screen image. For compatibility with DOS, filenames follow the DOS format: a filename has up to eight characters (starting with a letter) plus a 3-character extension, for example "myfile25.dat". In DDLab, only the first part of the filename is selected, without the extension (or the default filename provided is accepted); the extension is set automatically to identify the file type.

Mutations

A wide variety of network ``mutations'', as well as changes in presentation, can be made, many on-the-fly while the simulation is running.

Quantitative, statistical and analytical measures

Many measures and data on network dynamics are available in DDLab. In most cases this information can be displayed graphically.

Reverse algorithms

There are three different reverse algorithms for generating the pre-images of a network state.

The first two reverse algorithms generate the pre-images of a state directly; the speed of computation decreases with both neighborhood size k and network size. The speed of the third, exhaustive, algorithm is largely independent of k, but is especially sensitive to network size.

The method used to generate pre-images will be chosen automatically, but can be overridden. For example, a regular 1d CA can be made to use either of the two other algorithms for benchmark purposes and for a reality check that all methods agree. The time taken to generate attractor basins is displayed in DDLab. For the basin of attraction field a progress bar indicates the proportion of states in state-space used up so far.
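
The exhaustive idea itself is easy to sketch: run every one of the 2^n states forward once, and read off the pre-images of a target as the states that map onto it. In the illustration below the toy successor() is a k=3 CA (rule 110) on a ring of 10 cells; it stands in for any forward step and is not DDLab's implementation:

    #include <stdio.h>
    #include <stdint.h>

    #define N 10                              /* ring of 10 cells: 2^10 = 1024 states */

    /* Toy forward step: 1d k=3 CA, rule 110, periodic boundaries. */
    static uint32_t successor(uint32_t s)
    {
        uint32_t next = 0, rule = 110;        /* rule-table held as the bits of 110 */
        for (int i = 0; i < N; i++) {
            int l = (s >> ((i + 1) % N)) & 1;
            int c = (s >> i) & 1;
            int r = (s >> ((i + N - 1) % N)) & 1;
            next |= (uint32_t)((rule >> ((l << 2) | (c << 1) | r)) & 1) << i;
        }
        return next;
    }

    int main(void)
    {
        uint32_t target = 0;                  /* find pre-images of the all-0 state */
        for (uint32_t s = 0; s < (uint32_t)1 << N; s++)
            if (successor(s) == target)       /* s reaches target in one step */
                printf("%u -> %u\n", (unsigned)s, (unsigned)target);
        return 0;
    }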

Random map

The random mapping routine creates a list of ``exhaustive pairs'', assigning a successor state at random to each state in state space, possibly with some bias (rules and wiring previously set are ignored). The attractor basins are reconstructed by reference to this random map with the exhaustive testing algorithm. The space of random maps for a given system size corresponds to the space of all possible basin of attraction fields and is the super-set of all other deterministic discrete dynamical systems.
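
A minimal sketch of an unbiased random map, with illustrative names:

    #include <stdlib.h>
    #include <stdint.h>

    /* Build a random map over the 2^n states: each state is simply
       assigned a successor at random (unbiased here; DDLab also allows
       biases, and modulo bias is ignored for brevity). */
    void random_map(int n, uint32_t map[])
    {
        uint32_t states = (uint32_t)1 << n;
        for (uint32_t s = 0; s < states; s++)
            map[s] = (uint32_t)rand() % states;   /* random successor for state s */
    }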

Sequential updating

By default, network updating is synchronous, in parallel. DDLab also allows sequential updating, both for space-time patterns and attractor basins. Default orders are forwards, backwards or a random order, but any specific order can be set out of the $n!$ possible orders for a network of size $n$. The order can be saved/loaded from a file.
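
A sketch of picking one of the $n!$ orders uniformly at random, using a Fisher-Yates shuffle (illustrative, not DDLab's code):

    #include <stdlib.h>

    /* Choose a random sequential update order. In sequential updating,
       cells updated earlier in the order are seen already-changed by
       cells updated later. */
    void random_order(int n, int order[])
    {
        for (int i = 0; i < n; i++)
            order[i] = i;                      /* start from the forwards order */
        for (int i = n - 1; i > 0; i--) {      /* Fisher-Yates shuffle */
            int j = rand() % (i + 1);          /* pick a random slot 0..i */
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }
    }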

Sculpting attractor basins

Learning and forgetting algorithms allow attaching and detaching sets of states as predecessors of a given state by automatically mutating rules or wiring couplings. This allows ``sculpting'' the attractor basin to approach a desired scheme of hierarchical categorization. Because any such change, especially in a small network, usually has significant side effects, the methods are not good at designing categories from scratch, but might be useful for fine-tuning a network which is already close to where it's supposed to be.

When an attractor basin is complete, within the learning routine, a ``target'' state, together with a number of ``aspiring pre-images'' (predecessors), can be selected. These states may be just highlighted in successive mutant attractor basins, or the learning/forgetting algorithms will attempt to attach/detach the aspiring pre-images to/from the target state, and can be set for either rule-table bit-flips or wire moves. In fact the bit-flip method cannot fail. New attractors can be created and sub-trees transplanted. The result of learning/forgetting, including side effects, will be apparent in the new attractor basins. The algorithms, and their implications, are described in detail in the DDLab manual.
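
The bit-flip idea can be sketched as follows, reusing the hypothetical layout of the earlier update sketch. Assuming a rule-mix (one table per cell), the transition can always be imposed, which is why the method cannot fail; every other state presenting the same neighborhoods shifts too, producing the side effects mentioned above:

    #define N 8                               /* as in the earlier update sketch */
    #define K 3

    /* Attach state P as a pre-image of target T: set each cell's
       rule-table entry indexed by P's neighborhood to T's value for
       that cell. Any other state sharing an entry changes with it. */
    void attach_preimage(const int P[N], const int T[N],
                         int wiring[N][K], int ruletab[N][1 << K])
    {
        for (int i = 0; i < N; i++) {
            int index = 0;
            for (int j = 0; j < K; j++)       /* P's neighborhood as seen by cell i */
                index = (index << 1) | P[wiring[i][j]];
            ruletab[i][index] = T[i];         /* flip the entry to T's value */
        }
    }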

More generally, a very preliminary method for reverse engineering a network (also known as the inverse problem) is included in DDLab: reducing the connections in a fully connected network to satisfy an exhaustive map (for network sizes n<=13). The inverse problem is finding a minimal network that will satisfy a full or partial mapping (i.e. fragments of attractor basins such as trajectories).

Hardware and software requirements

Compiled versions of DDLab currently run on the following platforms: DOS/PC, UNIX/XWindows (Sun OS 4.1 and Solaris), Linux, and Irix (SGI).

Compiled versions of DDLab for other platforms may also be available, for example hpux/X11/HP (Rick Riolo (rlriolo@umich.edu) made a hpux/X11/HP port in 1996) and Debian Linux (Valerio Aimale (valerio@biosgroup.com) made a Debian Linux port in 1999). See the DDLab web site for current information.

I am reliably informed that the DOS version of DDLab runs on a Mac using Virtual PC.


References


Last modified: July 2003