
27 Learning, forgetting, and highlighting.

The global dynamics of a network, as represented by its basin of attraction field, sums up the network's repertoire of behaviour: the root of any subtree categorises all the states in that subtree. This hierarchical categorisation implicit in a given network may be seen as its ``content-addressable'' memory in the sense of Hopfield (1982), but with the added notion that every subtree root, not just point attractors, provides a memory category.

For networks with mixed wiring and/or rules, DDLab allows one or more states to be added or deleted as pre-images of a given state. Adding or deleting pre-images is analogous to learning and forgetting. The network architecture is automatically revised to produce the required change, either by moving wires or by mutating rules, using learning algorithms analogous to back-propagation.
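The rule mutation method can be pictured with a short sketch. The C fragment below is not DDLab's source; the names N, K, wiring, rule and nhood_index are hypothetical, and a binary network of N cells, each with its own K-input rule table and wiring list, is assumed. To make state B a pre-image of state A, each cell's rule-table entry indexed by B's neighbourhood is simply forced to the corresponding target bit of A.

    /* A minimal sketch of learning by rule mutation -- not DDLab's source.
       Assumes a binary network of N cells, each with its own K-input rule
       table and wiring list; all names here are hypothetical. */
    #define N 6                      /* network size */
    #define K 3                      /* neighbourhood size */

    int wiring[N][K];                /* wiring[i][j] = cell feeding input j of cell i */
    unsigned char rule[N][1 << K];   /* one K-input rule table per cell */

    /* read cell i's neighbourhood in state s as a K-bit rule-table index */
    int nhood_index(const int *s, int i)
    {
        int idx = 0;
        for (int j = 0; j < K; j++)
            idx = (idx << 1) | s[wiring[i][j]];
        return idx;
    }

    /* make state B a pre-image of state A: for each cell, force the
       rule-table entry that B selects to output the target bit in A */
    void learn_by_rule_mutation(const int *B, const int *A)
    {
        for (int i = 0; i < N; i++)
            rule[i][nhood_index(B, i)] = (unsigned char)A[i];
    }

Note that any pre-existing pre-image of A already drives each rule-table entry it selects to the target bit, so those entries are never changed; this is why, as described below, learning by rule mutation cannot forget, while any other state that happens to select a mutated entry becomes an unintended pre-image.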

The rule mutation algorithm is guaranteed to learn a list of aspiring pre-images without forgetting the state's pre-existing pre-images, though states not on the list may also be learnt as pre-images. There is just one specific rule-scheme mutation that achieves this, and it is bound to succeed. By contrast, there are many alternative wire-move mutations that may achieve the desired result, but success is not guaranteed, and pre-existing pre-images may be forgotten.
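A wire-move sketch along the same lines (reusing the hypothetical declarations above) makes the contrast concrete: for each cell whose output on B is wrong, it hunts for a re-attachment of one input wire such that the existing rule table already yields the target bit, and it may fail.

    /* Learning by wire moves, continuing the sketch above -- a hypothetical
       illustration, not DDLab's algorithm verbatim.  Returns 1 on success,
       0 if some cell cannot be fixed by moving a single wire.  Unlike rule
       mutation, success is not guaranteed, and a moved wire may disturb
       other pre-images of A. */
    int learn_by_wire_move(const int *B, const int *A)
    {
        for (int i = 0; i < N; i++) {
            if (rule[i][nhood_index(B, i)] == A[i])
                continue;                     /* cell i already correct */
            int fixed = 0;
            for (int j = 0; j < K && !fixed; j++) {
                int saved = wiring[i][j];
                for (int p = 0; p < N && !fixed; p++) {
                    wiring[i][j] = p;         /* trial move of input j to cell p */
                    if (rule[i][nhood_index(B, i)] == A[i])
                        fixed = 1;            /* keep this wire move */
                }
                if (!fixed)
                    wiring[i][j] = saved;     /* no luck: restore this wire */
            }
            if (!fixed)
                return 0;                     /* no single wire move fixes cell i */
        }
        return 1;
    }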

With both the rule and the wire learning algorithms, side effects are bound to occur elsewhere in the basin of attraction field. In small networks, the revised attractor basins may be re-drawn to show the results and side effects of learning. The attractor basin may be progressively sculpted (and the network adapted) to produce a desired scheme of hierarchical categorisation. In general, forgetting pre-images causes less severe side effects than learning, because only minimal changes to the network architecture are required.
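Forgetting is correspondingly cheap in this sketch: to detach a state B from its successor, flipping a single rule-table entry that B actually uses suffices, a one-bit change to the architecture (again hypothetical, continuing the declarations above).

    /* Forgetting sketch: to detach B as a pre-image of the state it
       currently maps to, flip one rule-table entry that B actually uses --
       a single-bit change, hence the milder side effects. */
    void forget_by_rule_mutation(const int *B, int cell)
    {
        rule[cell][nhood_index(B, cell)] ^= 1;   /* binary rules assumed */
    }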

Note that learning by wire moves requires a network with ``random'' or non-local wiring (see #11.3 to treat local wiring as if it were random). In this context, networks with mixed neighbourhoods are always taken to have non-local wiring. Learning by rule mutation requires a network with mixed rules (see #13.3.1 to set up a rule mix in which all the rules are the same).



