Learning and forgetting algorithms allow sets of states to be attached to, or detached from, a given state as its predecessors by automatically mutating rules or wiring couplings. This makes it possible to ``sculpt'' the attractor basin towards a desired scheme of hierarchical categorisation. When an attractor basin run is complete, various ``learning'' windows allow a ``target'' state to be set, together with a number of ``aspiring pre-images'' (predecessors), which may be selected in various ways, including by Hamming distance from the target. The learning/forgetting algorithms then attempt to attach the aspiring pre-images to the target state, and can be set to work by either rule-table bit-flips or wire moves. The bit-flip method cannot fail, because the rule-table entry addressed by each cell's neighbourhood in an aspiring pre-image can always be set to output the target's value for that cell. In this way, new point and cyclic attractors can be created and sub-trees transplanted. The result of learning/forgetting, including side effects, will be apparent in the new attractor basins. The algorithms and their implications are described in (Wuensche 1994a).
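The bit-flip method can be sketched as follows. This is a minimal illustration, not the actual DDLab implementation: it assumes a random Boolean network of $n$ cells, each with $k$ wired inputs and its own $2^k$-entry rule table, and hypothetical helper names (`step`, `learn_bit_flip`). To attach a pre-image, each cell's rule-table bit addressed by that pre-image's neighbourhood is simply overwritten with the target's value, which is why the method cannot fail; the side effects arise because other states sharing those table entries may change their successors too.

```python
import random

def step(state, wiring, rules):
    """Synchronous update: each cell reads its k wired inputs,
    forms a binary neighbourhood index, and looks up its next
    value in its own rule table."""
    nxt = []
    for wires, table in zip(wiring, rules):
        idx = 0
        for w in wires:
            idx = (idx << 1) | state[w]
        nxt.append(table[idx])
    return nxt

def learn_bit_flip(pre_image, target, wiring, rules):
    """Attach pre_image as a predecessor of target: for each cell,
    set the single rule-table bit addressed by the cell's
    neighbourhood in pre_image to the target's value for that cell.
    Mutating these entries may detach other pre-images that share
    them -- the side effects visible in the new attractor basins."""
    for i, (wires, table) in enumerate(zip(wiring, rules)):
        idx = 0
        for w in wires:
            idx = (idx << 1) | pre_image[w]
        table[idx] = target[i]

# Example: a small random network with an aspiring pre-image.
random.seed(0)
n, k = 8, 3
wiring = [[random.randrange(n) for _ in range(k)] for _ in range(n)]
rules = [[random.randrange(2) for _ in range(2**k)] for _ in range(n)]
pre = [random.randrange(2) for _ in range(n)]
target = [random.randrange(2) for _ in range(n)]

learn_bit_flip(pre, target, wiring, rules)
assert step(pre, wiring, rules) == target  # pre is now a predecessor of target
```

The wire-move variant (relocating input couplings instead of flipping table bits) is more constrained and may fail for a given aspiring pre-image, which is consistent with the text singling out the bit-flip method as the one that cannot fail.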