Entering ``l'' at #14.2 or #14.3 starts the learning/forgetting routine for attaching/detaching sets of states as pre-images of a target state. The network's wiring and/or rule scheme is automatically amended to achieve the required transitions between states. The learning routine is described in detail in #27.
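The mechanism of attaching a pre-image by amending the rule scheme can be sketched as follows. This is a minimal illustration, not the program's actual routine: it assumes a random Boolean network where each cell has its own wiring and rule lookup table, and "learning" simply sets each cell's rule-table entry, for the neighborhood the pre-image presents to that cell, to the target cell's value.

```python
import random

K = 3  # assumed neighborhood size: K inputs per cell

def random_net(n, k=K, seed=0):
    """Random Boolean network: per-cell wiring plus per-cell rule lookup tables."""
    rng = random.Random(seed)
    wiring = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    rules = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return wiring, rules

def step(state, wiring, rules):
    """Synchronous update: each cell reads its K inputs and applies its rule."""
    out = []
    for w, r in zip(wiring, rules):
        idx = 0
        for j in w:                      # pack the neighborhood into a LUT index
            idx = (idx << 1) | state[j]
        out.append(r[idx])
    return out

def learn(pre, target, wiring, rules):
    """Attach `pre` as a pre-image of `target` by amending each cell's
    rule-table entry for the neighborhood that `pre` presents to it."""
    for i, (w, r) in enumerate(zip(wiring, rules)):
        idx = 0
        for j in w:
            idx = (idx << 1) | pre[j]
        r[idx] = target[i]               # rule-scheme amendment

n = 12
wiring, rules = random_net(n)
rng = random.Random(1)
pre = [rng.randrange(2) for _ in range(n)]
target = [rng.randrange(2) for _ in range(n)]
learn(pre, target, wiring, rules)
assert step(pre, wiring, rules) == target   # pre now maps to target
```

Note that each amended rule-table entry also redirects every other state that presents the same neighborhood pattern to a cell, which is one source of the side effects of learning.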
Learning is also invoked in the context of ``sculpting attractor basins'' (see #27). In this case, the results and side effects of learning are displayed when attractor basins are generated, but this places a limit on the size of the network (see #6.3).
Invoking the routine at this point in the program, or when running networks ``forwards only'', allows much larger networks to ``learn''.