An issue related to the above is this: could one get more computational
power not by using aggregates of quantum phenomena, but by using
individual quantum events? In the network I have described, the
mapping is from ensembles of quanta hitting the plate to
outputs. One could instead imagine a faster-scale form of computation
in which individual quanta hitting the plate are interpreted as
outputs. As part of an inherently stochastic process, each quantum
hitting the plate conveys not determinate information about the slit
configuration, but probabilistic information: what the likely
configuration of the slits is. This kind of information may be used on
its own during actual computation, but during learning, it seems most
likely that many samples will have to be used in order for the network
to learn the proper statistics, regardless of whether the weight
changes occur after each quantum or only after an ensemble.
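The idea that each individual quantum conveys only probabilistic information about the slit configuration can be sketched as a toy Bayesian update. Everything here is invented for illustration: the two candidate slit configurations, their hit-position distributions, and the number of samples are all assumptions, not part of the original setup. The sketch shows how each detected quantum shifts a posterior over configurations, and why many samples are needed before the statistics become reliable.

```python
import random

# Toy model (illustrative only): two hypothetical slit configurations,
# each inducing a different discrete distribution over positions on the
# detection plate. A single quantum hit is ambiguous; accumulated hits
# pin down the configuration probabilistically.
POSITIONS = list(range(5))

# Assumed likelihoods P(position | configuration) -- purely invented.
LIKELIHOOD = {
    "one_slit": [0.05, 0.20, 0.50, 0.20, 0.05],  # single central peak
    "two_slit": [0.30, 0.10, 0.20, 0.10, 0.30],  # fringe-like spread
}

def update(posterior, position):
    """Bayes update of P(configuration) after one quantum hits `position`."""
    unnorm = {c: posterior[c] * LIKELIHOOD[c][position] for c in posterior}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

random.seed(0)
posterior = {"one_slit": 0.5, "two_slit": 0.5}  # uninformative prior
true_config = "two_slit"

# Each quantum alone is weak evidence; many samples are needed before
# the posterior reflects the true statistics of the configuration.
for _ in range(200):
    hit = random.choices(POSITIONS, weights=LIKELIHOOD[true_config])[0]
    posterior = update(posterior, hit)

print(posterior)  # posterior mass should strongly favor "two_slit"
```

The per-quantum update here stands in for computation on individual events, while the loop over 200 detections plays the role of the ensemble needed for learning: whether the posterior (or, in the network, the weights) is updated after each quantum or only after a batch, the statistics only emerge from many samples.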