It must be made clear that adopting the transparent reading of the computationalist claim is not just semantic gerrymandering. Some might suspect that all that is being proposed is a change in the meaning of ``computation'' (and thus ``computationalism'') in a post hoc way that saves the ``cognition is computation'' claim, but only at the cost of either circularity or changing the subject. In this section I want to dispel such suspicions.
To see that the transparent reading is not circular, one can contrast it with a move that would be circular: claiming, on the one hand, that cognition should be explained using computational concepts, while also claiming that computational concepts are whatever concepts give the best account of computational systems, including cognitive systems. If you use cognitive systems to help define what computational concepts are, then the computationalist claim loses its bite: it becomes circularly trivial.
But notice that this is not what is being done on the transparent reading. A distinction is made between computers and cognizers: the former are not defined in terms of the latter. Rather, the transparent reading assumes that we have some pre-theoretical, ostensive access to the phenomenon of computation (what PCs do, what iMacs do, etc.); likewise for the phenomenon of cognition (what a person playing chess does, what I do when I try to find the restaurant where we agreed to meet, etc.). The transparent computationalist claim is that whatever concepts give the best account of this stuff (gesturing toward the computational phenomena) also give the best account of that stuff (gesturing toward the cognitive phenomena).2
In order for computationalism to be correct, it doesn't have to be the case that the set of concepts eventually arrived at does justice to everything in the initial, pre-theoretic cognitive ostension; nor, for that matter, to everything in the initial, pre-theoretic computational ostension. The best account, it seems to me, is the simplest one that covers as much of the ostension as possible. It might turn out that the best theory that we can get rejects as non-computational some things that we pre-theoretically took to be computational (calculators, perhaps). And it might turn out that some of the things that we pre-theoretically thought were cognitive turn out not to be (or things that we thought were not cognitive actually are) because the best account of the central cases implies that they are not (or are).
This idea of letting the ``best'' account do violence to, or override, our pre-theoretical intuitions might appear to contradict the ethic behind the transparent reading, which I said was to give one's ``primary allegiance to the phenomenon of computation, rather than to any particular theory of it''. If we discard whatever bits of the pre-theoretic notion of computation (or cognition) don't fit in with our theory, how can we say that our loyalties are with the territory and not the map?
This point is well taken. The approach that is being rejected here is one in which the defining theoretical concepts (e.g., that of the Turing machine) are fixed in advance, with the domain of empirical interest then being whatever aspect of the world is best understood in terms of Turing machines. But to reject this theoretical dogmatism does not require one to take up its opposite, empirical or intuitional dogmatism. To understand that theory is the map, and that it is in tension with experience or intuition, does not force one to see the latter as the territory. Instead, one can see pre-theoretical intuitions and experience as just more map, albeit of a kind that stands in some sort of epistemological opposition to theory. The territory is neither theory nor intuition, but is constructed out of a dialectical interplay between the two. Transparent computationalism, then, asks us to let our theoretical notions of computation be driven by our experience, but also recognises that what we experience as computational will and should change in response to our changing theory of computation.
Here's a sample trace of that dialectic in action, using a perhaps over-familiar, but non-computational, example. Our pre-theoretical intuitions were that whales are fish, so they were entered into the pool of phenomena against which we tested theories of fish. Inasmuch as we did that, we were not being theoretically dogmatic - we were letting our intuitions have a say (contrast this with the Scholastic or Rationalist who may have tried to derive the classes of animals and their nature from first principles). But once a proper, successful theory of fish was settled on, it determined the extension of interest, excluding whales, since they don't meet the criteria for fish under the adopted theory (specifically, they don't use gills to extract oxygen from water).3 In allowing whales to be so excluded, we are not being empirically or phenomenally dogmatic; we are allowing for virtuous conceptual change to occur, rather than insisting on the ways of carving up the world that the ancients (or children, or our naive selves) had. Eventually, this theoretically-driven extension may become our intuitive, common-sense way of looking at the domain of fish (and mammals). We may even call it ``pre-theoretic'', despite the fact that it was historically shaped by theory. This new intuitive notion of fish becomes the tribunal for our theories of fish, putting pressure on those theories to do justice to the phenomenon of fish as now understood. Thus the process iterates.
The point is that this parable about fish also applies to computation (with pocket calculators perhaps playing the role of the whales). Just as our intuitive notions of fish (and gold, and water, and just about everything else) have driven yet been altered by our theories of those natural kinds, so also shall (and should) our intuitive notions of computation constrain and be constrained by our theories of computation.
With all that in place, it is now a relatively easy matter to respond to the other worry stated at the beginning of this section: Is transparent computationalism post hoc and just changing the subject? In a sense, yes. But in a more important sense, no: it is no more a change of subject than scientific progress in general is. To reiterate with a less fishy example, it's true that we now mean something different by, e.g., gold than the ancients did: they included just about any gold-coloured metal in that category. But the best way to understand scientific progress is not to see the ancients as having been right about their notion of gold, but rather as having been wrong about gold itself, of which we now have a better understanding [10]. It is gold itself that unites ancient theorising with current theorising - if we couldn't unite the two, by saying that we and the ancients were striving for an understanding of the same thing, then we would have no grounds on which to say that our account was an improvement on theirs. We would have to say that we had just changed the subject. But today's chemists are theorising about the same stuff that Archimedes was, even though what they are thinking of necessarily has atomic number 79 and Archimedes didn't even have the concept of atomic number. So too, future accounts of computation may indeed be accounts of computation, the very same phenomenon that we are trying to understand today with the notions of recursive functions, algorithms and the like, even if it is determined that such notions are not constitutive of computation.
However, this only establishes that transparent computationalism is possible; it does not guarantee that just any notion constrained by some future theory T can be considered a successor to the current notion of computation. It might instead be that the notions in T eliminate the notion of computation; or T may just be a different, unrelated theory. What must be true of T in order for it to be about the same ostensively individuated phenomena as current theories of computation? That is, what must be true of T in order for it to be an attempt at a theoretical account of what we pre-theoretically take to be computation? And what must be true of the notions T yields for them to be refinements of, rather than usurpers of, the notion of computation? For example, a typical tactic when taking the transparent computationalist line is to say something like ``Yes, much of current computation is essentially digital. But there is some computation, such as what goes on in connectionist networks, which is not digital. So criticisms of computationalism that assume digitality are misplaced''. But this assumes that an account covering both connectionist networks and digital computation should be considered an account of computation, rather than an account of some category which includes computation and other phenomena besides. What justifies this?
To some extent, it can't be justified - at least not in any theoretical manner. The determinants of the successor relation between theories will have to be to some extent extra-theoretical. By its non-conceptualised nature, the pre-theoretic view of a domain is, to some extent, non-rational. But it is not entirely non-rational, and it is certainly normative. There are several non-theoretical reasons why we should include a new phenomenon, ostensively individuated, in a previously existing pre-theoretic class. For example, we might pre-theoretically call some new device (the ``Watt machine'') a computer, despite the fact that it is analogue and non-algorithmic, because it was produced by Intel (perhaps even by the same scientists and engineers who produced the Pentium III), requires many of the same materials and production procedures as does the Pentium III, can be used to perform tasks that we take, pre-theoretically, to be in the same class as the tasks we use computers for, etc. But again, we must not replace theoretical dogmatism with pragmatic dogmatism. If it turns out that there is no simple unified theory which accounts for both Watt machines and computers, then we would have to deem the Watt machine non-computational, despite its non-theoretical similarities to PCs. But if there is such a comprehensive theory T, the non-theoretical connections between PCs and Watt machines would be sufficient to establish T as a refinement of our previous theories of computation. In such a case, the Watt machine would be confirmed as a computer.
This recognition of non-theoretical constraints on the theory/data dialectic also allows us to answer some other questions. Go back to the parable of the whales and fishes; at the end it was suggested that the theory refinement/intuition refinement cycle iterates indefinitely. But why should it? It seems that one cycle is enough: start out with an intuitive notion, come up with a theory that attempts to do justice to it, and use the best theory to go back and trim off the bits that don't fit well with the theory. How could the theory-tailored intuitions in turn demand a change in the theory which tailored them? The answer, as we have seen, is that theories aren't the only factors shaping our intuitions - the non-theoretical constraints provide perturbations that require theory to be ever ready to respond. Of course, it is an empirical issue whether a stabilised intuition/theory relationship can be found relatively quickly. The answer depends on such factors as the importance of the notion to the power structures in society, its relevance to current technological innovations, the inherent complexity of the theory involved, etc. It is my contention that the importance of computation in contemporary society, the fast rate of change in the technology, and the complexity of the artefacts involved make a quick quiescence unlikely.
So far I have focussed on extending the concept of computation to new cases. But of course a change in the concept of computation might also occur because it is realised that the change would do better justice to paradigmatic examples of computational systems. This provides an even more effective way of using the transparent reading of computationalism to reply to its critics.