Computationalism in the philosophy of mind is the claim that cognition is computation. Although most of the work in cognitive science and artificial intelligence (AI) has been based on this hypothesis, computationalism has always had its opponents, and the criticisms are becoming more frequent and widespread. To the old critiques (e.g., the Gödel/Lucas, phenomenalist, frame problem and Chinese room objections) have been added objections based on dynamics, the dispensability or clumsiness of representation, externalism, universal realisation, the incoherence of internal representation, and the unreality of computational content, to name a few. While some of these objections can be refuted directly, others are more difficult to dismiss: even if one believes them to be ill-founded, something nevertheless seems right about them. How can we continue to hold onto computationalism when it does seem that, e.g., digitality is restrictive, formal symbol manipulation isn't sufficiently world-involving, and Turing machines are universally realisable to the point of vacuity?
The confusion stems, I believe, from an ambiguity in the computational claim itself, at least as I have expressed it in my opening sentence: ``cognition is computation''.1 A distinction should be made between two senses of the claim. One sense (call it the opaque reading) takes computation to be whatever is described by our current computational theory (via the concepts of Turing machines, recursive functions, algorithms, programs, complexity theory, etc.) and claims that cognition is best understood in terms of that theory. The transparent reading, by contrast, has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it. It is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation. The opaque reading is thus a claim about specific current theories, while the transparent reading is a claim about the phenomena of computation and cognition themselves.
Making this distinction allows one to eliminate the confusion posed by some of the criticisms of computationalism. One can agree with the critics that there are features of, say, algorithms that make them unsuitable for understanding every aspect of cognition or mentality. In doing this, one must concede that the opaque reading of computationalism is false, given the central role that algorithms play in our current theory of computation. But one can do that and yet simultaneously (and consistently) maintain the truth of computationalism on its transparent reading, by rejecting the assumption that the best account of computation is along current lines (in this case, as a necessarily algorithmic phenomenon). If the notion of an algorithm, while surely itself computational, need not apply to all cases of computation, then although it is a notion available to computationalists for the explanation of cognition, it is not foisted on them. If cognition is computation, and yet not all computation is algorithmic, then cognition need not be algorithmic. Likewise for other criticisms of computationalism and other aspects of the current theory of computation.
On this analysis, the critics have argued only against the opaque reading of computationalism; they have opposed only the current, formal notion of computation founded on Turing machines and the like. This is understandable, since the formal view of computation is the de facto orthodoxy, and we are still waiting for a non-formal theoretical alternative. But if it turns out that what makes the artefacts of Silicon Valley tick is not best explained in terms of formal computation, then these critics will have nothing to say against the transparent version of the ``cognition is computation'' claim.