The same goes for the Chinese room argument. This argument [12] aims to show that strong AI is false: not all mental states can be had simply by virtue of implementing the right program, since in particular the mental state of understanding Chinese cannot be had simply by implementing the right program. Searle's argument for this was that there is no program such that, if Searle implements it, he will thereby come to understand Chinese - all he will be doing is meaningless symbol manipulation. So if one could get a computer to understand, it would have to be by virtue of something other than the fact that it was implementing the right program P. In computational terms, Searle implementing P and any computer implementing P are identical - so if the computer does actually understand Chinese, it must be at least partially in virtue of a non-computational fact.
The transparent reading of computationalism allows one to resist this conclusion. One could agree that perhaps, according to current theories of computation, Searle and any other system implementing P must be in the same computational states. But it might be that, according to a better theory of computation, there are computational differences between them - differences that current theory dismisses as mere implementation detail. So mental properties might supervene on computational properties in the transparent sense even if they do not supervene on algorithmic properties. Specifically, one could claim that program P when implemented by a non-understander is sufficient for understanding Chinese, but program P when implemented by an understander (such as Searle) is not. If one could also motivate the claim that the distinction ``implemented by an understander vs. not'' is a computational one, then one would have a means of resisting Searle's conclusions.8
A similar move can be used to counter another Chinese problem for AI: Block's claim that although the entire population of China could be linked up to realise some program supposedly sufficient for understanding, it seems absurd to say that the entire nation of China would thereby be having a conversation about the Mets or whatever. If realisations that involve understanders are computationally distinct from those that do not, then one is not committed to saying that a program which is sufficient for understanding English when realised by a conventional computer is also sufficient for such understanding when implemented by the population of China.
The problem with trying to make the transparency move against either Searle's or Block's objection is that we have no independent grounds to suppose that ``implemented by an understander vs. not'' will be an interesting, generalization-supporting, explanation-providing distinction in millennial computer science. Thus appeal to it is merely post hoc, or a version of the Many Mansions reply [12]. As with the diagonalization arguments, it is probably best not to reply to Searle and Block using the transparency reading of computationalism, at least not in this way - some other response is required. Fortunately, many are at hand.
But perhaps transparent computationalism isn't finished here. Harnad [6], convinced by Searle's argument and unimpressed by the many attempts to rebut it, has conceded that more than symbol processing is required for cognition - symbols need to be grounded in non-symbolic interactions with the world. If Harnad is right, there might be trouble ahead for opaque computationalists, who are typically thought to take computational properties to be independent of computer-world relations. But a transparent computationalist need not be worried, on one condition: that non-symbolic interactions with the world are seen to be crucial to understanding uncontroversially computational systems as well.