What might dynamical intentionality be, if not computation?
Commentary on van Gelder's ``The Dynamical Hypothesis In Cognitive Science''
Ronald L. Chrisley
School of Cognitive and Computing Sciences
University of Sussex
Falmer BN1 9QH
United Kingdom
ronc@cogs.susx.ac.uk
http://www.cogs.susx.ac.uk/users/ronc
(Words within "*" are to be italicized)
ABSTRACT
I make five points. 1) I note that van Gelder's concession that the
Dynamical Hypothesis is not in opposition to computation in general
does not jibe well with his anti-computational rhetoric. 2) I dispute
his claim that dynamical systems allow for non-representational aspects
of cognition in a way in which digital computation cannot. 3) I
distinguish two senses of the ``cognition is computation'' claim, and
point out that van Gelder argues against only one of them. 4)
I suggest that dynamical systems as characterized in the target
article suffer from the same problems of universal realizability as
formal notions of computation do, but differ in that there is no
solution available for them. 5) I show that the Dynamical Hypothesis
cannot tell us what cognition is, since instantiating a particular
dynamical system is neither necessary nor sufficient for being a
cognitive agent.
Given van Gelder's concession (in sections 6.3, 6.5 and 6.10) that he
is not opposing computation in general, just digital computation in
particular, I have no disagreement with his main point. It is indeed
an open empirical issue which kind of computation best characterizes
natural cognitive agents. However, I do object to the misleading way
in which he goes about stating this. Yes, ``research into the power of
dynamical systems is an interesting new branch of computation theory''
(page 15). But with that considerable concession in mind, van Gelder
shouldn't have thought he was rejecting effectiveness; he was only
pointing out that processes which are quantitative (at the ``highest
level'') can be effective -- effectiveness need not imply digitality.
And he shouldn't have named the view he opposes ``the computational
hypothesis'' when his real target is a specific form of digital
computation.
Although van Gelder wisely avoids the anti-representationalism that
has been the focus of some recent dynamical criticisms of
computational accounts of cognition, he does not avoid mentioning
anti-representationalism altogether (section 4.2.3.9). He is mistaken,
however, in thinking that only quantitative systems can accommodate
non-representational aspects of cognition. For example,
Brooks (1992) has famously rejected representations in the
construction of mobile robots which behave intelligently in real time
in the real world, yet his subsumption architectures are not
quantitative, but rather are of the same kind as digital computational
architectures. Perhaps it is right to reserve the term ``computation''
for processes that involve representations. But then there is a
natural superclass of digital computation, let us call it the class of
``digital machines'', which stands in the same relation to digital
computation as dynamical systems stand to dynamical
computation. Despite recent rhetoric, there is no reason to believe
that dynamical systems have any ``non-representational'' advantage over
digital machines.
A distinction should be made between two senses of the claim
``cognition is computation''. One sense (call it the ``opaque
reading'') takes computation to be whatever is described by our current
computational theory, and claims that cognition is best understood in
terms of that theory. The transparent reading, by contrast, has its
primary allegiance to the phenomenon of computation, rather than to
any particular theory of it. It is the claim that the best account of
cognition will be given by *whatever theory turns out to be the best
account of the phenomenon of computation*. The opaque reading is a
claim about specific theories, while the transparent claim is a claim
about the phenomena of computation and cognition themselves. The
``cognition is computation'' claim can be true on the transparent
reading, even if cognition isn't best understood in terms of, say,
formal operations, just as long as such operations turn out not to be
good accounts of what makes actual computers work. I'm one of those
people who believe formal notions of computation to be inadequate
theoretical accounts of actual computational practice and artifacts
(what Brian Smith (1996) has called ``computation in the wild'').
van Gelder, however, insists (in section 6.5)
on opposing himself to the formal notion of computation. This is
understandable, since the formal view of computation is the de facto
orthodoxy, and we are still waiting for a non-formal theoretical
alternative. But if it turns out that what makes the artifacts of
Silicon Valley tick is not best explained in terms of formal
computation, then van Gelder's discussion will have nothing to say
against the transparent version of the ``cognition is computation''
claim.
But van Gelder's focus on formality in characterizing his opponent
seems to have the unfortunate consequence of causing him to
characterize dynamical systems as formal also. A recurring criticism
of the computational approach is that its formality renders it
universally realizable -- Putnam (1988) and Searle (1990) argue that
any physical system can be interpreted as realizing any formal
automaton. This has the consequence that an account of cognition
cannot be in terms of formal computation, since any particular formal
structure, the realization of which is claimed to be sufficient for
cognition, can be realized by any physical system, including those
that are obviously non-cognitive. Dynamical systems, as van Gelder
characterizes them, also seem to be universally realizable in this
sense -- one can employ Putnam's tricks to show that every physical
system instantiates every dynamical system. But the difference is
that there is a known way out of this problem for digital computation,
while there is not for dynamical systems. Since computation is not
purely formal, but includes an implicit notion of discrete states and
causal transitions between them, one can use this to restrict the set
of physical systems that can be properly said to instantiate any given
computation, thus avoiding universal realizability (Chrisley 1994).
But how are we to so restrict the set of
physical systems which realize any given dynamical system, without
rendering the dynamical system non-quantitative in the process?
van Gelder's response to the ``Not As Cognitive'' objection (section
6.7) won't help him here. What he says is correct: just as the
digital computation hypothesis does not claim that all digital
computers are cognizers, but rather that cognizers are a special kind
of digital computer, so also, mutatis mutandis, for the Dynamical
Hypothesis (DH). The DH is not giving sufficient conditions for
cognition. But it does claim that the sufficient conditions can be
given in terms of dynamical systems, as he has construed them. And
the universal realizability points just made cast doubt on that.
Perhaps the universal realizability point can be countered for
dynamical systems, as it was for digital computational systems.
Nevertheless there is a difficulty that arises out of van Gelder's
admission that the DH is not providing sufficient conditions for
cognition: it puts all the weight on the other foot. It implies that
the theoretical value of the DH must be in its providing *necessary*
conditions for cognition. But van Gelder admits that the DH is *not*
giving necessary conditions for cognition, either. Since it takes no
stand on the nature of artificial cognition (section 4, paragraph 2),
the DH is not a constitutive claim about the essence of cognition in
general, but rather a contingent claim about natural cognizers. Aside
from relying on a natural/artificial distinction which removes us and
our artefacts from the natural world, rather than seeing both as
continuous with it, the DH has the drawback of leaving us without a
constitutive account of cognition. The most likely place to look for
such an account is not in the particularities of natural cognizers,
but in the commonalities between all of the systems worthy of the
title: natural cognizers, natural cognizers in other possible worlds,
and (as yet hypothetical) artificial cognizers. For example, what do (natural)
quantitative intentional effective systems and (artificial) digital
intentional effective systems have in common? Intentional
effectiveness. Perhaps, then, that is the true nature of cognition.
REFERENCES:
Brooks, R. (1992) Intelligence without representation. In:
Foundations of Artificial Intelligence, ed. D. Kirsh. MIT Press.
Chrisley, R. (1994) Why everything doesn't realize every computation.
Minds and Machines 4(4):403--420.
Putnam, H. (1988) Representation and Reality. MIT Press.
Searle, J. (1990) Is the brain a digital computer? Proceedings and
Addresses of the American Philosophical Association 64.
Smith, B. C. (1996) On the Origin of Objects. MIT Press.