Whitby, B.R. (2003). A.I.: A Beginner's Guide. Oxford: OneWorld Publications.
Whitby, B.R. (1996). Reflections on Artificial Intelligence: The Social, Legal, and Moral Dimensions. Oxford: Intellect Books.
Whitby, B.R. (1988). Artificial Intelligence: A Handbook of Professionalism. Chichester: Ellis Horwood.
Whitby, B.R. (1993). The Virtual Sky is not the Limit - The Ethical Implications of Virtual Reality. Intelligent Tutoring Media, Vol. 3, No. 2.
The allegedly novel technology of virtual reality (VR) introduces a number of difficult moral questions. In spite of the apparent novelty of the technology, at least some of these questions can be shown to be variations on more familiar moral problems. However, familiar or not, many of the moral problems raised by VR need urgent attention and discussion.
Whitby, B.R. (1991). Ethical AI. Artificial Intelligence Review, Vol. 5, No. 1.
Being ethical pays, both in business and academia. With business moving towards being more ethically aware, Artificial Intelligence (AI) can and should follow suit. There are many positive ethical aspects to AI.
Whitby, B.R. (1988). A code of professionalism for AI. AISB Quarterly, No. 64, Spring, pp. 9-10.
Suggests a Code of Conduct for AI, as set out in Artificial Intelligence: A Handbook of Professionalism (see books).
Whitby, B.R. (1987). Professionalism and AI. Artificial Intelligence Review, Vol. 2, No. 2, pp. 133-139.
The time has come for those working in AI to take the issue of professionalism seriously. Professional standards will be difficult to establish in AI. However, there will be pressure from various directions to produce a code or codes which will demonstrate that work is being done responsibly. Such codes will be largely worthless unless they are produced by people actually working at the 'sharp end' of AI.
Whitby, B.R., and Yazdani, M. (1987). Accidental nuclear war: the contribution of A.I. Artificial Intelligence Review, Vol. 1, No. 3, pp. 221-227.
The AI community is seriously considering what all the military sponsorship would do to the prospect of being able to carry out basic research without, at the same time, putting the whole of our planet's population at risk. We examine the options that face AI researchers. Many have accepted that military money is necessary for the survival of the research community and that the military intentions are a necessary evil. Others have decided to accept military money if it does not involve the development of weapons of mass destruction. One group goes even further and will not accept any form of military funding. We opt for the intermediate view. There are some aspects of work in AI which can, perhaps, improve our understanding of the nature of accidents which occur as a result of interaction between humans and complex technological systems. Research in these areas, therefore, is likely to reduce the possibility of a computer-generated Armageddon. The military should, therefore, support basic research in AI.
Yazdani, M., and Whitby, B.R. (1987). Building birds out of beer cans. Robotica, Vol. 5, pp. 89-92.
John Searle's attack on various interpretations of Artificial Intelligence (AI) represents one of the most thorough challenges to the philosophical foundations of AI. In this paper we attempt to contribute to a growing body of arguments pointing out why Searle is mistaken in his attack. We propose an analogy between intelligent objects and flying objects, leading to a definition of AI similar to that of aerodynamics - one which attempts to produce general laws of intelligence in man and machines alike.
Whitby, B.R. (1996). Multiple Knowledge Representations: Maps and Aeronautical Navigation. In Peterson, D. (Ed.), Forms of Representation. Exeter: Intellect.
It is often observed that humans use a vast number of different types of external representations to help them in various tasks. Opinions vary, however, as to whether this diversity is functional or should be imitated in computational systems. This paper focuses on an area where there is a clear and defined need to represent information in an unambiguous and standardized manner. If we find a significant degree of 'ad hoc' techniques in this area, we may expect to find far more in areas where mistakes, ambiguities, and delays are less critical.
This does not constitute a complete and final argument that AI should abandon attempts to find a single method of representing knowledge and concentrate instead on ways of integrating ad hoc representations. It is, however, a very strong suggestion that this is the way forward.
Whitby, B.R. (1996). The Turing test: AI's biggest blind alley? In Millican and Clark (Eds.), Machines and Thought: The Legacy of Alan Turing, Vol. 1. Oxford: Clarendon.
Alan Turing's Imitation Game (Turing 1950) has provided a 'gold standard' for many branches of Cognitive Science for nearly fifty years. It is now, however, time to consign its influence to history. This paper argues that the 'imitation game' has been consistently misread as providing an operational definition of intelligence based on a comparison with human performance. Such an operational definition is unhelpful for AI considered as science because it deflects effort away from the scientific problem of defining intelligence in ways that do not depend on the human example. It is also unhelpful for AI considered as engineering since it leads to a preoccupation with human imitation as a methodology. A better interpretation of the imitation game is suggested.
Whitby, B.R. (1991). AI and the Law: Proceed With Caution. In M. Bennun (Ed.), Law, Computer Science and Artificial Intelligence, Volume II. New Jersey: Ablex Publishing Corporation.
Legal practice is an area which is and should remain characterized by a process of social negotiation. It is not, therefore, suitable for simple rule-following AI approaches. AI practitioners need to appreciate this and their own role in the processes of negotiation.
Whitby, B.R. (1990). AI and the law: learning to speak each others' language. In A. Narayanan (Ed.), Law, Computer Science and Artificial Intelligence, Volume I. New Jersey: Ablex Publishing Corporation.
AI will have a subtle and pervasive influence on legal practice, more through its influence as a group of new ways of looking at legal practice than as a technology. This will require changes of attitude in both legal professionals and AI technologists. Some useful first steps are suggested.
Whitby, B.R. (1986). The computer as a cultural artefact. In K.S. Gill (Ed.), A.I. for Society. Chichester: John Wiley and Sons.
Modern computing has its cultural roots in military technology and thinking. The effect of military attitudes about information and communication can be clearly seen in most modern computer applications, and certainly in the terminology of computing.
Whitby, B.R. (1984). A.I.: some immediate dangers. In M. Yazdani and A. Narayanan (Eds.), Artificial Intelligence: Human Effects. Chichester: Ellis Horwood.
AI is developed by an elite. This elite is characterized by long experience of computing. There is a danger, therefore, that the 'computer metaphor' will be misapplied by such people, since their thinking will owe much to imitation of computational methods.
Whitby, B.R., and Oliver, K. 'How to Avoid a Robot Takeover: Political and Ethical Choices in the Design and Introduction of Intelligent Artifacts'. AISB-00 Symposium on Artificial Intelligence, Ethics and (Quasi-)Human Rights.
Predictions of intelligent artifacts achieving tyrannical domination over human beings may appear absurd. We claim, however, that they should not be hastily dismissed as incoherent or misguided. What is needed is more reasoned argument about whether such scenarios are possible. We conclude that they are possible, but neither inevitable nor probable.
Whitby, B.R., 'Problems in the Computer Representation of Moral Reasoning'. Proceedings of the 2nd National Conference on Law, Computers and Artificial Intelligence, Exeter University, November 1990.
Whitby, B.R., 'The Turing Test: AI's Biggest Blind Alley?'. Proceedings, Turing 1990, Milton Keynes: Oxford University Press.
Whitby, B.R., 'AI and the Law: Learning to Speak Each Others' Language'. Proceedings of the 1st National Conference on AI and the Law, Exeter University, 17-18 November 1988.
Whitby, B.R., 'Robot Morality'. Proceedings of the Conference on Philosophical Aspects of Information Technology, Lille, France, 28 May - 1 June 1985.
Whitby, B.R., and Yazdani, M. 'Accidental Nuclear War: The Contribution of A.I.'. Proceedings of the Conference on Computers and Accidental Nuclear War, Manchester Town Hall, November 1985.
Whitby, B. (2001). Flying Lessons: What can aviation investigations tell other disciplines about the human-computer interface? CSRP 533, School of Cognitive and Computing Sciences, University of Sussex.
Whitby, B.R. 'Ethics for Virtual Reality'. CSRP 372, School of Cognitive and Computing Sciences, University of Sussex, 1995.
Whitby, B.R., 'Lost on Mars: An example of the design and development of educational software'. SEAKE Working Paper W107, SEAKE Centre, Brighton Polytechnic, Brighton, 1985.
Whitby, B.R., and Gill, C., 'Giving Girls Improved Access to Computers'. SEAKE Working Paper W207, SEAKE Centre, Brighton Polytechnic, Brighton, 1985.