My research interests include language-based interfaces to ubiquitous computing environments, grammatical inference and language modelling, and the formal foundations of grammar formalisms in computational linguistics. Some selected papers are also listed below.
Language-Based Interfaces in Ubiquitous Computing Environments
In the NatHab project, in joint work with David Weir and Ian Wakeman, I am investigating language-based interfaces to ubiquitous computing environments. The aim is to allow non-technical users in the home and office to personalise their environment using natural language "policies" for service configuration (see Weeds et al., 2004).
Grammatical Inference/Language Modelling
I am interested in the application of machine
learning techniques to problems in language learning/grammatical
inference. Practical techniques for grammatical inference have important applications in current work on NLP and speech recognition, as well as wider applications in syntactic pattern recognition.
In joint work with
Rudi Lutz I have explored the use of genetic algorithms to infer
stochastic context-free grammars from language data. The problem
addressed in this work is that of inferring a suitable grammar for a
target language given a stochastic sample of the sentences in the target
(i.e. no `negative information' or `teacher'). We adopt a Bayesian
approach using the genetic algorithm to `evolve' the most probable
grammar given a corpus. To avoid over-fitting the data, the Minimum Description Length (MDL) principle is used to provide an informative prior over candidate grammars (see Keller & Lutz, 1996; Keller & Lutz, 1997; Keller & Lutz, 2004). We have also explored the use of priors
in conjunction with the EM algorithm in the context of learning Hidden Markov
Models (Keller & Lutz, 2002).
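As a rough sketch of the underlying idea (the precise formulation is given in the papers cited above), the search can be viewed as maximising a Bayesian posterior in which the MDL-based prior penalises grammar complexity. Writing DL(G) for the description length of a candidate grammar G in bits, and D for the corpus:

\[
\hat{G} \;=\; \arg\max_{G} P(G \mid D) \;=\; \arg\max_{G} P(D \mid G)\,P(G),
\qquad P(G) \;\propto\; 2^{-\mathrm{DL}(G)},
\]

so that a natural fitness function for the genetic algorithm is

\[
\mathrm{fitness}(G) \;=\; \log P(D \mid G) \;-\; \mathrm{DL}(G)\,\log 2,
\]

trading off fit to the corpus against the complexity of the grammar.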
I was the organiser of a workshop on
Automated Acquisition of Syntax and
Parsing
at the 10th European Summer School in Logic, Language and Information
(ESSLLI-98).
A good deal of my early research was concerned with the
formal foundations and computational properties of formalisms in computational
linguistics (see, e.g., Keller, 1992a).
My D.Phil work dealt with unification-based grammar formalisms
and the properties of logical languages (so-called feature logics)
for expressing constraints on linguistic objects. I investigated a
generalisation of one of these logics, Rounds-Kasper logic, which
incorporates the device of functional uncertainty due to
Kaplan and Zaenen (see
Keller, 1992b; Keller, 1993).
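To give the flavour of functional uncertainty (the notation below follows the standard LFG-style presentation due to Kaplan and Zaenen, rather than the extended logic studied in my papers), a constraint may contain a regular expression over attribute paths. For example,

\[
(f\ \mathrm{TOPIC}) \;=\; (f\ \mathrm{COMP}^{*}\ \mathrm{OBJ})
\]

requires the value of the TOPIC attribute in the feature structure f to be identified with an OBJ reached through any finite sequence of COMP attributes, thereby describing unbounded dependencies such as topicalisation.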
Unification-based grammar formalisms tend to be very powerful
mathematically (as powerful as general-purpose programming languages).
Joint work with David Weir
led to the development of a restricted unification-based formalism that is
more powerful than Linear Indexed Grammar (LIG), but which can also be
processed in polynomial time using techniques that are similar to those
developed for LIG in Vijay-Shanker and Weir (1993b). The formalism, which we refer to as partially linear PATR, manipulates feature structures rather than stacks (see Keller and Weir, 1995).
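For readers unfamiliar with PATR-style formalisms, the following schematic rule (in ordinary, unrestricted PATR-style notation of the kind found in textbook presentations, not in the restricted partially linear form) shows the sense in which such grammars manipulate feature structures: a context-free skeleton is annotated with path equations that constrain and share pieces of feature structure across the rule.

    S -> NP VP
        <S head> = <VP head>
        <S head subject> = <NP head>

Roughly speaking, the partial linearity restriction constrains how feature values may be shared across a rule in this way, analogously to the restriction on stack copying in LIG, and it is this restriction that makes polynomial-time processing possible.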
I have also investigated the lexical knowledge representation
language
DATR, which was originally developed by Gerald
Gazdar and Roger Evans
at the University of Sussex, and is probably
the most widely used language for specifying natural language lexicons
in the NLP community. I provided the first declarative
semantics for the full DATR language in Keller (1995), and presented an alternative, operational semantics, which axiomatises the evaluation of DATR expressions, in Keller (1996).
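As a rough illustration of the kind of description DATR supports (a schematic fragment in the spirit of Evans and Gazdar's standard examples, not drawn from the cited papers), information stated at one node is inherited by default at other nodes and can be selectively overridden:

    Noun:
        <syn cat> == noun
        <mor plur> == "<mor root>" s.

    Dog:
        <> == Noun
        <mor root> == dog.

    Sheep:
        <> == Noun
        <mor plur> == "<mor root>".

Here Dog:<mor plur> evaluates to dog s by default inheritance from Noun, while Sheep overrides the plural definition so that Sheep:<mor plur> evaluates to sheep.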
Selected Publications