
Subsections

7.1 Training Systems
    7.1.1 Psychomotor Skills
    7.1.2 Perceptual Skills
    7.1.3 Communication Skills
7.2 Decision Support Systems
7.3 Summary

---------------------------------------------------------


7. The Role of Information Technology

We can divide Information Technology into two main types. First, ``Tools'': IT that supports doctors and dentists in the everyday conduct of their jobs, for example an MR scanner, a patient record system, or a decision support system. Second, ``Training Systems'': IT designed explicitly for training, for example a CD-ROM containing dermatology images and related text, or a computer-based teaching system for a surgical technique.

Tools play a role in training not only because doctors need to be trained in their use but also because some tools can be exploited indirectly for training purposes, irrespective of whether they incorporate a special training mode. For example, a decision support system can be used to help trainees reflect on their own decisions by providing an alternative analysis, not necessarily better or worse, to that of the trainee. Some support for this notion is provided by Tape et al. (1992), who showed that providing computer-generated corrective feedback on students' risk predictions of cardiovascular death, based on the presence or absence of five factors, improved both the students' base-rate calibration and their discrimination. Moreover, a database system can be used to store information from neurology department `morning reports' that would otherwise be hard to find later, and this information can then be reused for training purposes (Recht et al., 1995).
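To make the two feedback measures in the Tape et al. study concrete, the following is a minimal sketch, in Python, of how base-rate calibration and discrimination might be computed from a set of risk predictions. The definitions, function names and data here are illustrative assumptions, not the actual measures or materials used by Tape et al. (1992).

    # Illustrative sketch: simple versions of the two feedback measures named
    # above, computed for a hypothetical set of student risk predictions.
    # Definitions and data are assumptions, not those of Tape et al. (1992).

    def calibration_error(predictions, outcomes):
        """Base-rate calibration: mean predicted risk minus the observed event
        rate. A value near zero means overall risk estimates match reality."""
        return sum(predictions) / len(predictions) - sum(outcomes) / len(outcomes)

    def discrimination(predictions, outcomes):
        """Mean prediction for cases that died minus mean prediction for cases
        that survived. Larger values mean the two groups are better separated."""
        died = [p for p, o in zip(predictions, outcomes) if o == 1]
        survived = [p for p, o in zip(predictions, outcomes) if o == 0]
        return sum(died) / len(died) - sum(survived) / len(survived)

    # A student's predicted probabilities of cardiovascular death for six
    # cases, and the actual outcomes (1 = death, 0 = survival).
    preds = [0.8, 0.6, 0.3, 0.7, 0.2, 0.1]
    actual = [1, 1, 0, 0, 0, 0]

    print(f"calibration error: {calibration_error(preds, actual):+.2f}")  # +0.12
    print(f"discrimination:    {discrimination(preds, actual):+.2f}")     # +0.38

The point of the sketch is only that both measures are cheap to compute automatically, which is what makes this kind of corrective feedback easy for a tool to provide as a by-product of its normal use.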

As far as tools go, the main educational questions concern

For example, Kushniruk et al. (1996) describe a cognitive approach to evaluating a computerized patient record system and report that ``two major classes of problems can be defined: problems associated with the user interface, and conceptual problems that arise when physicians try to map and represent findings with terms used by the system.'' (page 414). Similar issues concerning the effects of mismatch between computer-based representations (of clinical guidelines) and human expert representations are reported by Patel et al. (1998).

As far as Training Systems go, the main questions concern

IT in Medicine, as in other fields, throws up two distinct kinds of literature. There are papers that point to the latest technological advances -- Virtual Reality, Multimedia, the Internet, Tele-medicine, for example -- and anticipate many future benefits. Then there are more hard-nosed papers that evaluate training opportunities and outcomes using current IT in realistic settings.

In some ways this leads to a confusing picture, with the reality often at odds with the promise. The situation is further confused by the varying outcomes of evaluations of computer-based training methods. However, it is clear that IT and medicine are inextricably intertwined and that the use of IT in training will increase.

In the first ``anticipatory'' category Mooney and Blight (1997) provide a short, upbeat but useful introduction to some of the newer Information Technologies, and Hoffman and Vu (1997) review some 40 Virtual Reality systems under development for anatomy and clinical skills teaching. Also in this category are the predictions for the ``Class of 2003'' by Faughnan and Elson (1998). They foresee the increasing delivery of ``processed data'', e.g. about treatment options, at the point of care delivery, and the increasing dependence of practitioners on validated and trusted databases rather than the primary (journal) literature. So they see a coming together of evidence-based medicine and clinical information technology to provide the practitioner with whatever information is needed at the time and place where it is needed. Of course this raises the issues of how the practitioner is to be trained to access and to judge the trustworthiness of such information, and how systems are to be designed to make information available in an effective manner that meets doctors' needs (Elson et al., 1997). In her introduction to a special issue of the journal Artificial Intelligence in Medicine, Patel (1998) argues that recent developments in collaborative technologies will cause people to view ``cognition as a distributed process. In this perspective, intelligence, can be seen as distributed in designed artifacts such as computer-user interfaces; in representations, such as diagrams; and through communication in social contexts.'' (page 94). An example of this view is cited below (Shortliffe et al., 1998).

In the same ``anticipatory'' category Ota et al. (1995) suggest that a Virtual Reality (VR) based simulator linked to a fuzzy-logic-based evaluation system has great potential for training in laparoscopic surgery. Likewise Dumay and Jense (1995) anticipate the benefits of VR in endoscopic surgery. Their paper provides a useful introduction to VR and its potential use in this kind of application.

Drawing on artificial intelligence techniques from training areas other than medicine, Lesgold and Katz (1992) describe how collaboration and negotiation among differing viewpoints could be incorporated into medical training systems.

In the second ``realistic'' category, the survey conducted by Kinn (1996) paints a more depressing picture, in which the speed of the introduction of IT into the NHS out-paces the capacity of post-graduate medical education to provide sufficient training, particularly for practitioners. By contrast, Chopra et al. (1994) provide evidence that high-fidelity computer-controlled simulators can provide effective training, especially in those areas where there is a need to upgrade ``competence in handling those uncommon but potentially fatal problems that require rapid and correct responses, without exposing the patient to risk''. The domain they evaluated concerned problems during anaesthesia, but the argument can be made more generally. The positive evaluation of the learning took place four months after the anaesthetists had practised dealing with malignant hyperthermia on the simulator.

While more concerned with research than with training, Shortliffe et al. (1998) provide a detailed account of the way that five leading North American medical institutions are working towards a shared software infrastructure (InterMed) that will allow effective collaboration in many different areas, such as the development of guidelines, medical records, integrated training, decision support systems and so on. The paper analyses the way that different technologies (e.g. email, conference calls) contribute to various kinds of collaborative decision-making, and to some extent shows the way things are likely to develop in the UK. An indication of the speed of change of technology is provided by Keay et al.'s (1989) pre-Internet description of a computer-based information system for Postgraduate Medical Education in the west of Scotland, enabling online library services, computer-assisted learning, word processing and statistics.

Although focused on medical students, Friedman (1996) lists ten reasons why the World Wide Web may not prove as useful in medical training as some would hope. These include the poor integration of CAI material into the curriculum, the lack of standards for judging the quality of CAI programs and the poor design of some, the failure to update the content of programs, insufficient access to computers, the mismatch between the skills or knowledge taught and those needed and/or assessed, and poor response times on the Internet. Addressing at least some of these issues, Miller and Wolf (1996) provide a helpful account of how to make CAI work in practice, based on experience at the University of Michigan Medical Center. Their paper also details a number of catalogues of medical training software.

Finally, Lillehaug and Lajoie (1998) provide a helpful, detailed and critical review of the application of artificial intelligence in medical education. They discuss the disparity between the promise and the outcomes of computer-based medical education in general, and of artificial-intelligence-based medical training systems in particular, and describe a number of representative training systems.

7.1 Training Systems

The literature on training systems in medicine is large and scattered through a wide variety of sources including journals and books on computers and education in general, via journals on medical education through to journals for particular medical specialities. The following brief section simply provides a general indication of what is on offer.

7.1.1 Psychomotor Skills

Various systems have been developed both for assessing (see, for example, Derossis et al., 1998b; Jones et al., 1997) and for developing psychomotor skills. These systems are designed largely for pre-registration rather than post-registration training, but the lessons from this work generally apply more widely.

For example, Hong et al. (1996) evaluated a system for clinical clerks that provided a preoperative tutorial supplemented by a computer-based package for examining and interacting with anatomical images appropriate to the operation about to be performed. Their evaluation showed that the tutorial plus the computer-based system was beneficial in terms of the clerks' subsequent operating performance, but they did not attempt to assess the relative benefits of the tutorial, delivered by an expert, and the computer-based anatomy package. Matthew et al. (1998) describe a computer-based system to teach minor oral surgery. The system offers multiple choice questions, some of whose answers involve locating the correct point in an associated image or moving a cursor across an image to indicate, for example, where to draw the needle when suturing. The evaluation of the system was in terms of what the users thought was good and bad rather than via an analysis of pre/post comparisons of increases in skill. Rogers et al. (1998) compared the efficacy of a computer-based system with that of a seminar in teaching 82 medical students how to tie a two-handed square knot. Although both groups learned to tie the knot, and could do so in similar times, the computer-based group produced poorer quality knots. A crucial difference between the two groups was that those taking part in the seminar received feedback from experts during the seminar, whereas the computer-based group received no feedback. This highlights the consistently hard problem for computer-based training of having the system monitor, evaluate and react sensibly to the learner's attempts to master a skill or solve a problem.

One partial way out of the feedback problem is to integrate the computer-based teaching system with a ``real'' psychomotor skills laboratory. That way the students can compare how they perform suturing using, for example, reusable prosthetic skin, with multimedia video clips showing how it should be done (O'Connor et al., 1998).

An important issue, especially in surgery, concerns the fidelity of the system and the degree to which skills learned with the computer-based system transfer to the real situation. Despite Chopra et al.'s (1994) positive evaluation in anaesthesia, cited earlier, Chapman et al. (1996) found that for open thoracotomy assessment, performance of the task on a pig was a better discriminator of skill level than performance of the task on a computer simulation. They also found that practice with the computer simulation did not improve later assessment performance on a pig, though it did improve later assessment performance on the computer simulation.

7.1.2 Perceptual Skills

A computer-based system that makes no attempt at evaluative interactivity is a CD-ROM addition to a dermatology course (Hartmann and Cruz, 1998). The CD-ROM supplemented an existing course using live-patient sessions and various sources of visual material, adding some 100 pages of text and diagrams. In a comparison with the course prior to the introduction of the CD-ROM, the medical students rated the CD-ROM highly but did no better in the examinations as a result of its introduction.

While there are many computer-based training aids for radiology, most are essentially electronic books or collections of images together with some kind of indexing mechanism, normally based primarily on disease. There have been relatively few systems that attempt to model either the domain or the evolution of the student's knowledge and skill in a detailed way, i.e. to provide the evaluative interactivity lacking in more straightforward computer-based training packages. Of these, Azevedo and Lajoie (1998) describe an analysis of the problem-solving operators used in mammography as applied by radiologists of various levels of skill. They also analyse the nature of teaching as it occurs in radiology case conferences, and particularly the way that experts articulate their diagnostic reasoning. Both these analyses are used as part of the design process for RadTutor (Azevedo et al., 1997). A similarly careful analysis in the domain of chest X-rays has been carried out by Rogers (1995a) as part of the design process of the VIA-RAD tutor.

Macura et al. (1993) have taken a case-based approach in a tutor for CT and MR brain images. Their system offers a case-retrieval and decision-support mechanism based on descriptors; it also employs an atlas and contains tutorial material and images of normal brains as well as those displaying lesions. It can act as a decision support system by offering a range of possible diagnoses, and access to the images of related cases, given the textual information that has been entered. Sharples et al. (1995, 1997) have developed an image description training system that aims to help radiology trainees learn how to describe MR brain images in a systematic way by means of a structured image description language (IDL). This language allows clinically meaningful features of MR brain images to be recorded, such as the location, shape, margin and interior structure of lesions. The system is deliberately aimed to support and train the radiologist's inferences from what can be observed in the images.
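As a concrete illustration of these ideas, the sketch below (in Python) encodes the kinds of lesion features mentioned above -- location, shape, margin and interior structure -- as a structured description, and retrieves the stored case sharing the most descriptors, in the spirit of Macura et al.'s descriptor-based case retrieval. The field names, vocabulary and matching rule are hypothetical assumptions, not the actual IDL of Sharples et al. or the descriptor scheme of Macura et al.

    # Hypothetical sketch of a structured lesion description and of
    # descriptor-based case retrieval. Field names and vocabulary are
    # illustrative; they are not the actual IDL or Macura et al. descriptors.
    from dataclasses import dataclass, asdict

    @dataclass(frozen=True)
    class LesionDescription:
        location: str   # e.g. "left temporal lobe"
        shape: str      # e.g. "round", "irregular"
        margin: str     # e.g. "well-defined", "ill-defined"
        interior: str   # e.g. "homogeneous", "heterogeneous"

    def shared_descriptors(a: LesionDescription, b: LesionDescription) -> int:
        """Count how many descriptor fields two descriptions share."""
        da, db = asdict(a), asdict(b)
        return sum(1 for field in da if da[field] == db[field])

    # A small case base: previously described images with confirmed diagnoses.
    case_base = [
        (LesionDescription("left temporal lobe", "round", "well-defined",
                           "homogeneous"), "meningioma"),
        (LesionDescription("right frontal lobe", "irregular", "ill-defined",
                           "heterogeneous"), "glioblastoma"),
    ]

    # A trainee describes a new image; the closest stored case is retrieved,
    # together with its diagnosis, as decision support.
    query = LesionDescription("left parietal lobe", "round", "well-defined",
                              "homogeneous")
    case, diagnosis = max(case_base,
                          key=lambda entry: shared_descriptors(query, entry[0]))
    print(f"closest case shares {shared_descriptors(query, case)}/4 "
          f"descriptors: {diagnosis}")

Forcing descriptions into a fixed vocabulary like this is what makes them machine-comparable; it is also exactly the kind of representational constraint that, as Kushniruk et al. (1996) and Patel et al. (1998) report, can clash with a clinician's own way of representing findings.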

An innovative problem-based approach is adopted by Kevelighan et al. (1998), who get their students to develop their own multimedia packages (in areas of obstetrics and gynaecology). This provides the students with useful IT skills, including Internet skills, as well as the chance to reflect on the chosen topic by building a package for other students to use. The authors note the problem of preventing over-enthusiastic students from spending too much time on the task, as well as the ``significant time and effort to establish the programme''.

7.1.3 Communication Skills

Hulsman et al. (1997) describe a computer-based training system, INTERACT-CANCER, to teach communication skills, particularly those needed in dealing with cancer patients. Their paper offers a good pointer to the literature on communication skills and to related computer-based training systems. Their system consists of four modules. The first is a general introduction to the topic of communication; the second explains how to break bad news; the third is about providing information on treatment and future expectations; and the fourth is about the emotional reactions of the patient. Each module offers video-clip examples of both good and bad communication practice and asks questions of the user to help them reflect on what they are seeing and hearing. What the system cannot do is observe and comment on the user's own communicative competence (see the discussion of the knot-tying tutor above: Rogers et al., 1998). The evaluation of the system concentrated on its perceived value and how it was used, but did not attempt to measure changes in communicative competence as a result of using it.

7.2 Decision Support Systems

The use of Health Decision Support Systems (see, e.g., Tan and Sheps (1998) for a comprehensive overview) raises various issues in relation to competence and judgement. Under their earlier title of ``expert systems'', such systems offered much promise but did not have a dramatic effect on medical practice. Now there is a more realistic sense of their strengths and weaknesses, and some are in routine use. A comprehensive meta-analysis of the effects of computer-based clinical decision support systems is provided by Johnston et al. (1994). Their paper shows how difficult it is to evaluate the effects of such systems in an unbiased manner. Having started out examining over 700 papers, they eventually analysed the results on patient outcomes and clinician performance of systems described in 28 studies. Within these studies they found only three showing positive effects on patient outcomes and eight showing none; however, 15 showed effects on clinician performance and nine did not. They call for more research, particularly research of the same standard as the blinded, randomized controlled trials used to support other health claims (though they acknowledge the methodological difficulties here).

A useful online guide to decision support systems currently in routine use covers Acute Care Systems, Decision Support Systems, Educational Systems, Laboratory Systems, Quality Assurance and Administration Systems, and Medical Imaging.

The guide's index (as of January 2000) for Decision Support Systems is presented below in Table 7.1:


Table 7.1: Decision support systems in use.

Name                      Status          Type                                Entry Date
Dxplain                   routine use     clinical decision support           Nov 7 1995
Epileptologists' Assist.  decommissioned  nurse progress note assistant       Sept 23 1997
Jeremiah                  routine use     orthodontic treatment planner       Nov 19 1997
HELP                      routine use     knowledge-based HIS                 Jan 2 1995
Iliad                     routine use     clinical decision support           Oct 23 1995
MDDB                      routine use     diagnosis of dysmorphic syndromes   Mar 29 1996
Orthoplanner              routine use     orthodontic treatment planner       Nov 19 1997
RaPiD                     routine use     designs removable partial dentures  Feb 9 1996


The issue is no longer ``can such systems be built and installed in medical care settings?'' so much as ``how useful are such systems in practice?'' and ``how does their use affect human decision-making processes?''

A representative paper addressing the issue of the use of such systems is provided by Elstein et al. (1996). They examined how using Iliad (see above) on four clusters of nine difficult cases affected the decisions of 16 doctors of various levels of experience; each doctor dealt with nine cases. Their main finding was that Iliad produced a list of diagnostic possibilities containing the correct diagnosis in 38% of the cases. This success rate was worse than that of the most experienced doctors (43%) but better than that of the residents (33%) and the fourth-year medical students (15%).

Each physician worked on each case both with and without Iliad. Working with Iliad improved diagnostic accuracy in 15% of the cases dealt with by the experienced physicians and by the medical students, but produced no improvement in the cases dealt with by the residents. On the negative side, in about 12% of the cases overall there was a decline in accuracy, with the largest decline among the residents (statistical significance not reported).

Methodologies for assessing how decision making is affected by the use of systems such as Iliad are explored by Kushniruk and Patel (1998). They report a variety of effects, such as problems in navigating through such systems and, more importantly, shifts in the manner of conducting patient interviews away from the doctor's natural strategy towards conforming exactly with the menu of questions on the screen of the system -- a ``screen-driven'' strategy. They also found that the differences in reasoning strategies between novices and experts (see Section 3.1) meant that the design of such systems needed to take the nature of the user's expertise into account, in order to reduce the chances of mismatch between the line of reasoning of the system and that of the doctor. A similar IT-based mismatch issue is described by Cytryn and Patel (1998), who outline the problems that patients have when using a ``telephone-based telecommunications system'' to describe their symptoms to a remotely located doctor. Analysis of the interactions showed that the organization of the information required by the system suited the doctor's way of thinking but did not suit the patient, with consequent errors of communication.

Tan and Sheps (1998) identify the major changes in direction that decision support systems need to take as follows:

The latter point links back to Kushniruk et al. (1996) above: physicians need to be able to understand how the decision support system has framed the problem so that they can judge the quality of the advice being proffered.

7.3 Summary

A great many systems have been developed for various aspects of training, but most come up against the consistently hard problem of having the system monitor, evaluate and react sensibly to the learner's attempts to master a skill or solve a problem. Decision support systems have had a mixed reception over the years, but seem to be growing in acceptance as just another tool in the doctor's armoury. Researchers foresee an increasing delivery of ``processed data'', e.g. about treatment options, at the point of care delivery, and an increasing dependence of practitioners on validated and trusted databases rather than the primary (journal) literature. They anticipate a coming together of evidence-based medicine and clinical information technology to provide the practitioner with whatever information is needed at the time and place where it is needed.
