3. Theories of Expertise

Until recently, research into professional education suffered from two major weaknesses: the knowledge-base of professionals was conceived largely in terms of formal, published knowledge; and research into learning was also focused primarily on formal educational contexts.

Practitioners were usually aware of these weaknesses and expressed their concerns by using the currency of experience (months in a job, or number of cases seen) to complement the system of specialist examinations. The outcomes of learning were expressed in terms of competence (a generic term signifying capability to take independent action with only occasional referral to others) or judgement (a term conveying a mysterious ability to make wise and effective decisions in situations of considerable complexity and uncertainty). There was little attempt to ascertain how competence and judgement are acquired or how the learning of this practical knowledge might be facilitated.

So while educational practice continued to give central place to the doctor's practical experience, the relationship between this experience and the growth of competence and judgement remained mysterious.

Since the early 1970s, researchers have begun to explore this hitherto uncharted territory from a number of perspectives. For a review of the evolution of the notion of clinical competence, see Maatsch (1990). These approaches to understanding the nature of medical competence and judgement can be grouped into three main types and one minor type.

Each general approach has its strengths and weaknesses and each illuminates different facets of medical expertise.

Issues that tend to get downplayed in all the above approaches include interpersonal and communicative skills, manual dexterity and hand-eye coordination, and overall issues of patient management and care.


3.1 What Experts Know

3.1.1 Decision Making Skills

Much of the psychological literature on medical expertise has concentrated on the important issue of decision-making. Many of the studies are conducted in the laboratory and involve presenting descriptions of cases to doctors of varying degrees of expertise in order to observe what aspects of the case are paid attention to, what inferences are drawn at various stages, what hypotheses are generated and rejected, what overall conclusions are derived and what aspects of the case are later remembered. These kinds of study are used to delineate the evolution from novice to expert in terms of how various kinds of medical knowledge are organized in memory, how this knowledge is accessed and how it influences decision-making. Most studies are based in areas of internal medicine. Studies of radiological expertise additionally examine how radiologists view images, and how their perceptual scanning changes with increasing expertise. Relatively few studies examine the knowledge underpinning surgical expertise.

This literature makes it clear that there is an evolution of both knowledge structure and diagnostic skill from novices through intermediates to experts. This is an evolution not only in how much is known but, more importantly, in the organization and structure of what is known and in how medical problems are represented (Chang et al., 1998). The categories ``novice'', ``advanced-novice'', ``semi-expert'', ``sub-expert'', ``expert'' and even ``super-expert'' (Raufaste et al., 1998) are not well defined and the reported discontinuities of knowledge organization between these categories can make it look as if there are ``stage-like phenomena'' at work. Indeed, there is disagreement among researchers as to whether there are distinct stages or whether these are (to some extent) an artifact of the methodology (see e.g., Patel and Groen, 1991). Whatever the truth of this issue, most researchers agree that the later `stages' of the development of expertise are characterised not so much by further increases in knowledge of pathophysiology but by changes in the organisation of that knowledge to make it more readily and rapidly available. There is also agreement on the importance of decision-making with real cases within working medical settings as both driving that change and shaping its nature. One of the side-effects of this is that an expert's knowledge is highly personal and depends very strongly on the particular cases which that expert has encountered (Grant and Marsden, 1988).

Novice medical decision-makers can be characterised as working largely from scientific first principles as opposed to clinical principles. They also reason largely `backward' from hypotheses (though see Arocha et al., 1993), as opposed to `forward' from the data. They are able to construct only a limited set of hypotheses, are not able to evaluate competing hypotheses well and are not able to deal properly with apparently inconsistent data. Although working within a different paradigm (phenomenography rather than cognitive science), Ramsden et al. (1989) capture the essence of the novice approach:

``The distinction reveals itself in this material in the form of a contrast between a focus on specific symptoms and signs, or short links between causes and effects, and the use of structuring principles to systematize the data and relate them to previous knowledge. In relation to the very complex material constituting a diagnostic task, an atomistic (ordering) approach does not mean that analysis, interpretation and organization of the material are completely absent. These learners clearly display the rational use of these skills; many apparently have a knowledge base that is large enough to enable them to deploy them in a way that a more experienced learner would. Lack of procedural skill and preclinical knowledge is not an explanation for their use of atomistic approaches. What appears to be missing in the attempts of the students ... is competence in representing the problem appropriately, so that its inherent structure is maintained.'' (Ramsden et al., 1989, pages 113-114)

By contrast, expert medical decision-makers work with highly structured knowledge (Bordage, 1991) that provides various kinds of shortcut to the small set of hypotheses that need to be considered in any situation. While the fact that experts' knowledge is highly structured is relatively uncontentious, the exact nature of these structures is a matter of debate (e.g. whether they are based more on generalisations or more on accumulated instances). For a detailed, critical review of Prototype Frameworks, Instance-based Frameworks and Semantic-network, Schema and Script models, see Custers et al. (1996). For an example of the educational repercussions of the distinction between instance-based theories and abstraction-based theories, see Papa et al. (1996). The distinction here is between the expert concept of a disease being internally represented largely in terms of particularly telling example cases, as compared to the disease concept being represented as an abstract generalisation that glosses details of individual cases. Whichever theory is correct, Papa et al. point out that both theories support the finding that more typical cases are more likely to be correctly classified. They suggest that teaching and assessment could be refined to take specific account of the typicality of the cases that they teach/test.

Experts are data driven and don't appear to work directly from scientific first principles so much as from an ``illness script'' (Schmidt et al., 1990) that encapsulates various levels of knowledge (including, at base, the scientific) in a schema (conceptual structure) associated with a particular pathology. When presented with a new case experts rapidly home in on a number of ``critical cues'' (see e.g., Coughlin and Patel, 1987) and key features (for assessment methods based on this, see Page et al., 1995a) that guide them to consider a small set of possible hypotheses (Kushniruk et al., 1998). Discriminating between the hypotheses in this small set is partly dependent on the ``relative distinctiveness of [the] competing classes'' (Papa et al., 1990).

In many ways this rapid homing in process is largely unconscious, though the end result is reflected on and regulated by conscious processes (Boreham, 1994). Experts are also strongly guided by ``enabling conditions'', i.e. crucial factors in the patient data or clinical history. Experts also have an excellent memory for the relevant aspects (Patel et al., 1986) of exceptional individual cases that they have seen and use these in dealing with new cases, though such memories can also be a source of distorted learning (Featherstone, 1984).

Intermediates largely fall between novices and experts, but sometimes perform worse than novices on some laboratory-based tasks (Lesgold et al., 1988), and sometimes perform better than experts on other laboratory-based tasks (because they have paid attention to different aspects of the case description than experts have). However, this ``intermediate'' effect is not always found (van de Wiel et al., 1998).

Development of Expertise

The literature in this area is large, but a useful accessible starting point is the paper by Schmidt and Boshuizen (1993). The authors set out a model for the development of expertise that marries the influence of formal education in basic scientific knowledge with that of practical experience of dealing with cases. They summarise the evidence for a staged account of the development of expertise as opposed to an incremental one, i.e. that what experts know is organized differently from what novices know and is not just a matter of knowing more than novices. In particular, they argue that a novice's knowledge is re-organized a number of times on the way to expertise. The paper describes the different kinds of knowledge structure that evolve with time and outlines the evidence for the existence of ``illness scripts''. Illness scripts are stereotypical accounts of the enabling conditions, predisposing factors, special conditions, causation and consequences (e.g. complaints, signs and symptoms) for a particular disease.
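The components of an illness script listed above can be made concrete with a small sketch. The following Python fragment is purely illustrative: the field names, the crude activation heuristic and the influenza example are our own inventions for exposition, not part of Schmidt and Boshuizen's model.

```python
from dataclasses import dataclass, field

@dataclass
class IllnessScript:
    """Illustrative sketch of an illness script for one disease,
    with components loosely following Schmidt and Boshuizen (1993)."""
    disease: str
    enabling_conditions: list = field(default_factory=list)  # contextual risk factors
    predisposing_factors: list = field(default_factory=list)
    causation: str = ""                                      # the underlying process
    consequences: list = field(default_factory=list)         # complaints, signs, symptoms

    def activation(self, findings):
        """Crude activation score: the fraction of the script's expected
        consequences that appear among the observed findings."""
        if not self.consequences:
            return 0.0
        hits = sum(1 for c in self.consequences if c in findings)
        return hits / len(self.consequences)

# Hypothetical example; the clinical details are invented for illustration.
flu = IllnessScript(
    disease="influenza",
    enabling_conditions=["winter season", "contact with infected person"],
    causation="viral infection of the respiratory tract",
    consequences=["fever", "myalgia", "cough", "fatigue"],
)
print(flu.activation({"fever", "cough", "headache"}))  # 0.5
```

On this toy view, a presenting case would activate several scripts in parallel and the small set of highly activated scripts would correspond to the hypotheses the expert actually considers.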

The authors argue for the relatively greater importance of instances of past cases encountered compared to basic scientific knowledge in the expert diagnostic reasoning process, i.e. that it's the experience of dealing with actual cases rather than scientific ``bookwork'' that makes the most difference. While the details of the exact forms of the knowledge of novices, intermediates and experts are beyond the scope of this report, the main issue is that knowledge built up in the later stages through exposure to actual cases has the strongest effect on how new cases are dealt with. Only where this is insufficient do experts fall back on the more basic scientific principles that they were earlier exposed to.

They summarise the model as follows:

``First, it assumes that the development of expertise can be described as the progression through a series of transitory phases. Second, knowledge acquired during different phases of expertise development has a distinctly different organization, earlier forms tending to be organized in causal networks, more recent forms being structured as scripts. Third, it is assumed that knowledge acquired in different phases form layers in memory through a sedimentation process. Fourth, these knowledge sediments, although usually not applied any more in subsequent phases in the development of expertise, remain available for use when more recently acquired structures fail in producing an adequate representation of a clinical problem. Fifth, episodic traces of clinical problems previously analyzed seem to be extensively used in the representation and solution of new cases.''
(Schmidt and Boshuizen, 1993, pages 217-218)

The above studies can, in principle, be criticised on two grounds. First, their conception of the nature of medical expertise is too narrow and focuses on decision-making to the exclusion of other aspects of patient management and care. Second, even within the area of decision-making the focus is on idealized, laboratory-based decision-making from descriptions of cases rather than real decision-making associated with actual cases as conducted in working medical settings. For example, decision-making is affected by the context in which it occurs, and a skill learned in one context does not necessarily transfer readily to a different context (see e.g., Gruppen, 1997). However, it turns out (see below) that the analysis of expertise in terms of critical cues, small worlds and illness scripts is not that far removed from analyses based in the naturalistic decision-making paradigm. Indeed, Patel et al. (1995) provide a detailed account that tries to reconcile studies of medical knowledge and expertise (such as those cited in this section) with a more ``situated'' view that recognizes the important role of context, of artifacts (such as decision-support systems) and of the collaborative nature of much medical decision making.

The following three short sections focus on expertise with respect to three special areas of skill -- the perceptual (e.g. radiology), the psychomotor (e.g. surgery), and the communicative (all aspects of medicine).

3.1.2 Perceptual Skills

Most areas of medicine depend crucially on the doctor seeing what needs to be seen. However, while decision-making has been relatively well researched, medical perception has received rather less attention. An exception is radiology.

Norman et al. (1992) review 46 studies in the area of visual diagnosis. They examine inter alia the effects of prior and concurrent information on radiologic diagnosis, the interplay between perception and analysis, and the question ``are good diagnosticians born or made?''. For example, on the issue of prior and concurrent information, they describe studies that showed that providing a tentative diagnosis increased true-positive rates of detection with only a small increase in the false-positive rate (but see below for studies that post-date this review and which indicate the dangers of prior information). They offer the following conclusion about the educational implications:

``If one simply accepts that visual diagnosis does have two identifiable, although not entirely separable, components, it is evident that educational strategies directed at perception and cognition are very different. Perceptual skill is unlikely to be enhanced by any elaboration of rules or high-level processing of features lists or causal mechanisms, although this may well enhance cognitive processing. Rather, perception, with its rapid and gestaltist aspects, is only likely to improve from exposure to many carefully chosen prototypical examples and variations on the same theme.'' (Norman et al., 1992, page S82)

In general the literature on radiological decision-making largely concurs with the decision-making literature described earlier but it also takes special account of perceptual processes and the way they interact with problem-solving (Rogers, 1996). One issue is the way that some aspects of perceptual skill may be acquired largely unconsciously and in a way that does not require their verbal articulation (Lewicki et al., 1988).

With respect to radiology, novices are slower and less efficient in scanning images (Nodine et al., 1996), less able to identify the 3D position of abnormalities, and less able to identify the physical extent of the abnormality.

Expert radiologists are able to identify much of the abnormality in an image very quickly (an initial gestalt view) and this is followed by a more deliberative perceptual analysis. They are better at identifying the 3D position of abnormalities in the image (i.e. responding to ``localisation cues'') and also better at identifying the physical extent of the abnormality (Lesgold et al., 1988). Experts have a better appreciation of the range of normality and a propensity to pay attention to, and to recall, abnormal cases better than normal ones (Myles-Worsley et al., 1988). Raufaste et al. (1998) distinguish ``super-expert'' from ``expert'' radiologists. In particular they found that super-experts were more alert to less salient factors in the images and more likely to construct an understanding that took all the factors into account. Raufaste et al. suggest that this is partly due to their super-experts being exposed to more difficult cases and also because, as researchers, they would be used to devoting ``conscious effort making their results explicit and publishing in scientific journals'' (page 539).

All groups are sensitive to the effect of other information about the patient on what they see and the way in which they integrate visual and written information (Norman et al., 1996a,b).

Consulting information about the patient prior to viewing the images affects not only what they see but also what they diagnose and therefore recommend (Babcook et al., 1993). A similar effect is also reported for ECG interpretation (Hatala et al., 1996).

Development of Radiological Expertise

Lesgold et al. (1988) is a useful paper in this area. The authors describe a series of experiments in which they showed standard posterior-anterior thoracic radiographs to novices, intermediates and experts and invited them to describe what they saw. Their account provides both quantitative and qualitative data, and examines the interplay between cognitive and perceptual processing.

They note:

  • the speed and accuracy with which experts build a mental representation of the abnormal anatomy shown in the radiographs.

  • the speed with which experts invoke a likely schema to explain what they are seeing, and the way that such a schema subsequently guides both their perception and their reasoning.

  • the flexibility that experts exhibit to fine tune a schema to make it fit the findings as well as their ability to make finer discriminations than novices.

  • the fact that experts ``see things differently'' from novices; for example, experts regarded, and traced as abnormal, a larger area from a collapsed lung film than either novices or intermediates.

  • the fact that experts reason ``opportunistically'', incorporating new pieces of data into their diagnostic decisions.
Lesgold et al. (1988)

3.1.3 Psychomotor Skills

As with perceptual skills, effective hand-eye coordination and the ability to carry out procedures is of great importance all across the field. For an extensive theoretical review of the issues of learning and retention of motor skills in general, see Adams (1987). Here we will concentrate on surgical skills. For a much more domain-specific account of the problems of learning and teaching surgical skills (including the use of `Craft Workshops'), see Kirk (1996).

The evolution of surgical skills, insofar as these differ from the decision-making skills already discussed, has not received anything like the same degree of attention. So, for example, we tend not to find studies contrasting experts and novices.

Barnes (1987) provides a useful overview of some of the issues underpinning the development and the teaching of surgical skills. He mentions (by reference to Kopta, 1971) determinants of surgical skills including ``speed, accuracy, economy of effort, and adaptability''. Pointing to Lippert and Farmer (1984), he reminds us that psychomotor development has to take place alongside the cognitive and affective. He also characterises the surgeon as a problem-solver who in the preoperative phase must develop ``a systematic technique for selecting the most appropriate procedure'' (page 424), and in the postoperative phase must develop the ability to reflect on practice effectively. The literature on ``naturalistic decision-making'' (see below) characterises expertise in terms of problem recognition, and gives support to the idea of the expert (e.g. a surgeon) as someone who anticipates and plans for potential problems, rather than simply reacting to them when they occur; see e.g., Xiao et al. (1997) in the field of anesthesiology.

By reference to the literature on motor skills development in sports education, Barnes, who characterises surgery as ``the ultimate body contact sport'', emphasises two principles.

The first principle is to prevent learners developing faulty initial habits which are then very hard to unlearn. Given that much is learned in apprenticeship mode, poor role models can have a crucial effect.

The second principle is that ``skill retention correlates with the level of initial proficiency and not with practice''. While this cannot be used as an argument against practice, it does emphasise that getting the skill right early in training is very important. Indeed surgeons do get better with practice (see e.g., Blackwell et al., 1997), but this is hardly surprising.

Barnes goes on to argue for and to describe a number of microsurgical training laboratories through which guided supervision and practice can be provided.

An example of a non-surgical focus on psychomotor skills is provided by Kovacs (1997) who describes a methodology (in the area of Advanced Trauma Life Support -- ATLS) for developing procedure skills that has the following stages:

  1. Conceptualization
  2. Visualization
  3. Verbalization
  4. Practice-subcomponent, linkage, continuous
  5. Correction and Reinforcement
  6. Skill Mastery
  7. Autonomy

He emphasises the key point, in relation to item 5, that:

``...knowledge of results is required to learn, correct, and improve the performance of motor action. This principle is frequently violated in medicine where so many procedures performed by house staff go unobserved.'' (our emphasis) (Kovacs, 1997, page 389)

We return to this issue of supervision and feedback in Chapter 4 on Learning in Clinical Settings.

A similar focus on initial accuracy followed by later practice for speed is described by Smith et al. (1997) in their study of learning curves for fibreoptic nasotracheal intubation. They found that following initial training, it took on average 18 practices (under instruction) with real patients to reach a 70% criterion of completing the intubation in less than a minute. Not all procedures have to be practiced with real patients, as is shown by Derossis et al. (1998a) using a laparoscopic simulator, and more generally by Heppell et al. (1995) in their review of ten years' experience of a psychomotor skills laboratory.

3.1.4 Communicative Skills

The significance of communication in different zones of medical activity was discussed in Chapter 2. Two groups have been given most attention in the literature: medical students, though not with the priority many authors would like; and General Practitioners, for whom it has been a major concern. Theoretical discussion about communication has largely focused on Medical Interviews in GP surgeries or clinics, except for the specialist area of psychiatry, which we are not attempting to tackle. From a general medical perspective, Hampton et al. (1975) showed that in 82% of general practice consultations the diagnosis reached could have been made on the basis of history taking alone, without either a physical examination or any laboratory test. Hence communication skill was required for eliciting most of the important evidence. Balint (1957) suggested that most problems have a psychological element which needs to be explored; and that this was the most significant aspect of at least 25% of cases. Given that even physical symptoms of importance for diagnosis may not be volunteered in the early stages of a consultation, Balint's notion of a ``deeper diagnosis'' has acquired some credibility, and this has led to the use of a continuum from doctor-centred to patient-centred as perhaps the main dimension for analysing GP consultations. Byrne and Long's (1976) study of over 2000 GP consultations used a 7-point continuum and identified about 20 doctor-centred, 20 patient-centred and 8 negative behaviours as a framework for analysing interviews. They also distinguished six phases to the interview (see box below) and showed that consultations were particularly likely to go wrong if there were shortcomings in Phase 2 or Phase 4. Their conclusion was that a more patient-centred approach was more likely to elicit important information about the psychological and physical symptoms:

Doctor-Patient Consultation

  1. The doctor establishes a relationship with the patient.
  2. The doctor attempts to discover, or actually discovers, the reasons for the patient's attendance.
  3. The doctor conducts a verbal or physical examination, or both.
  4. The doctor, or the doctor and the patient, or the patient (in that order of probability) consider the condition.
  5. The doctor, and occasionally the patient, details treatment, or further investigation.
  6. The consultation is terminated, usually by the doctor.


Byrne and Long's (1976) six phases of a doctor-patient consultation (as described by Pendleton et al., 1984).

Sociologists have drawn attention to the role expectations of doctors and patients which facilitate or constrain communication, including the effects of social class (Bain, 1976, 1977; Pendleton and Bochner, 1980); Kleinman (1980) noted ritualistic parallels between traditional and `Western' healers. Several anthropologists have distinguished between disease, as a label given by doctors, and illness, as a broader concept defined in terms of the patient (Helman, 1981). This includes the response of the patient to a problem, how it affects the patient's behaviour or relationships, the patient's past experiences of illness and the meaning she gives to that experience (Pendleton et al., 1984). Levenstein et al. (1986) rightly emphasise integrating a patient-centred approach with the doctor's concern for differential diagnosis, as a purely Rogerian approach does not necessarily elicit more relevant information (Bensing and Sluijs, 1985).

Hence the view that the doctor's understanding of the problem must include an appreciation of the patient's own understanding, whether or not she perceives it to be accurate. Without such understanding patients are unlikely to understand what the doctor tells them or comply with this advice (Becker, 1979). This Health Belief Model, described in detail by Pendleton et al. (1984), is also much used in Preventive Medicine.

Empirical evidence in support of these theories is reviewed by Pendleton et al. (1984) who concluded that:

``Patients are more satisfied when the doctor discovers and deals with patients' concerns and expectations; communicates warmth, interest, and concern about the patient; volunteers a lot of information; and explains matters to the patient in terms that are understood.''

A comprehensive study by Bertakis et al. (1991) of 550 consultations by 127 physicians in the US (90% in internal medicine, 35% residents) concluded that patients were most satisfied by interviews that ``encourage them to talk about psychosocial issues in an atmosphere that is characterised by the absence of physicians' dominance''. Becker's (1979) review of compliance demonstrated that patients comply better when they believe they can have control over their health and when the advice given is consistent with their own health beliefs; and Korsch and Negrete (1972) showed that mothers leaving a paediatric clinic ``highly satisfied'' (40%) were three times more likely to follow the doctor's advice fully than those who were ``highly dissatisfied'' (13%). The main reasons for dissatisfaction were unfriendly behaviour, and the lack of information about the nature or cause of their child's illness. At a more general level, Fletcher and Freeling's (1988) review also concludes that most patients want more information than they are given.

The Toronto consensus statement issued by a meeting of researchers in this field (Simpson et al., 1991) reviewed evidence about the significance of doctor-patient communication, then advised on the teaching of communication skills as follows:

``To become effective communicators, physicians must master a defined body of knowledge, skills, and attitudes. Clinical communication skills do not reliably improve from mere experience. Examples of relevant areas of knowledge are psychiatry in relation to medicine (for example, diagnostic clues to depression, anxiety, somatisation problems) and the structure and functions of the medical interview. The skills of the medical interview are those of data gathering, forming and maintaining relationships, dealing with difficult issues (such as sexual history, breaking bad news, HIV), and imparting information; therapeutic skills and strategies are also necessary. These skills can be defined with behavioural criteria and can be reliably taught and assessed. Helpful attitudes include a belief in the importance of a biopsychosocial perspective. A physician's personal growth and self awareness are essential bases of effective communication.'' (Simpson et al., 1991, pages 1385-6)

Cognitive science concepts similar to those used in theories of diagnostic expertise have also been applied to doctor-patient consultations. Tannen and Wallat (1986, 1987) analyse transcripts in detail, but instead of using the wide repertoire of behaviours identified by Byrne and Long, they use the concepts of `register', `frame', `schema' and `script'. Their data set is a series of videotaped conversations in five different settings, involving various family members and medical professionals in a single paediatric case of cerebral palsy. The videotapes were also intended for training paediatric residents. In one social encounter involving a paediatrician, mother and child, the following phenomena were noted:

Having such a script is probably what enabled the paediatrician to navigate her way through this complex situation, though it also brought disadvantages. Later they decided that it would be better for parents to watch through a one-way mirror while the child is being examined, thus separating over time the examination and the consultation. Kinderman and Humphris (1995) have explored the practical implications of the overt incorporation of cognitive schemata and script development into the teaching of communication skills.

There is also evidence that doctors and dentists may use their expertise and other conversational methods to persuade patients to accept treatments favoured by themselves, thus (not necessarily with intention) diminishing the patient's power to choose. This complex self-awareness problem is most easily handled by analysis of recorded conversation in a safe context. Balint (1957) refers to it as the ``apostolic function'' of the doctor, and Anderson (1986) has produced similar evidence for dentists.

Another aspect of communication noted but not deeply studied among doctors or health care groups is teamwork. However, there is an extensive non-medical literature on team performance and training (Swezey and Salas, 1992) which needs addressing. One conclusion, for example, is that individual skill training will enhance overall team performance in situations of low task complexity; but in situations of high task complexity and high task organisation effective communication and coordination among team members are vitally important (Salas et al., 1992).

3.2 How Experts Do Decision Making

A different branch of the expertise literature, Naturalistic Decision Making, has focussed its attention on what experts actually do when they work. Much is made of the difference in emphasis compared to the laboratory studies described earlier. Many of the studies focus on military and industrial settings (e.g. the control of complex processes such as in power stations), but a number have also examined practical medical decision-making (e.g. in anaesthesiology).

``It is therefore necessary to move away from the traditional view of diagnosis as a cognitive problem-solving activity which establishes a complete scientific explanation of the cause of the malfunction before the operator chooses a course of action. Instead, a theory of diagnosis must be a theory of optimal behaviour, in which some control actions may be performed prior to information gathering, and information gathering may be curtailed before a complete picture has been established in the interest of maximizing utility.'' (Hoc et al., 1995, page 22)

The issues which distinguish this approach to understanding expertise are summarised by Zsambok and Klein (1997, page 5) as:

Much of this literature is concerned with the observation that experts have become attuned to the kinds of problem that they are likely to face and have developed strategies to deal with them. The issue then becomes one of recognizing the nature of the problem (``Recognition-Primed Decision Making'', Klein, 1989). It also involves monitoring the effectiveness of a largely pre-formulated way of dealing with it. Pre-formulation arises from the need to save time and make rapid decisions (not necessarily under stress). In a way this is reminiscent of the ``illness script'' mentioned above, but it also brings into play the dynamic factors in the above list (e.g. time stress).

Lipshitz (1993) provides an overview of various emerging theories of the decision-making of expert individuals in naturalistic settings. He identifies a number of common themes.

Dynamic Decision Making

A useful entry point to this literature from a medical point of view is the paper by Gaba (1992), who describes the issue of dynamic decision-making in the field of anesthesiology. He stresses the dynamic issues: (i) ``The pace of decision is determined externally ... events may occur frequently ... Some events cannot be avoided''; (ii) ``The system is complicated and has many interconnected parts''; (iii) ``There is uncertainty'': signals from instruments have to be interpreted and may be either weak or unreliable; and (iv) ``There is risk''. These factors underline the stressful nature of this work and the need for ways of assessing how decision-making in this area is affected by stress (Byrne et al., 1998; Byrne and Jones, 1997).

Gaba, like Lipshitz above, reviews the work of the main investigators in the field. For example, in reviewing the work of Woods (1990), he mentions dynamic decision-making biases such as (i) ``cognitive tunnel vision'' (where new data are coerced to fit a pre-existing and incorrect view of the situation); (ii) attending to surface issues rather than engaging with the underlying problem; and (iii) `micawberism' -- believing that everything will work out OK in the end, despite all the contra-indications.

Gaba goes on to describe some of his own experiments in the area and outlines a model of dynamic decision making. In simulator experiments (in realistic settings with realistic instrumentation, but using an intubation/thorax mannequin), he found large variations in performance among subjects and large variations across incidents. In agreement with the literature, he found that experts were better at anticipating problems and were more willing to ``interact forcefully'' with the surgeon.

Finally, Gaba argues that anaesthetists ought to be trained explicitly in crisis management in a way similar to the training given to pilots. That is, the training should explicitly address the way that stress, risk, complexity and lack of time can lead to decision-making biases, such as cognitive tunnel vision, and should train anaesthetists in strategies to combat these biases.


At a much more applied level, Bogner (1997) offers a brief introductory account, from an American perspective, of this approach to understanding the pressures on decision makers in the area of healthcare. She reviews work that has examined time pressure, fatigue and stress as they impact on emergency surgery and anesthesiology: for example, an increase under stress in anesthesiologists' reliance on technology. Other factors considered are the pressure to reduce costs, the variability in the quality of feedback, the interface with the many (complex) technologies in use, and the problems of shared responsibility (e.g. with surgeons) and communication amongst the team associated with the case.

3.2.1 The Sense of Expertise

Given the pressures under which medical expertise is typically exercised, it is important that experts develop self-confidence and a ``sense of equanimity'' (Ytterberg et al., 1998). For example, Rhoton et al. (1991) found that non-cognitive factors (such as conscientiousness and confidence) were strongly correlated with overall performance in anesthesiology departments. They urge that educators ``reconsider the lack of emphasis historically placed on the noncognitive aspects of performance'' (page 361). Ytterberg et al. found that giving students a chance to practice skills (such as history taking) via a station-based assessment, in a situation where the students knew that they would not be `failed', improved their self-confidence in the skills assessed. They also found that this self-confidence was well-founded, in that it was correlated with scores in the assessment. The educational lesson is that this kind of assessment instrument can be used to give students feedback about their progress, and knowing that they are making progress improves self-confidence.

Tracking changes in self-confidence over longer time periods is more problematic. Sim et al. (1996) used a critical incident technique to compare the responses of 18 doctors in their first 6 months of general practice with those of the same doctors 12 to 18 months later, as they completed their advanced or mentor terms in the Royal Australian College of General Practitioners Training Program. In the interviews ``doctors were asked to describe incidents and to identify skills either present or lacking, feelings and lessons learned''. The researchers found that

``increasing clinical practice ...resulted in an increase in the positive feelings associated with making a difficult diagnosis and dealing appropriately with more difficult management problems without immediate referral to a specialist.

...

Doctors in the first interviews also reported feeling pressured for time, uncertain or anxious about possible missed diagnoses and inadequate management, and unsupported by supervisors and practice staff. Hardly any of these issues were mentioned in the follow up interview. Although doctors' levels of anxiety appeared less, they more frequently reported feelings of guilt over missed diagnoses or less than perfect management. With increased maturity the doctors appeared to become more `tuned in' to the subtle interpersonal issues of the consultation and were more self critical regarding those episodes of miscommunication which result in a less than ideal consultation.'' (Sim et al., 1996, page S64)

3.3 What Experts Ought to Do

There are two overlapping approaches to the search for better and more consistent clinical decision-making. The first tends to focus on quantitative models of decision-making: for example, weighing evidence, estimating probabilities of outcomes and computing utility values. This approach naturally leads to the development of systematic (typically computer-based) methodologies into which the doctor can feed information and from which he or she can obtain advice that can be weighed against other evidence and test results. See the collection edited by Tan and Sheps (1998) for a general overview of this approach.

A second approach focuses rather more on the provision of the best available evidence to assist doctors in their decision-making. Information Technology is involved here much more in its role as a repository and facilitator of access to information, than as a device to weigh information.

3.3.1 Clinical Decision Analysis


An excellent introduction to decision analysis is provided by Lilford and Thornton (1992). They distinguish structured patient history taking from diagnostic systems and discuss the value, and the lack of general acceptance, of expert systems. They provide several examples of how to carry out a clinical decision analysis in terms of the treatment with the `greatest expected utility'. They are also frank about the difficulties of carrying out a fully rigorous decision analysis in dealing with an individual patient. However, they argue for its value in determining general treatment policy, in focusing research and in dealing with issues of communities as well as individuals.
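The core calculation in such an analysis is the expected utility of each option. A minimal sketch, using entirely invented probabilities and utilities (none taken from Lilford and Thornton), might look like this:

```python
# Minimal sketch of choosing the treatment with the greatest expected
# utility. All probabilities and utilities below are invented for
# illustration; a real analysis would elicit them from evidence and
# from the patient.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

# Hypothetical options, each a list of (probability, utility on a 0-1 scale).
options = {
    "surgery":      [(0.85, 0.95), (0.10, 0.40), (0.05, 0.00)],
    "conservative": [(0.60, 0.90), (0.40, 0.55)],
}

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):.3f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("greatest expected utility:", best)
```

The arithmetic is trivial; the hard and contested parts, as Lilford and Thornton note, are eliciting honest probabilities and utilities for an individual patient.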


Dowie (1993), in a book edited by Llewelyn and Hopkins (1993), distinguishes between a number of questions that can be asked about clinical judgement and decision-making:

  1. How are clinical judgements and decisions made? (To some extent this question is addressed in earlier parts of this chapter.)
  2. How well are clinical judgements and decisions made?
  3. How could they be made?
  4. How well could they be made?
  5. How should they be made?

Dowie concentrates on question 4, and reminds us that many studies have shown that in the area of clinical decision-making `we could do better'. He goes on to argue that one route towards improvement is the development and use of system-aided judgement and decision-making.

In support of this view de Dombal (1993) reviews a number of studies that have demonstrated improved clinical decision-making performance, including a study that ``showed that when findings of detailed studies were made available to inexperienced staff performance improved in a number of hospitals'' (emphasis ours). For example, initial diagnostic accuracy improved from 45.6% to 65.3% and post-investigative diagnostic accuracy improved from 57.9% to 74.2%. It has to be mentioned that this result is much more positive than that of Elstein et al. (1996), described in Chapter 7 on Information Technology.

The book edited by Llewelyn and Hopkins (1993) gives examples of how decision trees can be constructed, including methods to assess (numerically) the probability of every choice branch and methods to assess (numerically) the utility of each outcome state, as well as how to carry out a sensitivity analysis of the effect of varying the judgements. For instance, Hoellerich and Wigton (1986) provide a detailed example of the development of a decision rule for diagnosing pulmonary embolism using clinical findings.
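The sensitivity analysis mentioned can be sketched in a few lines: hold everything else fixed, vary one uncertain probability, and find the threshold at which the preferred branch of the tree changes. The two-branch tree and all the figures below are hypothetical:

```python
# Sketch of a one-way sensitivity analysis on a hypothetical two-branch
# decision tree: 'treat now' versus 'investigate first'. We vary p, the
# probability that immediate treatment succeeds, and find the threshold
# above which 'treat now' has the higher expected utility. All numbers
# are invented for illustration.

U_SUCCESS, U_FAILURE = 1.0, 0.2   # utilities of the 'treat now' outcomes
U_INVESTIGATE = 0.7               # expected utility of investigating first

def eu_treat(p):
    return p * U_SUCCESS + (1 - p) * U_FAILURE

# Scan p in steps of 0.001 until the preferred branch flips.
threshold = next(k / 1000 for k in range(1001)
                 if eu_treat(k / 1000) > U_INVESTIGATE)
print(f"'treat now' preferred once p(success) exceeds about {threshold:.3f}")
```

Analytically the flip occurs where p * 1.0 + (1 - p) * 0.2 = 0.7, i.e. at p = 0.625; the point of the exercise is to see how robust the preferred choice is to errors in the probability judgements.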

A more detailed account of models of clinical decision-making is provided by Dowie and Elstein (1988). This focuses on rational decision-making under uncertainty and its theoretical underpinnings. They include research on statistical approaches, including some which claim that in some decision-making contexts simple statistical models perform better than experts. They acknowledge that the relevance of these results to clinical decision-making tasks in realistic contexts is disputed:

``Clinicians have great difficulty accepting that their expensive and complex technical knowledge does not necessarily guarantee that they can do better, in most clinical cases that come to them, than something as simple as ``add up how many cues are in favour of each possible judgement and go with the highest score''.'' (Dowie and Elstein, 1988, page 10)

Of course, part of the issue resides in the crucial word ``most'', in the sense that dealing as effectively with the unusual as with the usual, and knowing the difference between the two, marks out the difference between a simple rule of thumb and a more considered decision procedure.
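The ``add up the cues'' rule quoted above is what the judgement literature calls a unit-weight linear model, and it really is as simple as it sounds. A sketch, with hypothetical cue names and diagnoses:

```python
# The simple tally rule from the quotation: count how many of the
# observed cues favour each candidate judgement and go with the highest
# score. The cue table and cue names are hypothetical placeholders.

def tally(observed, cue_table):
    """Return (best judgement, score per judgement) for a set of observed cues."""
    scores = {judgement: len(observed & cues)
              for judgement, cues in cue_table.items()}
    return max(scores, key=scores.get), scores

cue_table = {
    "diagnosis_A": {"fever", "cough", "raised_white_cell_count"},
    "diagnosis_B": {"fever", "rash"},
}

choice, scores = tally({"fever", "cough"}, cue_table)
print(choice, scores)
```

The weakness lies precisely in the unusual case: the tally treats every cue as equally diagnostic and has no way of noticing that a case is atypical.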

The work of Moskowitz et al. (1988) provides a direct empirical comparison between the decision-making of experts (using think-aloud protocols based on case descriptions) and decisions taken according to decision analysis theory. They observed that

``The experts did not formulate a global outline of their decision, but chained together a sequence of decisions based on available and incomplete information. Despite effective and efficient problem solving, the clinicians used numeric terms only as symbolic representations of likelihood, used limited information in choosing amongst alternatives, and dismissed the possibility that a less conventional strategy, empiric therapy, might yield equivalent outcome.'' (Moskowitz et al., 1988, page 435)

It is argued that doctors often fail to carry out their decision-making in the hypothetico-deductive manner prescribed by these theories, instead using less rigorous strategies (Magnani, 1992) such as exploiting circumstantial evidence (see e.g., Boreham et al., 1996) or heuristics (see e.g., Boreham, 1989). In that case the question remains as to how decision support aids, which do work on a rigorous basis, can be exploited effectively by those who have in the end to make, and bear the responsibility for, clinical decisions. Issues here include the degree to which a decision-making aid, with its own particular methods and foci for information gathering about the case, adds its own biases to the decision-making process (see e.g., Kushniruk and Patel, 1998).

3.3.2 Evidence-Based Medicine

``Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.'' (Sackett et al., 1996).


Sackett et al. (1996) are at pains to point out that evidence-based medicine is not a ``cook-book'' approach but involves each doctor developing a personal methodology for seeking out and adopting best practice through a lifelong process of continuing medical education. However, it is clear that organizational and resource policies can help or hinder this approach, for example by collating databases and vouching for the quality of their contents, as with the Cochrane Database of Systematic Reviews (CDSR). It clearly makes sense to make it easier for doctors to access the information they need at the point at which they need it, while recognizing that their ability to extract, judge and utilize this information will depend on their degree of experience -- hence it is not a ``cook-book'' approach.

As developed by Sackett et al. (1996), evidence-based medicine is a methodology for good decision-making practice. As such, their book contains chapters on ``How to ask clinical questions you can answer'', ``Searching for the best evidence'' and ``Critically appraising the evidence'', on how the doctor can ``apply this valid, important evidence in caring for your patient'', and on self-evaluation. Each chapter also has a section devoted to the teachers of these topics as well as to their learners. Given the huge volume of medical research available, it is not surprising that the chapter on critically appraising the evidence is the longest in the book.


The contrast between the old and the new paradigm is brought out clearly in Evidence-Based Medicine Working Group: McMaster University (1992). They characterise the evidence-based medicine paradigm as follows:

  1. ``Clinical experience and the development of clinical instincts (particularly with respect to diagnosis) are a crucial and necessary part of becoming a competent physician. Many aspects of clinical practice cannot, or will not, ever be adequately tested. Clinical experience and its lessons are particularly important in these situations. At the same time, systematic attempts to record observations in a reproducible and unbiased fashion markedly increase the confidence one can have in knowledge about patient prognosis, the value of diagnostic tests, and the efficacy of treatment. In the absence of systematic observation one must be cautious in the interpretation of information derived from clinical experience and intuition, for it may at times be misleading.

  2. The study and understanding of basic mechanisms of disease are necessary but insufficient guides for clinical practice. The rationales for diagnosis and treatment, which follow from basic pathophysiologic principles, may in fact be incorrect, leading to inaccurate predictions about performance of diagnostic tests and the efficacy of treatments.

  3. Understanding certain rules of evidence is necessary to correctly interpret literature on causation, prognosis, diagnostic tests, and treatment strategy.''
(Evidence-Based Medicine Working Group: McMaster University, 1992, page 2421)

The authors go on to describe how the internal medicine program at McMaster University has designed its programme to teach these principles (largely conforming to the rallying call of Bulger, 1993). They correct various misinterpretations of evidence-based medicine, e.g. that it ignores clinical experience and clinical intuition. They describe barriers to teaching the approach (e.g. ``for many clinical questions high quality evidence is lacking''), and barriers to practicing the approach (e.g. ``economic constraints and counterproductive incentives''). On the former issue, Poses et al. (1997) suggest, in relation to judgements by generalists and specialists of the risks of invasive cardiac procedures, that one possible explanation is that ``Disagreements about the risks of procedures may arise from the paucity of published data or from an oversupply of confusing, contradictory data.'' Among other issues of support, they point to the need for effective computer-based technology. The consequent increasing reliance on electronic forms of information storage and retrieval (including patient records) raises issues about the effective design of systems for use by doctors in their decision-making role (see e.g., Elson et al., 1997).


3.4 The Role of Academic Knowledge

Perhaps the least explored of the new approaches concerns the issue of how theoretical knowledge is used in practical situations. The use of theoretical knowledge appears to be largely a tacit process, but it is unclear whether this lack of clarification is the result of researchers' neglect or of some inherent property. There is some evidence that use of theoretical knowledge increases at the consultant level; but could the delay be shortened by teaching specifically aimed at linking theory with practice, as suggested by Eraut et al. (1995)? There is also evidence that theoretical knowledge is more deeply ``encapsulated'' in experts and is only used when other methods fail (see e.g., Schmidt and Boshuizen, 1993). For example, van Leeuwen et al. (1995) found a fall-off in general practitioners' scores on a written knowledge test from a high point 5-10 years after certification, except in the area of chronic illness. Their study was cross-sectional rather than longitudinal, and the results are hard to interpret reliably. It might be that the test favoured the more recently qualified in the kinds of knowledge it was investigating. It could also be that the knowledge tested was more deeply ``encapsulated'' in the more experienced GPs.

The term theoretical knowledge is used here to refer to propositional knowledge in the natural and social sciences which can be found in textbooks and the research literature. This constitutes the central core of what Eraut (1997) describes as Type A knowledge -- public knowledge which is (1) subject to quality control by editors, peer review and debate and (2) given status by incorporation into educational programmes, examinations and qualifications. Type A knowledge includes propositions about skilled behaviour, but not the skills themselves. It is confined to knowing that and excludes knowing how. Professional knowledge, however, is defined as a form of Type B knowledge -- personal knowledge which is categorised by the context and manner of its use, rather than its source or epistemological status. It is that knowledge which professionals bring to their practice that enables them to think and perform on-the-job. Thus Type B knowledge incorporates not only propositional knowledge (in the form in which it is used) but also procedural and process knowledge, tacit knowledge and experiential knowledge in episodic memory. Skills are treated as part of that knowledge, thus allowing representations of competence, capability or expertise in which the use of skills and propositional knowledge are closely integrated.

3.4.1 Transfer is a learning process

The technical terms most frequently associated with this process are ``transfer of knowledge'' and ``application of theory''. The implicit assumption is that one carries scientific knowledge across from an education context to a practice context, then simply applies it. Yet there is a large body of evidence (1) that knowledge may not be carried across by many professionals, because they do not recognise or appreciate its relevance to a particular case or problem, and (2) that they may not apply it because they do not know how to do so. Instead of seeing transfer as an event in which a person suddenly becomes able to apply knowledge acquired in one context to a second, different context, we have to see it as a learning process in which a person not only carries knowledge from one context to another but learns how to apply that knowledge in the new context. Knowledge is acquired in a particular context and remains situated in that context until it can be transformed and resituated in another context; and the extent of this further learning will depend both on the degree of difference between the two contexts and on that person's preparedness and prior experience of successful transfer. Even when transfer involves a sudden flash of insight, considerable learning effort may be needed to convert that insight into usable knowledge. However, programmes for professional formation seldom recognise the learning effort required for the transfer of knowledge. Support for transfer is rarely provided, even though trainee and novice professionals are ill-prepared to tackle it on their own.

Transfer of knowledge is most difficult when the contexts and modes of learning are very different. For example, moving from a university context to a professional practice context involves changing from an environment in which Type A knowledge is dominant to one in which Type B knowledge is dominant. Moreover, resituation of scientific knowledge will require not only transformation of that knowledge but also gaining sufficient understanding of the new context to know what kind of transformation is needed. This problem is further exacerbated by the likelihood that other areas of scientific knowledge and other forms of professional knowledge will also be needed for appropriate practical action; and these different areas and forms of knowledge will have to be integrated somehow. This is a more complex and challenging problem than learning to use scientific knowledge in well-defined situations in familiar academic contexts.

Attempting to deal with the problem of transfer has been one of the motivating forces behind the development of problem-based curricula.

3.4.2 Problem-based learning

Patel and her colleagues have explored the role of basic science in the curriculum of medical schools. They characterise two broad approaches -- the ``science-oriented, in which basic science is taught independently of the more clinical aspects of medicine'' and the ``clinically oriented, in which basic science is taught within some appropriate clinical context'' (Evans and Patel, 1989, page 53). They report a series of experiments (see e.g., Patel et al., 1986, 1988) in which subjects of varying medical expertise (from first year students to expert researchers) had to read and explain a description of a medical case.

They found that ``the only group that shows reliance on basic science is the intermediate-novice group. Experts and advanced-novices use clinical-situation models, which can be elaborated to accommodate basic science; though the novices are less selective in their application of basic-science knowledge. Beginning-novices cannot use basic-science information.'' (page 105).

In their view, basic science and clinical science are not related hierarchically, with basic science the more fundamental and offering explanations at a finer level of granularity. Rather, the two are to be regarded as separate domains, providing different kinds of explanation and, as sciences, different kinds of generalization. In a similar, later study of medical students, Patel et al. (1990) found that basic science was utilized more effectively as a framework within which to construct explanations than as a method for facilitating problem-solving.

A direct comparison of the science-oriented and clinically oriented approaches in two medical schools is provided by Patel et al. (1991). They compared 54 students from McGill -- following a conventional curriculum (CC) where the basic sciences are taught first in separate disciplines -- with 54 students from McMaster -- where there was an established problem-based approach (PBLC). The methodology was as above, namely examining the explanations of clinical cases, read by students in different years of study, to see how they integrated biomedical information. There were two conditions: in one, basic science material was provided prior to the clinical problem; in the other it was provided after the clinical problem.

They found that diagnostic accuracy increased with seniority of the student for both medical schools. In particular

``The PBLC students had learned a systematic process of thinking that was explicitly taught. The predominance of backward reasoning, the systematic use of clinical information, and the tendency to formulate extensive elaborations are all consistent with the notion that the students were generating diagnostic explanations through the use of hypothetico-deductive reasoning. The fact that these patterns appeared with beginners and did not change with level of training supports the notion that they reflected reasoning strategies taught at the beginning level and reinforced throughout the curriculum.

...

Since systematic reasoning is not taught in the conventional curriculum it was somewhat surprising to see a systematic method of reasoning emerge, characterised by a relative prevalence of forward reasoning and a tendency to refrain from extensive elaboration. The CC students also showed a pronounced tendency to explain the case on the basis of a single diagnosis rather than an extensive list of differential diagnoses.'' (Patel et al., 1991, page 387).

Patel et al. also point out that the PBLC students tended to produce more elaborate explanations, but also to generate errors within those elaborations.

Albanese and Mitchell (1993) cite the above study along with over a hundred others in a detailed meta-analysis of studies (1972-1992) of the effects of problem-based learning. Their overall conclusions are:

``Compared to conventional instruction, PBL, as suggested by the findings, is more nurturing and enjoyable; PBL graduates perform as well, and sometimes better, on clinical examinations and faculty evaluations; and they are more likely to enter family medicine. Further, faculty tend to enjoy teaching using PBL. However, PBL students in a few instances scored lower on basic sciences examinations and viewed themselves as less well prepared in the basic sciences than were their conventionally trained counterparts. PBL graduates tended to engage in backward reasoning rather than the forward reasoning experts engage in, and there appeared to be gaps in their cognitive knowledge base that could affect practice outcomes.'' (Albanese and Mitchell, 1993, page 52)

Because only three of the studies examined by Albanese and Mitchell dealt with performance assessment of graduates, they felt unable to draw firm conclusions but urged that further research was needed in this area.

In a larger and more carefully controlled study than that of Patel et al. (1991), Schmidt et al. (1996) compared 612 Dutch students diagnosing 30 epidemiologically representative clinical case descriptions. The students were drawn from three Dutch medical schools -- one teaching a conventional curriculum, another taking a problem-based approach and a third a systems approach that ``integrates the biomedical and clinical sciences around major organ systems. Students engage in patient demonstrations and small-group training sessions in which knowledge previously acquired is applied to relevant clinical cases.'' (page 660). Unlike Patel et al. (1991), these students were not exposed to basic science material as part of the study. Schmidt et al. found that, in all three schools, diagnostic performance improved monotonically with students' level within the school. They found a significant effect of curriculum type on diagnostic performance, as well as an interaction between curriculum type and year of study. A post-hoc analysis showed that students taught the conventional curriculum performed more poorly than the other two groups (the problem-based and the systems-based).

The authors surmise that the reason for the differences in outcome may be that:

``the problem-based and the integrated curricula [i.e. the systems-based] both offer subject-matter to students in an integrated fashion and that students are encouraged to process the information in an active way through small-group discussion. Thus, subject-matter integration and active processing seem more important factors in attaining proficiency in diagnostic reasoning than the amount of self-directedness in the curriculum. ... In addition, seeing paper patients or their real-life equivalents seems to be important.'' (Schmidt et al., 1996, page 663)

Results conflicting with the above, but using ratings of clinical supervisors, were found in a study of 485 Australian graduates. ``Graduates from the problem-based medical school were rated significantly better than their peers with respect to their interpersonal relations, `reliability', and `self-directed learning'. Interns from one of the two traditional New South Wales medical schools had significantly higher ratings on `teaching', `diagnostic skills' and `understanding of basic mechanisms'.'' (Rolfe et al., 1995, page 225, our emphasis)

How far the different courses, or the different methodologies used in the Dutch and the Australian studies, contribute to their conflicting results on diagnostic skills is difficult to assess.

3.5 Summary

Theories of expertise developed in different contexts using different research techniques may emphasise different aspects but do not greatly differ in their conclusions. Key features include: the importance of case-based experience; the rapid retrieval of information from memory, attributable to its superior organisation; the development of standard patterns of reasoning and problem-solving; quick recognition of which approach to use and when; awareness of bias and fallibility; and the ability to track down, evaluate and use evidence from research and case-specific data. Understanding the nature of expertise is important for self-monitoring one's use of heuristics and possible bias, for sharing knowledge with others and for supporting other people's learning. It is also critical for understanding the respective roles of clinical experience and evidence-based guidelines.

---------------------------------------------------------

Benedict du Boulay, DOH Report pages updated on Friday 9 February 2001