

6. Continuing Medical Education and Lifelong Learning

An Ontario survey of physicians (Davis et al., 1983) distinguished between two types of CME activity: (1) those in which physicians were able to participate locally, within their community or practice settings, and (2) those of a more formal nature, often requiring travel. Activities reported by over a third of the respondents are listed below:


Table 6.1: Types of CME activity: informal (top), formal (bottom).

Informal, local, community-based CME activity
Reading journals 98.8%
Informal consultants 83.5%
Reading texts 76.0%
Attending rounds 72.9%
Using drug company materials 42.5%
Using AV materials 37.4%

Attendance at formal or distant CME programmes
Scientific sessions 71.0%
Formal hospital events 52.1%
Meetings of local medical societies 44.6%
Medical school CME activities 43.4%
Speakers programmes organised by drug companies or other agencies 41.3%


Davis et al. (1994) describe this pattern as still typical ten years later, though a few newer methods were also beginning to feature by this time.

Meanwhile, a structured interview study of General Practitioners in Wales (Owen et al., 1989) revealed that over 90% read journals and medical papers, and that books were extensively purchased for personal use (76%) or the practice library (76%). However, ``reading medical literature'' just failed to reach their top five educational activities. The percentages rating educational activities as very valuable (1 on a 1-5 scale) showed a strong preference for informal consultation and discussion:


Table 6.2: Sources of information.
Sources of Information
Contact with partners, such as practice meetings and discussions 63%
Contact with patients 43%
Practice meetings with health visitors, social workers, district nurses 31%
Postgraduate meetings, courses and symposia at local hospitals 29%
Informal hospital input 25%


54% had carried out performance reviews within their practices and 46% held their own educational meetings (33% at least monthly). Their wish for more contact with other groups was demonstrated by 74% being in favour of non-medical members of the primary health care team being involved in CME for GPs, and by 70% favouring joint educational activities with hospital doctors.

Four years later a telephone survey of 111 GPs (response rate 63%) asked respondents to state the most important influence on the development of their practice of medicine (Drage et al., 1994), eliciting a very different kind of response (the total of 156% arises because dual responses were counted twice):


Table 6.3: Influences on development.
Influences on development
Education events 37%
Colleagues 29%
Reading 27%
In-practice meetings 24%
Experience 8%
Several factors 8%
1990 contract 6%
Consultant letters 5%
Other matters 12%

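The arithmetic behind the 156% total is worth spelling out: when each respondent may name more than one influence, category percentages are computed against the number of respondents, not the number of mentions, so they need not sum to 100. A minimal sketch with invented responses (not the survey data):

```python
from collections import Counter

# Hypothetical multi-response data: each respondent names one or more influences.
responses = [
    {"Education events", "Reading"},
    {"Colleagues"},
    {"Education events", "Colleagues"},
    {"In-practice meetings"},
]

counts = Counter(item for r in responses for item in r)
n = len(responses)  # denominator is respondents, not mentions
pct = {k: 100 * v / n for k, v in counts.items()}

print(pct)                # each category is at most 100%
print(sum(pct.values()))  # but the total exceeds 100% whenever dual responses occur
```

Here two of four respondents gave dual responses, so the percentages sum to 150%, just as the survey's dual responses push its total to 156%.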

The increased influence of educational events can be attributed to greater attendance following the introduction of a postgraduate education allowance (PGEA) into GP contracts (contentious because it was funded by reducing seniority payments, leaving no net gain in income). However, we may be dealing with a bimodal distribution because, when later asked to identify any particular education event(s) that had changed the way they practise medicine, only 54% were able to respond positively.

Specific questions about five different aspects of change in their practice in the last 3-4 years yielded positive response rates as follows:


Table 6.4: Aspects of change.
Aspects of change
Changes in practice organisation 90%
Changes in health promotion 89%
Changes in treatment (including drugs) 86%
Changes in diagnosis and investigation 65%
Changes in doctor-patient relationship 54%


The first two aspects are linked to new contracts and financial arrangements: 73% attributed changes in health promotion to financial arrangements, though greater awareness (27%) probably affected performance (31% integrating health promotion with consultations and 10% the reverse). Changes in the doctor-patient relationship included an almost equal number of positive and negative responses, the latter usually attributed to deteriorating conditions for work, especially time pressures. The diagnosis and investigation responses largely concerned increases in investigations (ultrasound 18%, gastroscopy 17%, blood investigations 14% being the most common) and referrals (24%), though some decreases were also reported. The main reasons given were improved accessibility (29%) and patient demand (24%). Only the changes in treatment were attributed mainly to CME, as shown below:


Table 6.5: Sources of information leading to changes in treatment.
Sources of Information Leading to Changes in Treatment (n=95)
Journals 67%
Educational meetings 43%
Pharmaceutical reps 30%
Local consultants 28%
Cost/audit 26%
GP colleagues 21%
Patient pressure 6%
Other sources 19%


Specific treatment changes mentioned by 10 or more doctors included ACE inhibitors (54%), treatment of asthma (24%), anti-depressants (17%), antibiotics (17%) and treatment of GI disease (10%).

A more recent Welsh study by Allery et al. (1997) interviewed a random sample of 50 general practitioners and 50 consultants (response rate 77%) about specific changes they had made in the preceding year in four key areas of practice: management of a common clinical condition, prescribing, referral and use of investigations. Reasons for making these changes were then elicited and classified. Each group provided an average of 3.6 examples, with GPs giving about 3.2 reasons for each change and consultants 2.8 reasons. The distribution of these reasons among the eight most cited categories is given below, by type of doctor and by category of change.


Table 6.6: Citations by category.
Percentage of citations for each category
Type of Reason                 GPs  Cons  Manage  Prescribe  Refer  Investigate
Organisation                    19    17      18         13     12            9
Education                       14    21      24         20      5           16
Contact with professionals      14    12       9         13     14           16
Patient-centred                 11     8      12          7     12            8
Technology/tests                 6    14       4          0      4           28
Economic                        11     6       6         22      2            3
Pharmacology/pharm. companies    8     9      13         18      2            0
Clinical experience              9     6       7          5     12            9


GPs were twice as likely to mention cost factors; and consultants were twice as likely to mention changes in technology/tests. Patient-centred changes were more likely to be ``patient led'' for GPs and ``patient need'' related for consultants. Professional contacts were equally divided for GPs between consultants and other GPs, while 72% of consultants' contacts were with other consultants: a total of 14% of these contacts were with non-medical professionals. The breakdown of reasons within the ``education'' category was as follows:


Table 6.7: Reasons for citation within the ``education'' category.
Educational category (number of reasons cited)    GPs  Consultants
Scientific or medical journals                     13           36
Medical newspapers                                 17            0
General press                                       5            3
Other/unspecified literature                        4           21
Attendance at scientific meetings/conferences       1           20
Postgrad/clinical meetings, GP refresher courses   25            8
Supervision of trainees                             1            0
Research                                            4            6
Audit                                               4            6
Disease management protocols/guidelines             6            2
TOTAL                                              80          102


Davis et al.'s (1995) review of randomised controlled trials in CME found 99 such trials completed by the end of 1994, incorporating both primary and secondary interventions. A third of these trials included residents in the target population; three quarters took place in ambulatory settings -- private offices, care centres or clinics. 62% showed an improvement in at least one major outcome related to either physician performance or health care. Given the low criterion and the likelihood that only the more carefully planned activities were submitted to RCTs, this percentage is remarkably low. However, 79% of the interventions using three or more strategies showed positive outcomes.

Wensing et al.'s (1998) review of research on implementing guidelines and innovations in general practice confirms the effectiveness of multi-faceted interventions; it also noted that ``many ineffective interventions involved the dissemination of educational materials or the provision of a short education programme'' (page 963). The likelihood of positive outcomes is also increased by management support or the ``social influence'' of well-respected colleagues, a finding consistent with other literature on the dissemination of innovations (Rogers, 1995b). The cultural significance of hospital-based interventions being targeted at GPs and rejected as inappropriately didactic also has to be considered (Singleton and Tylee, 1996). Our discussion will begin with evidence on the effectiveness of CME courses, then continue with a further section on other types of intervention.

6.1 Effectiveness of CME Courses

Davis et al. (1995) also concluded that short (1 day or less) CME events usually bring about little change. Although there are a few examples of very short courses focused on simple practical skills leading to positive outcomes -- Awh et al. (1991), for diabetic retinopathy (4 hours), and Donnelly et al. (1996), for oroscopy (2 hours) -- most reported successes are of courses longer than 1 day. Some use pre-test/post-test evaluations rather than control groups. But in the absence of plausible alternative explanations for changes in physicians' competence it is reasonable to attribute these outcomes to CME events. Otherwise the RCT criterion would exclude a significant amount of evidence on the appropriateness of educational methods and the length of CME interventions. Four recent studies with GPs help to illustrate these and other issues.

Tissier and Rink (1996) report that simulated tissue can be an adequate substitute for live tissue when teaching minor surgical procedures, even though it is not very realistic. Their 2-day course was taken by 6 groups of doctors, totalling 52 GPs and 92 general practice trainees. It produced both knowledge gains and high levels of confidence in most of the procedures performed on the course, with joint injection being rated by 36% as its most important aspect. But a follow-up evaluation after 9 months (Rink and Tissier, 1996) showed that confidence in joint injections had fallen (with the exception of the shoulder joint), while confidence in other procedures had remained high. This could have been due to lack of practice, because only a third of the attending trainees were now in practices offering a minor surgery service. The authors suggest that the course was ill-timed for those trainees who could not practise the procedures soon afterwards. The difficulty of setting up control groups was also confirmed when it emerged that 40% of their control group (not randomised) reported having had some minor surgery training elsewhere. The programme itself was also improved in the light of feedback, so the intervention did not remain unchanged; another laudable practice which interferes with the aim of the research!

An Australian study (Girgis et al., 1995) evaluated a skin cancer training programme for GPs of comparable length (3 sessions of 3-4 hours). The first session was an illustrated lecture/discussion on the epidemiology of melanoma, different forms of skin cancer and management options. The second involved a visit to a melanoma unit to accompany a specialist surgeon in reviewing new patients and follow-up examinations, and to familiarise themselves with health promotion literature and screening protocols. The third was conducted in a private outpatient clinic and focused on both diagnosis and surgical techniques for excising skin lesions. Both experimental and control groups already had high ratings for ``adequacy of excision'' (a needs assessment problem for the providers?); and both increased the number of excisions performed by a similar amount (possibly caused by the attention effect of the pre-test serving as a reminder). The percentage of patients diagnosed rose significantly for the intervention group; and the diagnosis of slides improved from 53% to 67%. But the accuracy of this diagnosis, as judged by pathology reports, remained at 60%. The authors conclude that changes in knowledge resulted from the CME course but were not translated into changes in practice. Though this is arguable for diagnosis and excision, it certainly applies to the screening rates, which also remained unchanged. Perhaps the treatment agenda outshone the prevention agenda - a common problem, even with GPs?

Carney et al. (1995) conducted a randomised controlled trial to assess the effects of different educational techniques on the cancer control skills of 57 physicians. Methods used included interactive small-group discussion, role playing, videotaped clinical encounters, lecture presentations and trigger tapes. Performance was measured using unannounced standardised patients (see Chapter 8) with hidden microphones, who visited one year after the programme. Significantly higher performance was found for breast cancer risk-factor determination and smoking cessation counselling: these were the areas where the CME programme had used techniques that rehearsed, or portrayed and discussed, clinical activities.

Another 10 hour training programme concerned the assessment and management of depression (Gask et al., 1998). This programme, designed for the Defeat Depression Campaign, included both specially developed video material and course material for follow-on workshops. All five sessions had 1 hour of presentation, viewing and discussion, followed by 1 hour of role-play consultations which were videotaped and discussed. Assessment before and after the course comprised (1) a consultation with an actor role-playing a depressed patient, (2) a semi-structured interview and (3) a Depression Attitude Questionnaire. Although there were changes in interviewing style and the doctors gained in confidence, there were no changes in the two key measures of systematic assessment. The actor-patient ratings indicated improved explanations of depression and better regulation of its management. There was no change in the use of cognitive intervention, which the trainers had observed as causing confusion during the training. Therefore the package was revised to give greater emphasis to assessment and to reduce the time spent on cognitive intervention. This change could be interpreted as an implicit recognition that the course was too ambitious for the time available; but this was not overtly discussed, so it reads more like a ``common sense'' modification. None of the other evaluations reviewed discussed the feasibility of achieving all the objectives in the allocated time, nor whether certain more important objectives needed to be accorded greater priority. Indeed Davis et al. (1994) reported that ``no studies were found comparing outcomes by varying the course duration''.

Reports of courses in communication skills show greater awareness of the time dimension. Evans et al. (1987) described a short programme of two 3-hour sessions on general practice consultations, without consultation practice, as a refresher course aimed only at updating research knowledge and changing attitudes. Their reported outcome of an increase in patient satisfaction is best interpreted as converting competence into performance, a less glamorous but equally important role for CME. An important pointer to current practice among, for example, GP trainer groups was Verby et al.'s (1979) study of a group of experienced GPs who met regularly to view videotapes of each other's consultations. Independent judges looking at their tapes before and after this groupwork found improvements in more than half of the rated skills, although these higher scores were obtained at the expense of longer consultations. In the US a more focused course of two 4-hour sessions concentrated on detecting and responding to patients' emotional distress (Roter et al., 1995). These sessions included practice with simulated patients and reviews of 5-minute sequences from audiotapes. Two experimental groups and a control group were used, one experimental group being taught a set of eight skills based on a Rogerian model and the other a set of eight skills based on a cognitive approach to problem definition (PD) developed by Lesser (1985). Audiotape evidence indicated changes in practice involving the incorporation of taught skills, and this was confirmed by visits from simulated patients. Real patients of the ``trained'' groups of physicians showed greater reduction in emotional distress for as long as 6 months after their visits (post-training), although the duration of these visits had not changed.
``Trained'' physicians recognised more psychological problems, used more strategies to manage emotional problems in their patients, and showed greater clinical proficiency in the management of a simulated patient, than physicians in the control group. Moreover, patients in the PD group showed greater improvement than those in the Rogerian group. It should be noted, however, that for the even more taxing goal of developing the counselling skills of doctors and nurses in cancer care Maguire and Faulkner (1988) have found that a minimum of 3 to 4 days is needed.

6.2 Other Intervention Strategies

Educational materials are another major strategy for CME, but results for interventions comprising materials on their own are not encouraging. Davis et al.'s (1995) review reported positive outcomes in only 4 of 10 RCTs, though there are a small number of positive examples of materials affecting prescribing practices (Meyer et al., 1991; Harvey et al., 1986). A recent study by McDougal et al. (1998) showed that providing 210 urologists with 67 monographs over a 2-year period produced a significant improvement in scores on a knowledge test, which correlated only with the number of monographs read and years post-training. But the improvement was modest and the authors doubted whether it had any clinical significance. Given the limited time available for journal reading and studies reporting low levels of journal reading among some groups of doctors (Gordon, 1984; Williamson et al., 1989), the focus is now shifting to the use of medical information systems as the first point of access to published knowledge.

The impact of practice guidelines has been relatively well researched. Grimshaw and Russell's (1993) review covered 59 evaluations that met their criteria for scientific rigour: these were not confined to randomised trials, because for guidelines in particular ``there is a danger that treatment offered to the control patients will be contaminated by doctors' knowledge of the guidelines'' (page 1317). They found that all but 4 studies detected significant improvements in the process of care after the introduction of guidelines. Moreover, 11 of the studies also assessed the outcomes of care, 9 of which reported significant improvements. Davis et al. (1994) appear to have reached a different conclusion when they state that ``the evidence for their (guidelines) effectiveness on changing physicians on patient outcomes by themselves is weak'' (page 254). However, Grimshaw and Russell's tables include a column headed Intervention (dissemination or implementation) which indicates that in almost every study the circulation of guidelines had been accompanied by concomitant activities such as reminders, feedback or conferences (see below). The conclusion to be drawn (and because of their different criteria their sample is much larger than that of Davis et al., 1994) is that when a set of guidelines is considered sufficiently important for its impact to be formally evaluated, it will almost always be accompanied by other activities.

The notion that guidelines are sufficiently similar to be treated as a single type of intervention is somewhat naive. Not only do guidelines differ greatly in their format and intent, but also in their clinical credibility; and the contexts for which they are designed may vary from those where potential users are impatiently awaiting their arrival to those where attracting the receivers' attention may be difficult. Grol et al.'s (1998) study of Dutch GPs' use of a series of official guidelines, developed by working parties of experienced practitioners and specialists, focused on the relative importance of various attributes of the guidelines themselves. Ten guidelines and 47 recommendations from them were selected for the study; then a volunteer group of 62 GPs recorded their decisions after each consultation for which one of the 10 guidelines was applicable. Table 6.8 below shows the percentage of compliance with the guidelines when each selected attribute was present and when it was not; the ``strength of influence'' factor was calculated from these data.


Table 6.8: Influence of guideline attributes on compliance.
Attribute                                        % compliance when   Strength of
                                                 present   absent    influence
Controversial, not compatible with other values    35        38        0.26
Vague and not specific                             36        67        0.24
Described concretely and precisely                 67        39        0.23
Demands changing existing routines                 44        67        0.20
Based on scientific evidence                       71        57        0.13
Consequences for management                        50        65        0.13
Demands new knowledge and skills                   54        65        0.10
Will provoke negative reactions in patients        47        63        0.10


Another variable which affects the use of guidelines is involvement in producing them. Carney and Helliwell (1995) describe an initiative involving 12 general practices in Northumberland to improve the care of patients with diabetes. This involved doctors and nurses learning together, discussion of six practice audits, remediation of identified knowledge deficits and the collaborative development of protocols. Evaluation was based on practice records for 1986 (prior to the initiative) and 1991 (two years after the setting of standards). They reported that:

``More patients received general practice care only or shared care in 1991 than in 1986. There was a reduction in the use of oral hypoglycaemic agents among non-insulin dependent diabetic patients and more patients were maintained on diet alone. A greater proportion of patients were referred to dieticians, ophthalmologists and chiropodists in 1991 than 1986, and there was increased recording of, examination of, and identification of, diabetic complications. Little change was found in the mean values for clinical parameters between the two years.''(page 149)

A study of 92 GP trainers in the North of England (North of England Study of Standards and Performance in General Practice, 1992) found that clinical standards for common childhood conditions improved prescribing and follow-up for those GPs who had helped to set standards for that particular condition, but not for those who had not been involved in setting them. The experimental design was well chosen for the issue: each sub-group of GPs was involved in setting standards for a different condition, thus demonstrating that involvement in standard setting only produced positive outcomes relating to those specific standards.

A London-based study disseminated locally developed guidelines on asthma and diabetes through practice-based education (Feder et al., 1995). Two groups of 12 practices received guidelines and educational visits for either asthma or diabetes, each thus acting as a control for the other condition. The intervention involved three lunchtime sessions: (1) an introduction to the guidelines and a discussion of how practice management could be developed into a practice protocol, with an emphasis on patient recall for annual review (a stamp for reviewing patients was given as a prompt) and a discussion of home monitoring; (2) a review of the practice's organisational decisions, a session on the clinical content of the guidelines, and a technical demonstration; (3) audit data from patient notes and further review. Practices receiving diabetes guidelines improved recording on all seven variables; those receiving asthma guidelines improved only on review of inhaler technique and prescribing. When the ``prompt'' was used, there was significant improvement in the recording of both conditions.

This London study involved both an educational intervention (educational visits) and an administrative intervention (the prompt), as recommended by Grimshaw and Russell (1993). Analysis of Grimshaw and Russell's tables reveals a predominance of administrative intervention with preventive care guidelines (63% reminders and 26% changes in patient records) and a slight predominance of educational intervention with clinical care guidelines (33% feedback, 29% conferences or seminars, 21% reminders, 12% changes in records). Based on their analysis of the effect size, the authors suggest that circulation of guidelines should be accompanied by a locally-based educational intervention and patient-specific reminders at the time of consultation. Davis et al.'s (1995) review found the outcome evidence to be particularly good for reminders (significant effects in 22 out of 26 RCTs). Given our earlier discussion (Chapter 2) about the relationship between competence and performance, it should be noted that the educational intervention is normally aimed at developing competence by discussion of rationales, evidence and problematic or typical cases, while the administrative intervention is aimed at converting that competence into improved performance.

Davis et al. (1994) describe educational interventions involving visits to practices as ``academic detailing'', and report the use of pharmacists and nurses in this role as well as physicians. When the overt reason for such visits is administrative (though they will almost certainly be accompanied by educationally useful discussion) the term ``facilitator'' is often used. For example, the use of facilitators who visit practices to talk to practice nurses and improve office systems has proved effective in improving health promotion activities in Oxford for monitoring risks of arterial disease, and in the US for early detection and prevention of cancer. Both studies involved control groups. In Oxford there were dramatic increases in weight and blood pressure measurement and identification of smokers (Fullard et al., 1987); and a three-year follow up confirmed continuing attention to identified hypertensive patients, whose number fell from 16% to 8%, but much less impact on smoking and weight (Mant et al., 1989). However, Ebrahim and Smith's (1997) systematic review of interventions for preventing coronary heart disease concluded that multiple interventions only have a significant effect on patient outcomes for high-risk hypertensive populations. The New Hampshire study incorporated two interventions -- a facilitated visit and an educational event for the physicians. There were significant improvements in mammography, breast examination, stool occult blood, `reduce fat' recommendations, and `quit advice' to smokers; and the facilitator intervention was both more successful and sufficient on its own (Dietrich et al., 1992). Similarly, a controlled study in Scotland by Bryce et al. (1995) demonstrated a significant impact for an audit facilitator on 12 representative general practices' diagnosis and treatment of childhood asthma.

An important aspect of many of these combined interventions is that they are trying to ensure not just competence but consistently good performance by addressing primary health care systems as well as individuals; and also by treating them as multi-professional contexts. Their goal is to expand individual competence into team performance.

Davis et al. (1994) conclude their review of the effectiveness of CME interventions by emphasising the intensity and complexity of interventions with positive outcomes and the multi-faceted nature of the change process. Learning in formal contexts requires (1) problem-based approaches to relevant clinical issues using authentic visual material, (2) formats for facilitating transfer, such as small-group case discussions, peer review exercises in clinical settings, and especially role-playing or practice-rehearsal strategies which provide an opportunity to practise new skills and receive feedback, (3) incorporation of practice-useful devices to enable, remind or reinforce clinical physician behaviours; and (4) long-term follow-up feedback on practice performance. Even from a purely technical perspective, which regards needs assessment and the design of appropriate CME provision as unproblematic, much CME practice emerges as based on wishful thinking by both providers and participants.

6.3 Needs Assessment and Audit

Davis et al. (1995) identified five levels of needs analysis across the 160 interventions contained in the 99 trials of CME they reviewed; these levels were associated with varying percentages of positive outcomes.


Table 6.9: Levels of needs analysis.
Level of needs analysis reported                               Number of      % with positive
                                                               interventions  outcome
No clinical need reported                                           12            42
Identification of general clinical area requiring change,
  with clinical care references                                     34            53
Based on nationally approved guidelines                             41            61
Consensus agreement among local health practitioners                45            58
Gap analysis or targeted barriers to change                         28            89


These findings suggest that needs analysis and clear targeting are important requisites for effective formal CME interventions; and hence that links with audit might be advantageous. But this does not imply that audit on its own will be effective. Indeed Davis et al.'s review concluded that even audit with feedback was a relatively ineffective form of intervention (positive outcomes in 10 out of 24 RCTs); though it was more likely to be successful when the feedback was delivered in the form of a chart review.

The primary purpose of audit is quality assurance and improvement and it is normally applied at the team, department or site level. Derry et al. (1991) describe it as a multistage process involving six steps:

  1. Selecting a topic;
  2. Establishing target standards or criteria against which a level of performance can be measured;
  3. Observing practice by collecting, analysing, and presenting data;
  4. Comparing performance with targets;
  5. Implementing required changes through discussions, written policies or other mechanisms; and
  6. Repeating the review to check that changes have been implemented and that quality of care has been enhanced.
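The iterative character of the cycle, and why an omitted Stage 5 stalls it, can be sketched as a simple loop. The names, numbers and improvement rule below are illustrative only, not from Derry et al. (1991):

```python
# A minimal sketch of the six-step audit cycle as an iterative loop.
# All identifiers and thresholds here are hypothetical.

def audit_cycle(observe, target, implement_changes, max_rounds=3):
    """Run repeated audit rounds until performance meets the target."""
    history = []
    for round_no in range(1, max_rounds + 1):
        performance = observe()           # Step 3: collect and analyse data
        met = performance >= target       # Step 4: compare with the target
        history.append((round_no, performance, met))
        if met:
            break
        implement_changes()               # Step 5: act on the shortfall
        # Step 6 is simply the next iteration: repeat the review
    return history

# Toy example: a recording rate of 55% improves by 15 points per change.
state = {"rate": 55}
log = audit_cycle(
    observe=lambda: state["rate"],
    target=80,
    implement_changes=lambda: state.update(rate=state["rate"] + 15),
)
print(log)  # [(1, 55, False), (2, 70, False), (3, 85, True)]
```

Dropping the `implement_changes()` call (Stage 5) leaves `performance` unchanged between rounds, which mirrors the truncated audits discussed below: data are collected and compared, but nothing ever improves.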

In principle the process is iterative, and the cycle can be entered at any point. Derry et al.'s (1991) ``audit of audits'' suggested that one reason for audits' lack of impact may be the pressure to demonstrate to the authorities that audit is being done. They found that Stages 2 and 5 were frequently omitted from general practice audits.

An interesting example from the hospital context is a study of the Royal College of Radiologists' guidelines on referral practice (Royal College of Radiologists Working Party, 1992). The Working Party piloted these guidelines in six centres and found them acceptable to local doctors, who also agreed to monitoring and review of their practice against these standards by a local committee of peers and colleagues. However, most of the subsequent reduction in referral rates did not come from firms with a ``high referral'' record; and local committees were only prepared to intervene to encourage compliance by ``high referral'' firms in a quarter of the cases. Stages 1-4 of the audit cycle were implemented, but in 74% of instances of apparently resistant high referral there was no Stage 5.

The most common method used in Stage 3 is a systematic review of medical records. Sometimes it is the only method used, giving rise to occasional confusion between the North American term ``chart review'' and the audit process in general. Chart reviews are used in teaching (see Chapter 4) as well as for audit; their limitations are discussed by Tugwell and Dok (1985), who advise that they should rarely be used as the only source of evidence.

Lockyer and Harrison (1994) review the uses of chart review under the headings of self-assessment, assessing physician competence (see Chapter 8), community based chart audit using comparative data and assessing the adoption of guidelines. They then identify three ways in which chart review can be used in conjunction with CME:

  1. deficiencies identified in the review of physicians' charts can be used by those physicians to plan a group-oriented CME activity such as a lecture or workshop;
  2. results of screening or peer review of charts can be used to provide individualised feedback and consultation to a physician; or
  3. results of a variety of different types of chart review (particularly in hospitals) can be communicated to physicians responsible for designing CME activities. These CME planners integrate this diverse information into the selection of CME topics and the planning of CME activities.

The first method is the least used, for two reasons. Chart reviews are rarely undertaken with formal CME activities as an expected outcome; and where deficiencies in physician performance are identified, a formal CME programme is only rarely the preferred remediation: virtually none of the 185 medical audits reviewed by Sandlow et al. (1981) were judged to need a CME response.

The second method, a form of audit with feedback (see above), is being increasingly used. However, Mugford et al.'s (1991) review of 36 studies of the use of statistical information from audit or practice reviews suggests that it is most likely to affect practice when the recipients have already agreed to review that practice. Private mediation of information about performance is frequently reported or advocated, but not researched. In principle, chart review is also a good point to enter the audit cycle, but without a strong disposition to follow up the results the cycle never properly gets under way. Not surprisingly, truncated versions of `audit' undermine the whole concept.

The third method is also being used more often, but it depends on CME being well funded and organised. Cantillon and Jones's (1999) review of CME in general practice found 18 evaluations of audits with educational interventions, of which 17 showed a positive influence on doctor behaviour, but only one included data showing that the behaviour change was sustained (Pringle, 1998). Pringle's study is also of interest because the practice review was triggered not by a conventional audit of cohorts of patients but by a significant event focusing attention on a problem already recognised but not, until then, given such high priority.

6.4 Self Directed Learning

Having explored the limits of formally organised CME interventions, we return to the normal ongoing pattern of mainly informal learning described at the beginning of this chapter. Jennett et al. (1994) distinguish three forms of physicians' self-directed learning (SDL):

  1. informal, ongoing, habitual activities directed to the maintenance of competence;
  2. semi-structured learning experiences that typically have their basis in immediate patient problems; and
  3. formal, intentional, planned activities.

Informal self-directed learning is considered by many doctors to be part of their daily routine. It involves journal reading, ad hoc conversations, interactions with drug or equipment company representatives, and possibly attendance at regular events such as departmental or practice conferences. Its purpose is perhaps best construed as self-initiated scanning of the doctor's practice environment with no specific outcomes in mind.

Semi-structured SDL is linked to immediate patient problems. Its focus is decision-oriented, finding the best way to manage a patient; and learning is incidental to that prime purpose. It may take the form of consultation with immediate colleagues or with experts with whom the doctor has some contact, or of literature searches and reading. This type of learning is closely tied to individual patients and their progress and therefore cannot be planned in advance.

Formal self-directed learning corresponds to what Tough (1971) called a ``learning project'', in which there is a clear intention to learn about a specific problem or issue. There is a fairly clear sense of what needs to be learned, though detailed outcomes may be emergent rather than pre-planned (Gear et al., 1994).

The first large-scale investigation to explore the reasons for physicians making changes in their practice, and the learning entailed, was Fox et al.'s (1989) interview study of 340 physicians in the US and Canada. They found that, although some reasons for change were personal or social, most changes were driven by the desire to be more competent in the delivery of healthcare to patients. Important considerations early on were: (1) developing an image of the outcome of changing, because if this was clear the process of change was more rapid and efficient; and (2) assessing what capability they needed for making the change, which partly depended on the level of excellence they hoped to reach. Planned efforts to learn were commoner when the capability gap was fairly large, and the nature of that gap determined how they decided to learn. Their main sources for learning were colleagues, reading and CME programmes; and usually they used all three sources in various combinations (Fox et al., 1994). The implications for CME are the need: (1) to create opportunities for social interaction so that doctors can develop and sustain networks of contacts; (2) to develop medical informatics to facilitate literature searches; and (3) to ensure that CME events provide what self-directed learners need, as and when they need it (Harden and Laidlaw, 1992; Al-Shehri et al., 1993; Leclere et al., 1990).

Slotnick et al.'s (1997) survey of 118 doctors in the Dakotas and Minneapolis found that two thirds of their ``learning episodes'' were patient-centred and one third were concerned with gaining new skills and knowledge, thus distinguishing between learning triggered by the problems of a particular patient and learning to extend one's expertise for the benefit of future patients. The former fits Jennett's description of semi-structured SDL, but only the latter could be described as a learning project for which some planned learning, possibly including formal CME, might be needed. Both depend on the doctor's prior awareness of new (to them) areas of knowledge through ongoing informal learning by scanning the practice environment and familiar knowledge sources.

Earlier, Geertsma et al. (1982) and Putnam and Campbell (1989) had outlined three stages in physician learning: (1) doctors deciding whether to take on a learning task to address a problem, (2) learning the skill and knowledge anticipated to resolve the problem, and (3) gaining experience in using what was learned. These stages were confirmed by Slotnick (1999) using in-depth interviews with 32 doctors, though he found it necessary to add a further stage to take into account the requisite prior awareness acquired by scanning the practice environment (Jennett et al.'s informal SDL), and to allow for possible termination of a learning episode. This four-stage theory of doctors' self-directed learning is summarised in Table 6.10 below.

Table 6.10: Slotnick's four-stage theory of physicians' self-directed learning episodes. Each stage is described for two problem types: a specific problem (e.g. addressing a particular patient's need) and a new body of skill and knowledge.

Stage 0: Scanning for problems and other interesting things.
  Specific problem: The doctor is aware that problems are ``out there''.
  Body of skill and knowledge: The doctor is alert for problems which she might need to solve and, when potential problems are encountered, she moves on to the next stage.

Stage 1: Deciding whether the potential problem encountered is worth pursuing.
  Specific problem: The doctor senses a need for immediate action and decides on the spot whether to take on the problem; alternatively, the doctor reads a bit, talks with others and nevertheless decides quickly.
  Body of skill and knowledge: The doctor feels uneasy (``Maybe I should review that stuff ...'') and asks: Is this really a problem? Is there likely a solution to the problem? Are resources available so I can do the required learning? Am I prepared to make the changes in my practice required by the learning I do?

Stage 2: Learning what is needed to address the problem.
  Specific problem: Learning involves reading (typically journals, less often texts) and talking with others who offer suggestions.
  Body of skill and knowledge: Learning involves comprehensive reading and taking available and appropriate courses.

Stage 3: Gaining experience using what has been learned.
  Specific problem: Learning at this point means trying the problem solutions on the problem in question and seeing what happens.
  Body of skill and knowledge: Learning means trying the new skills and knowledge in a range of settings and gaining experience as a result. It also means reading again, though now the purpose is not to learn the new thing but to see what kinds of experiences others have had with it.


An important aspect of the model is that the nature of the learning is significantly different for each type of episode. A more elaborate version of the model (Table 6.11) examines each stage in further detail, addressing five aspects in turn: the doctor's overall goal, the discrepancy between that goal and the current situation, the learning resources likely to be used, the nature of the doctor's reflective thinking and the criteria for completing the episode. The model illustrates the complexity of both patient-centred and knowledge/skill centred episodes, and the character and purpose of the thinking at each stage.


Table 6.11: Attributes at each stage in a learning episode.

Goal
  Stage 0 (Scanning): Identify potential problems to consider during the next stage, and note issues potentially useful at some later point.
  Stage 1: Decide whether the doctor should learn what is necessary to resolve the precipitating problem.
  Stage 2: Learn the knowledge and skill necessary to begin resolving the precipitating problem.
  Stage 3: Apply and become comfortable with what has been learned in resolving the precipitating problem.

Discrepancy
  Stage 0: The doctor needs problems whose solutions will satisfy Maslowian needs.
  Stage 1: The doctor lacks sufficient information to decide whether to pursue the problem's solution.
  Stage 2: The doctor lacks the skills and knowledge necessary to begin resolving the precipitating problem.
  Stage 3: The doctor lacks experience and/or confidence in what he is doing.

Learning Resources
  Stage 0: All aspects of practice and daily life.
  Stage 1: Specific problem: the clinical situation, reading, discussion with other doctors. New body of skill and knowledge: reading, conversations, information at meetings.
  Stage 2: For specific problems, primary sources are reading and consultation; for new skill and knowledge, they are reading, consultation and courses.
  Stage 3: Primary sources are those already used, together with experience in using the skills and knowledge learned. Doctors also seek out others' experiences in similar situations.

Reflection
  Stage 0: Focus on the probable fit with the doctor's life generally, and practice in particular. The relationship of problems, information and issues to practice is a central feature.
  Stage 1: Specific problem: focus on the patient, available reading and consultation; the context varies from clinical and immediate to consultative and deliberate; the purpose is to address the questions listed under Discrepancy. New body of skill and knowledge: focus on information about the skills and knowledge to be learned; the context is consultative and deliberative, and can occur anywhere; the purpose is to answer the discrepancy questions above.
  Stage 2: Specific problem: the focus is the knowledge and skill needed to resolve the problem; the context is typically clinical and immediate; the purpose is to learn procedures for addressing the problem. New body of skill and knowledge: the focus is reading; the context is deliberative and/or hands-on; the purpose is to gain sufficient knowledge and skill to begin using the new learning in resolving the precipitating problem.
  Stage 3: Post-mortems of what happened occur for both specific problems and new bodies of skill and knowledge. The focus is the doctor's experience as well as prior knowledge and experience; the context is deliberative and may or may not be at the site of the action; and the purpose is to evaluate and gain experience. The experiences of others (both personal and published) are reflected upon as well.

Criteria for Completion
  Stage 0: The problem, issue or information seems interesting or important enough to be considered further.
  Stage 1: Answers are reached to: Is there really a problem? Is there likely a solution to the problem? Are resources available for the doctor to do the learning required to solve the problem? And is it practical for the doctor to do the learning?
  Stage 2: Situation-specific indications: the problem requires action, resources are exhausted, others (e.g. instructors) told the doctor it was time, or there was nothing more to study. Doctor-specific indications: an acceptable plan existed, the doctor felt ready, the doctor was clear on what was to happen next, or the doctor felt there was no value in additional learning.
  Stage 3: All criteria are situation-specific, in the sense that the doctor has gained enough experience to be confident with the new learning, as evidenced by the doctor's attention shifting to other issues. The stage can also end because the precipitating problem has resolved and the doctor lacks further interest.


A less systematic picture emerges from a British study by Armstrong et al. (1996) of general practitioners' reasons for changing their prescribing behaviour. This identifies a preliminary awareness of new possibilities from reading and brief discussions with other doctors, which may lead to action if it matches the doctor's preconceptions, comes from a highly credible source or is triggered by encounters with other doctors' prescribing practices or other critical incidents. But it also reports a rather precarious process of trying out new drugs, which appears to be highly dependent on the responses of a small number of early `pilots'.

Slotnick's (1999) framework does not give early experience with a change quite such a precarious feel, but it was derived from a different population of doctors in a different context with greater exposure to evidence-based medicine; and was not specifically focused on prescribing. Moreover it offers a realistic and comprehensible approach to lifelong learning which could be discussed in detail with postgraduate trainees. They will need more than exhortation to help turn good intentions into workable patterns of Continuing Professional Development.

6.5 Continuing Professional Development

Most professions have now adopted the term Continuing Professional Development (CPD) as encompassing a wider range of learning experiences than those associated with the term Continuing Education, which still carries associations of more formal, provider-initiated, educational activities. Thus CPD comprises informal learning as well as formal learning, learning on-the-job as well as learning off-the-job: the full range of learning activities described in Table 6.11 above. Eraut et al. (1998b) suggest that CPD be defined as ``all the further learning which contributes to how a qualified professional thinks and acts at work''. They list its main purposes as:

While some of these goals might properly be regarded as only the concern of individual professionals, most of them also affect the performance of health care organisations and the relationship between doctors and the public. The central problem of CPD policy is the reconciliation of four incontrovertible factors:

  1. It is individuals who learn: their motivation, access to and use of learning opportunities and time for learning are vital.
  2. Nevertheless, social expectations affect their learning; learning is triggered by social events, especially encounters with patients' problems; and much learning is from other people in a range of social contexts.
  3. Health care organisations are both necessary for the provision of multi-disciplinary health care teams and support services, and legally liable for the quality of care.
  4. There is always strong public concern about the quality of health care, which therefore remains high on the political agenda.

The system adopted by many professional organisations of requiring an annual minimum quota of approved CPD activity has been used for General Practitioners, with the additional complication of linkage to a Postgraduate Education Allowance (PGEA). However, the PGEA scheme has been criticised for similar reasons to quota schemes in other professions:

Although such schemes have helped to build networks of professional contacts (important for informal learning) and to initiate a cultural change towards regarding lifelong learning as an integral part of all professional work, they have lacked a direct link to needs analyses based on professional performance. Hence they cannot be regarded as meeting all the demands for quality assurance. The current Australian Quality Assurance and Continuing Education Programme for general practitioners is of particular interest (Salisbury, 1997). This is based on a three-year quota of credit points, upon which continuing registration depends. In addition to formal CME and informal learning, there is a requirement for practice assessment activities based upon an audit cycle.

An alternative to the quota system is the personal development plan (PDP). The arguments for PDPs are that they are needs-based rather than provision-based and impose no restrictions on the type of CPD activity. However, they also lack verification of outcomes and assurance that priority needs will be addressed, unless they are tightly coupled to audit or performance review. From another perspective, the PDP approach is often criticised for being too individually based. Owen et al. (1989), for example, found that 58% of practitioners thought that self-learning activities on their own were an inadequate strategy.

One response to this problem of isolation has been to introduce a mentoring system. For example, the three-part strategy piloted by Challis et al. (1997) for GPs in Sheffield comprised: the development of a personal education plan; creating a portfolio to document its progress and gain accreditation for the learning; and mutual support through a co-mentoring group, initially facilitated by a CME tutor. Other initiatives involving GPs' use of personal plans with support from a GP tutor or CME advisor are reported by Bahrami (1998) and Valentine and French (1998). An East Anglian scheme which offered GPs a choice of an experienced GP mentor or co-tutoring in which pairs of GPs support each other's learning has been evaluated by Hibble and Berrington (1998). Both systems were positively received, and 52% of the contracting group used PDPs compared with 27% of the mentored group: an additional finding was a significant reduction in stress levels in both groups.

Another response has been to introduce activities for groups of physicians who work together and for multi-professional health care teams, alongside doctors learning on their own (Cunningham, 1995). Recent research on learning in the workplace suggests that learning within workplace groups is very important but also highly dependent on the microclimate of that particular workplace (Eraut et al., 1998). Both appropriate management training and contact with other groups can help in the process of improving the learning climate (Burton, 1998).

The American researchers Fox and Bennett (1998) are also now advocating a co-ordinated approach to all three levels, suggesting that the role of CME providers should be to:

The developments reviewed above are recognised in the Department of Health's 1998 report, A Review of Continuing Professional Development and General Practice, whose principal recommendation is

``to integrate and improve the education process through the Practice Professional Development Plan (PPDP), developing the concept of the ``whole practice'' as a human resource for health care, resembling the health promotion plan in general practice and increasing involvement in the quality development of practices.'' (page 3)

The report anticipates the gradual fading out of the PGEA and argues that future CPD systems should ``recognise and reward the process of need assessment, CPD planning and outcomes assessment'' (page 14). It notes that further research will be needed to ascertain how best to achieve this goal, and to link CPD with audit and R&D. Significantly it argues that:

``the main message in delivering effective CPD is that the key to lifelong learning lies not in how to learn, but in how the learning process is managed.'' (page 13)

6.6 Summary

The relevant research into Continuing Medical Education and Lifelong Learning falls into three main categories: research into how doctors learn, evaluation of CME interventions and research into innovation strategies using single or multiple interventions to achieve changes in specifically targeted areas of practice.

Surveys of GPs, and in a few cases consultants, have shown that a wide range of learning activities and sources of information are important for learning and for changes in practice. Moreover, the patterns differ according to whether the changes involve treatment (including prescription), diagnosis and investigation, doctor-patient relationships, referral policy, health promotion or practice organisation. Models of physician learning distinguish between learning triggered by the problems raised by current individual patients and ``learning projects'' to acquire or improve proficiency in a targeted area of practice. The initiation of learning depends on significant background knowledge of what is out there to be learned, to which CME, conversations with other physicians, and reading all contribute in ways which would not be revealed, for example, by evaluations of CME events.

Evaluations of CME courses have demonstrated the importance of including activities such as the observation and discussion of visual material and/or supervised practical work. Though such evaluations have confirmed that short courses of one day or less are rarely effective, no controlled studies have been reported which used length of course as a variable. This deficiency needs to be remedied, because much time could be wasted trying to improve courses which are simply too short; and unrealistic expectations of the learning time required for certain goals are easily developed by busy learners and under-resourced providers, a form of collusion from which nobody benefits. Another important conclusion is that educational interventions on their own often fail to achieve changes in practice.

Research on innovation strategies points to the danger of focusing only on the development of competence. Competence has to be translated into performance and at this stage many dispositional and organisational factors come into play. Research on the implementation of guidelines, for example, indicates not only that the quality and utility of the guidelines themselves is important but also that both educational interventions (leading to understanding of their purpose and rationale) and administrative interventions (ranging from organisational changes to simple reminders) need to accompany the guidelines.

The discussion of recent developments in CPD reaches two conclusions. First, needs analysis is important for quality assurance purposes at three levels - the individual, the working group and the healthcare organisation (the last two are multi-professional). However, it should not be assumed that needs identified by audit, for example, will necessarily require an educational response. Second, following the advice of Fox and Bennett (1998), CME providers should adopt a coordinated approach to all three levels by facilitating self-directed learning, providing high quality individual and group education, and assisting healthcare organisations to develop and practise organisational learning.

---------------------------------------------------------

Benedict du Boulay, DOH Report pages updated on Friday 9 February 2001