This chapter examines the policy implications of the literature reviewed in this report. Its arguments are more personal in tone, because making links between research and policy in educational and social settings often involves extrapolating the conclusions of research beyond the contexts in which they were obtained; and this entails a change in the status of one's comments from proven conclusions to potentially useful insights. Since there is a substantial amount of North American research in areas relatively unexplored by the small number of British researchers, it is important to investigate the implications for British policy and practice without assuming that the transfer of findings across the Atlantic can be taken for granted. A brief introduction to the US system of residency training after graduation from medical school is provided by Salter (1995).
Although we have organised this discussion on policy implications chapter by chapter for ease of reference back to the relevant research, we have not stuck to this format rigidly but have introduced (with an appropriate cross-reference) material from other chapters where it improved coherence and avoided two separate discussions of essentially the same issue. A recurring conclusion throughout this chapter is the need for further research. Descriptive data about the processes of British postgraduate medical education are sparse, and outcomes evidence, other than pass rates for the Royal Colleges' examinations, is very rare indeed. More attention has been given to research into undergraduate education, which is generally more accessible to researchers. The overwhelming busyness of the postgraduate experience is not only a constraint on the pursuit of its educational goals but also a deterrent to research.
Postgraduate trainees are employed as working health professionals, so their current competence is always an issue, as well as the competence they will be expected to demonstrate when their specialist training has been completed. Defining competence in terms of the expectations of the holder of a particular post helps to avoid confusion; and it should be possible to record periodic updates on a doctor's progress beyond the minimum competence for the post, so that responsibilities can be extended to match developing competence without putting patients at risk. Many log-books seek to achieve this, but their use needs to be evaluated. The central problem of postgraduate education is how best to combine work within a doctor's current competence, itself a source of learning, with the provision and use of learning opportunities to extend that competence. Training programmes are designed with clinical experience as the prime consideration, but many factors can affect the use of that experience for learning purposes. These include time to think, timely help, and an ability to discern and pursue those learning goals that should have highest priority at that stage of the doctor's development. Self-directed learning requires a strong focus as well as learning opportunities. Hence there is a need for an agreed framework for the development of competence during each training programme, which puts learning outcomes on the priority list alongside service duties. While many experienced clinical teachers may have an implicit framework for progression, this is insufficient to provide the necessary coherence and continuity. The distributed nature of learning opportunities and support in most healthcare settings requires the use of an explicit framework for the development of competence. Moreover the process needs to be efficient as well as effective if more is to be learned in the limited time available for training.
To develop such progression frameworks in a manner that recognises the enormous complexity of the task will need further research into the specification and communication of the competence required at various stages of training, at the completion of training and for progress beyond that point. It then becomes possible for training programmes to indicate how they provide learning opportunities appropriate for the expected outcomes of a particular training period. The recent report of the US Federated Council for Internal Medicine Task Force, Graduate Education in Internal Medicine: a Resource Guide to Curriculum Development (Sox et al., 1997) is a useful source of ideas. Not only does it set out a detailed framework of competencies which covers the whole spectrum of the physician's role, but it also discusses the advantages and limitations of different learning settings and provides matrices for mapping those settings where particular competencies are intended to be developed. Clarification, at least at a broad level, of what and how a doctor is expected to learn over a particular period is necessary for auditing and ultimately for researching the educational process. Until this happens the relationship between potentially available learning opportunities, received learning experiences and learning outcomes will remain obscure.
Areas in which competence is seriously underconceptualised include communication, teamwork and management in healthcare settings. The blanket term ``communication skills'' is used to cover a wide variety of processes and settings, with little attempt to differentiate between them, to take into account the situated nature of communication or to develop frameworks for progression. Given the increasing priority being given to client-centred practice, and research evidence demonstrating the impact of good communication on health outcomes, much more detailed and professional attention to this area is overdue. Communication between doctors and with other professionals is also critical, though less often recognised as an area where a focus is required on what has to be learned and how. Teamwork and other organisational factors also impact on patient outcomes; but there is little evidence on how (or even whether) these capabilities are developed during postgraduate education. Their development is also one aspect of the concern expressed about the potential gap between the competence required for the award of a CCST and that required of a consultant.
The capability to convert the competence of a range of professional workers into a team performance that meets expectations has to be developed by good supervision and management. This requires sensitivity to organisational factors affecting other professionals' performance as well as one's own; and the disposition to seek changes where these are needed for health care improvement. All these interpersonal foundations need to be laid down early in professional life, a responsibility which has to be shared and coordinated among all those carrying educational roles.
The inclusion of judgement in our brief has had a liberating effect, because there is considerable debate about whether it is, or even can be, included within the term ``competent''. Some perceive judgement as an attribute of personal expertise that goes beyond the competence which any fully trained doctor could reliably be expected to demonstrate. It can also be seen as a dimension of lifelong learning, linked mainly to the improvement of decision-making through learning from experience over a long period, rather than to the learning of new practices or keeping up to date with research. Judgement is associated with complexity and uncertainty; and people find it easier to cite examples than to define it. Probing examples to elicit the nature of the underpinning knowledge is difficult; and the development of judgement by doctors has been little researched. We would expect in-depth discussion of difficult cases to contribute to such development, provided there was a learning intention.
Theories of expertise developed in different contexts using different research techniques may emphasise different aspects but do not greatly differ in their conclusions. Key features include the importance of case-based experience, the rapid retrieval of information from memory attributable to its superior organisation, the development of standard patterns of reasoning and problem-solving, quick recognition of which approach to use and when, awareness of bias and fallibility; and the ability to track down, evaluate and use evidence from research and case-specific data. Understanding the nature of expertise is important for self-monitoring one's use of heuristics and possible bias, sharing knowledge with others and supporting other people's learning. It is also critical for understanding the respective roles of clinical experience and evidence-based guidelines. Those responsible for developing, disseminating, evaluating and modifying guidelines, decision aids, information systems and communications aids within teams and across teams need to match their procedures and modes of representation to the way doctors' minds work.
Research into decision-making under conditions of stress and uncertainty suggests that training in crisis management is needed, and that teamwork and other organisational factors are important. At the individual level there is a need to accept that this will always be a problem area and that non-cognitive factors are important, with confidence also being a critical aspect of performance. There is a need for regular self-evaluation to maintain critical control of one's practice.
The implications for teaching include the advantages of basing progression frameworks on case typicality and of coordinating the use of exemplar cases and generalisable knowledge. The use of recorded material and standardised patients for developing competence in communication skills is strongly supported by research, but the conclusion in Section 9.2 above was that there is still no evidence of any long-term strategy for broadening and deepening this competence. The use of evidence-based medicine requires on-the-job as well as off-the-job teaching.
The literature on expertise emphasises that it is the structure of experts' knowledge as much as the quantity of their knowledge which defines their capability. Clinical reasoning is optimised in the expert's particular area of expertise. Such reasoning tends to be `schema' driven and differs from problem to problem (Norman et al., 1985b) rather than following some consistent, classical model of the hypothetico-deductive approach.
These findings have implications for training. For example, Regehr and Norman (1996) make several suggestions, framed in relation to medical schools, though many of the issues apply post-registration.
One implication is that medical students should be taught to make extensive use of decision-support systems, ``which are designed to be immune to decision biases''. Regehr and Norman acknowledge that if the data collected and input to the system are subject to bias, then so will be the output. So they go on to say that the aim should be to help ``individuals decide when the decision-support system could be beneficial rather than teaching individuals to rely on it extensively'' (pages 999-1000).
An alternative to the decision-aids approach is expressed by Grant (1989), who argues for a course that helps doctors reflect on their thinking processes via a ``large series of exercises which work with the participants' thinking processes as they are'' (our emphasis).
One possibility is simply to ensure that those being trained are exposed to a sufficient number and variety of cases to allow them to build up the appropriate schemas. Another is to provide cases plus ``instructional road maps'' to traverse them (Feltovich et al., 1992). A third approach, advocated for example by Mandin et al. (1997), is to teach clinical problem solving using schemes. Of course, ``using a scheme'' is not the same as an expert ``having a schema''. Their approach (at Calgary) goes beyond simple ``problem-based learning'' by helping students develop specific schemes for each type of presentation in the area of expertise (see also Palchik et al., 1990) -- as opposed to a more general methodology of generating and then refining multiple hypotheses. This view is disputed by Papa et al. (1996), who argue that some ``case-specificity phenomena might be viewed in part as an artifact of an educational system containing widespread inconsistencies in the instruction or assessment of disease class-specific differential diagnostic concepts'' (page S12).
An obvious question to ask is whether instruction in evidence-based medicine leads to improvements in practice, and then whether those improvements to practice themselves lead to better patient outcomes. Norman and Shannon (1998) reviewed ten studies (from 1966 to 1995) that examined the effects of teaching critical appraisal skills. After excluding studies with methodological flaws, they analysed four studies of students and three of residents. They found that:
``...although instruction in critical appraisal (evidence-based) skills can result in sizeable gains in knowledge among students, the effect of such instruction is much smaller among residents. Furthermore, the minimal evidence to date does not, as yet, provide any indication that the gains in knowledge result in a change in behaviour with respect to the critical use of the literature.'' (Norman and Shannon, 1998, page 180)
In pondering the differences in outcome between students and residents, Norman and Shannon suggest that this may be to do with the differing degrees to which the instruction was integrated into the educational programme. For example, some of the students would have taken the course for credit, whereas the residents worked to a `journal-club' format. They suggest that more positive effects may be found when such programmes are more strongly integrated into all aspects of training (though they point out that they found no evidence to back this optimism). They also concluded that they could (as yet) find no convincing evidence that the ``gains in knowledge demonstrated in undergraduate critical appraisal courses can be sustained into residency and practice and eventually translated into improved patient outcomes'' (page 181).
Basic science and clinical science can usefully be seen as parallel fields of knowledge which illuminate each other but do not necessarily determine each other. Opportunities for using scientific knowledge are often neglected, because people fail to recognise how much further learning is involved in transferring knowledge from an academic context to a clinical context. The selection and timing of science-based inputs to postgraduate education should be planned from a user perspective; and, where appropriate, the use of scientific knowledge should be taught through case discussions in clinical settings.
There have been several evaluations of postgraduate basic training programmes in the UK. Though there have been a few improvements, the overall impression is still negative. Many features of the educational policy seem to be appropriate, but they are not being implemented in many hospitals. There is insufficient supervision and feedback, and educational goals are subordinated to service demands. While many house officers receive good clinical teaching, a minority do not, and assurance of educational quality is weak. Learning goals are only specified at a very general level, so there is little clarity about priorities, especially at the PRHO stage.
Since service responsibilities contribute greatly to the development of competence, working and learning will often be indistinguishable activities. But they still signify different expectations of doctors in postgraduate education, and the tension between their respective priorities is constantly noted in both research studies and policy reviews. The educational dimension cannot easily be sustained by a laissez-faire approach which allows problems in responding to today's patients to take precedence over those of tomorrow's patients. This issue has to be tackled at local level, where there is limited management of the educational process and clinical tutors have little time and no authority over clinical teaching. Deans do what they can; but quality assurance of postgraduate education lags well behind that for clinical practice; and the UK research base at this level is minuscule.
Another important issue which equally affects education and service goals is continuity of care (Irby, 1995). Lack of opportunity to follow patients over time can easily become a hidden weakness in junior doctors' experience within the hospital setting, in outpatient clinics and across the boundary between primary and secondary care. The same principle can be applied to the junior doctors themselves, for whom short rotations limit the development of relationships and weaken the scope for the kind of supervision which incorporates wider aspects of the professional role and facilitates a learner-centred approach.
``Short clinical rotations require both teachers and learners to quickly determine how to work together for the care of patients. From a systems perspective, this loose connection between teachers and learners tends to inhibit close supervision, reduce targeted teaching, and limit thoughtful feedback. In addition, it increases the difficulty and complexity of the tasks of teaching and patient care. Mutual knowledge of the participants in the process (patient, learner, and preceptor) enhances the quality and efficiency of the interaction as well as the satisfaction of the participants.'' (Irby, 1995, page 906)
Continuing concern has been expressed about the survey evidence on basic surgical training. Sometimes the problem is too little supervision of operations performed by house officers; sometimes the house officers get insufficient clinical experience. Operating under supervision is seen as the most critical feature of learning to be a surgeon, and there is not enough of it. Some authors recommend the greater use of `skills labs' and simulators (see also Chapters 3 and 5).
The learning of procedures in medical posts has been criticised for being too haphazard: there is often little continuity of experience, and guidance is often provided by doctors who are themselves not very experienced. More planning could enable more systematic teaching by more expert doctors; and a credentialling system like that used in many American hospitals would improve quality assurance.
The appropriateness for GPs of so much general hospital training has been questioned. Though research on this issue would be difficult, we think more research evidence could and should be gathered. One year's training in general practice seems very short, especially since research in the Netherlands led to that period being extended there to two years. Not surprisingly, research investigating particular areas of expertise has resulted in long lists of needs for GPs' continuing medical education (see Chapter 6). In areas such as palliative care and psychiatry the argument seems particularly strong; in other areas one might look to other ways of distributing more specialist expertise within primary care organisations. GP training contrasts favourably with basic hospital training in its ability to provide tutorial support on a regular basis. Its quality varies no more than clinical teaching in other settings; and the commitment to quality improvement appears to be much greater.
Outside general practice, there is much more in-depth research in North America than in Britain. This has given particular attention in the last decade to learning in ambulatory care settings, a term which covers both family medicine and hospital clinics. More use is now being made of such settings in order to give doctors a broader experience of medicine, especially when significant aspects of care are being moved out of hospitals. The key issues emerging from North American research, discussed below, concern finding time for trainer-trainee interaction, the provision of feedback, and the qualities of good teachers.
Methods for finding time for trainer-trainee interaction cover both time created within clinics by patient scheduling and the use of clinic experience for later case discussion and chart review. The papers in this area include many useful practical suggestions as well as evaluations of practice. In general, learning in ambulatory settings was found to give rise to discussion of a wider range of medical conditions, and greater attention to the medical interview and to social issues.
A second set of findings concerned the provision of feedback to trainees.
Qualities of good teachers inferred from rating studies can be grouped under the headings of Physician Role Model, Effective Supervisor, Dynamic Teacher and Supportive Person. There is a great deal of material in Section 4.5 which ought to be introduced into the training of clinical teachers.
Learning in inpatient settings is also researched in greater depth in North America. One gets the impression that American residents receive considerably more clinical teaching than their British counterparts, but there are no British data to enable a proper comparison. The variation in the amount of training received by British trainees is reported as considerable, raising issues of quality assurance and trainee entitlement. The two major factors constraining learning by US residents were insufficient time and opportunity to learn, and low faculty involvement and commitment. Innovations receiving strong positive evaluations included adaptations of the Morning Report system to incorporate the teaching of evidence-based medicine; and case reviews of patients whose diagnosis had changed while in hospital or within 6 months of leaving hospital.
Research on instructional thinking and decision-making by highly-rated clinical teachers is highly relevant to the training of clinical teachers (Irby, 1994a). In addition to faculty development workshops on general teaching skills, training in case-based teaching is needed; this can best be provided through departmental programmes of teaching improvement and mentoring.
Research into postgraduate teaching and learning in non-clinical settings mostly comprises evaluations of a wide variety of teaching innovations, rich in ideas but not necessarily generalisable. In particular we would draw attention to improving the learning benefits of departmental conferences, developments in self-directed learning (more prominent for CME), skill-based courses in surgery, the use of GPs to teach primary care to house officers in Accident and Emergency departments, a system for teaching clinical examination comprising both seminar and ward-based components, and confirmation that various types of ``lectures plus'' teaching are more effective than lectures alone. Several departures from the standard lecture format have been positively evaluated, as have variations on case-based departmental seminars. However, the strong evidence that the effectiveness of off-the-job teaching is highly dependent on its links with related on-the-job teaching makes it unwise to evaluate off-the-job teaching on its own. This limits the applicability of some of the standard research on lecturing to undergraduates, for which research reviews are readily available.
Over the last decade there has been a gradual shift in focus from the provider-centred concept of Continuing Medical Education (CME) to the learner-centred concept of Continuing Professional Development (CPD). The recent Chief Medical Officer's Review of Continuing Professional Development and General Practice (Calman, 1998) is an important indication of how government thinking has changed. The research evidence demonstrating that CME is only one of several contributors to physicians' learning and changes in their practice has been strong for some time, the other contributors including the wide range of learning activities and sources of information discussed below.
The relevant research falls into three main categories: research into how doctors learn, evaluation of CME interventions and research into innovation strategies using single or multiple interventions to achieve changes in specifically targeted areas of practice. All three of these interrelated areas of research have direct implications for practice.
Surveys of GPs, and in a few cases also of consultants, have shown the importance for learning and changes in practice of a wide range of learning activities and sources of information. Moreover, these differ according to whether the changes involve treatment (including prescription), diagnosis and investigation, doctor-patient relationships, referral policy, health promotion or practice organisation. Models of physician learning distinguish between learning triggered by the problems raised by current individual patients and ``learning projects'' to acquire or improve proficiency in a targeted area of practice. The initiation of learning depends on significant background knowledge of what is out there to be learned, to which CME, conversations with other physicians, and reading contribute in ways which would not be revealed, for example, by evaluations of CME events. The importance of informal consultations with others, and a reluctance to ``cold call'' experts, suggest that facilitating social interaction among doctors and strengthening their networks should be a policy goal. These specifically medical models of physicians' learning go into greater depth than general models of adult learning, though the latter are still confirmed by recent research. They enable more detailed discussions about professional learning, and especially lifelong learning, in which all physicians should now be prepared to participate.
Evaluations of CME courses have demonstrated the importance of including activities such as the observation and discussion of visual material and/or supervised practical work. Though such evaluations have confirmed that short courses of 1 day or less are rarely effective, no controlled studies have been reported which used length of course as a variable. This deficiency needs to be remedied, because much time could be wasted trying to improve courses which are too short; and unrealistic expectations of the learning time required for certain goals are easily developed by busy learners and under-resourced providers -- a form of collusion from which nobody benefits. Another important conclusion is that educational interventions on their own often fail to achieve changes in practice.
Research on innovation strategies points to the danger of focusing only on the development of competence. Competence has to be translated into performance and at this stage many dispositional and organisational factors come into play. Research on the implementation of guidelines, for example, indicates not only that the quality and utility of the guidelines themselves is important but also that both educational interventions (leading to understanding of their purpose and rationale) and administrative interventions (ranging from organisational changes to simple reminders) need to accompany the guidelines.
The discussion of recent developments in CPD reaches two conclusions. First, needs analysis is important for quality assurance purposes at three levels -- the individual, the working group and the healthcare organisation (the last two are multi-professional). However, it should not be assumed that needs identified by audit, for example, will necessarily require an educational response. Second, following the advice of Fox and Bennett (1998), CME providers should adopt a coordinated approach to all three levels by facilitating self-directed learning, providing high quality individual and group education, and assisting healthcare organisations to develop and practise organisational learning.
Researchers foresee an increasing delivery of ``processed data'', e.g. about treatment options, at the point of care delivery, and the increasing dependence of practitioners on validated and trusted databases rather than the primary (journal) literature. They anticipate a coming together of evidence-based medicine and clinical information technology to provide the practitioner with whatever information is needed at the time and place where it is needed. Of course this raises the issues of how the practitioner is to be trained to access and to judge the trustworthiness of such information and how systems are to be designed to make information available in an effective manner that properly meets doctors' needs.
Decision support systems have had a mixed reception over the years, but seem to be growing in acceptance as just another tool in the doctor's armoury. A training issue here is the need to ensure that doctors understand how decision support systems frame the problem, so that they can judge the quality of the advice such systems offer.
As such systems develop, we may expect linked databases rather than fragmented sources, and better explanations from systems of the reasons for their decision advice.
A great many systems have been developed for various aspects of training, but most come up against the consistently hard problem of having the system monitor, evaluate and react sensibly to the learner's attempts to master a skill or solve a problem. The training implication is that such systems may be excellent, but their use needs to be carefully integrated into the overall training programme -- not least so that the human trainers provide what the computer-based system cannot provide in terms of monitoring and feedback.
Hitherto, most of the research effort has focussed on the assessment of competence linked to certification decisions rather than on the assessment of performance on the job. This emphasis is gradually changing as public demand for robust quality assurance grows. Revalidation is about to be developed with very close attention to research on performance assessment. The summative assessment of GPs is more competence-based, as candidates provide their own sample of videorecorded patient consultations: one might argue that a random sample would be more appropriate if the training period were longer. Summative assessment for the award of the Certificate of Completion of Specialist Training will be at least partly performance-based, and will need to be evaluated as it comes on line. Since the Membership Examinations of the Royal Colleges are competence-based, the performance-based element at the end of basic training is given relatively little attention. Thus the most critical certification issue arising from our review is the extent to which assessment regimes cover the full range of competence discussed in Chapter 3 and its translation into performance.
The other two purposes of assessment are (1) formative assessment to provide guidance to learners and/or those who supervise and support them, and (2) quality assurance and the improvement of practice. What research we have seen suggests that these issues deserve considerably more attention. With formative assessment, questions have been raised about frequency, coverage and reliability; and, if it is properly to serve its purpose, the manner in which formative assessment is integrated into training programmes to support the learning process will also need to be researched. Levels of supervision are often affected by factors other than the competence level of the trainee; and feedback may not be based on any systematic (though not necessarily formal) assessment.
The Canadian three-tier system for the monitoring and enhancement of physician performance is now well developed and familiar to those exploring revalidation in the UK. The need for good assessment practice linked with effective strategies for the improvement of practice (see Chapter 6) is critical, both for revalidation and for formative and summative assessments during postgraduate education.
The practice of medicine involves dealing with complexity and uncertainty, often under considerable time pressure. Risk is inevitable, but can sometimes be reduced by the possession of, or ready access to, high levels of experience and expertise. One of the reasons for assigning house officers and registrars to a firm of consultants or a primary health care practice is to provide such access. There is also risk in medical education. At some point a trainee has to conduct a procedure or make a decision for the first time; and taking such responsibility plays an important part in learning. Trainee professionals frequently report learning most intensively when `on call' (Eraut et al., 1997, 1998a), and it is through positive experiences of that kind that they develop their confidence to practise and to face new challenging situations. Medical culture places great weight on doctors' confidence as a reliable indicator of their competence; but recognises that practice also has to be situated within a framework of risk management and accountability to individual patients. How is this to be achieved when overconfidence is risky and lack of confidence has a negative impact on both patients and other health professionals?
Learning to assess and reassess one's own competence and its limits is a long and complex process, which becomes increasingly sophisticated as a doctor progresses through postgraduate education. Its reliability depends significantly on access to good supervision and feedback. Feedback which contributes to a trainee doctor's self-assessment may come from patient outcomes, informal discussions with other doctors or health professionals, periodic appraisals or meetings for signing the trainee's log-book. Informal feedback on the wards tends to be spontaneous and incidental, i.e. not the result of a reflective judgement: it is mostly received from more senior trainees rather than consultants, and is more likely to be negative than positive. Formal feedback appears to vary considerably in quantity, quality and breadth of coverage. Even the best designed log-books focus on competence rather than performance. Techniques such as chart review which emphasise performance are rarely used in Britain outside general practice. As reported in Chapter 4, there is sufficient cause for concern to suggest that research into the practice and effectiveness of supervision and feedback during postgraduate medical education is urgently needed. This should include the implicit delegation of certain supervisory and educational responsibilities to senior trainees. Should it be formalised, as in the US role of Chief Resident? Should it be accompanied by training? Which responsibilities could or should be delegated, and which should not?
The other key educational role is that of facilitating learning from clinical experience. For this there is a continuum of possible clinical educators, from those who bring clinical and educational expertise to lectures, seminars or workshops for basic, higher specialist or general practice trainees, to more senior trainees who work very closely with their less experienced colleagues in the wards and are best positioned to take up `live' learning opportunities as they happen. In between are the consultants in their firm, who have greater expertise and are normally reasonably accessible for discussing significant cases linked to their specialisms. In practice, specialist registrars will expect to learn most from the consultants, SHOs from registrars and PRHOs from SHOs. But these arrangements are informal. It could be a junior SHO who judges when a PRHO is working within their level of competence, and who might occasionally think about appropriate learning opportunities for which they have no formal responsibility; and junior SHOs have little preparation or experience for this role. Moreover, changes in rotation systems to reduce junior doctors' hours have resulted, in many hospitals, in working groups of house officers and registrars whose membership changes as frequently as every 6 weeks. The transient nature of these groups leaves little time for members to get to know each other well enough for mutual learning to reach an optimal level.
Chapter 4 reviewed a wide range of naturally occurring and deliberately created learning opportunities in clinical settings, for most of which there is good evidence of positive learning outcomes. The principal problems are those of creating greater awareness of these opportunities and of developing clinical educators with the disposition to use them in educational practices appropriate to their circumstances. Irby (1994a, 1995) provides specific advice on the flexible training of clinical teachers. The main constraint on such developments is the belief that they are impossible to implement while current service pressures on doctors' time persist. This management issue, which is not confined to medical education, requires more attention. Few Trusts have reliable mechanisms for incorporating clinical teaching into their organisation of professional time, though some may have formally agreed to do so. Nor are there any internal audit mechanisms at local level for monitoring and periodically evaluating a Trust's programme of professional education. Clinical Tutors have neither the time nor the authority to undertake such duties.
The role of mentoring has recently been discussed by the Standing Committee on Postgraduate Medical and Dental Education (1999) and reviewed by Bligh (1999). There is little evidence about its use in medicine, unless one includes the peer-tutoring experiments among GPs reported in Chapter 6. Its use in business is frequently advocated and sometimes practised (Eraut et al., 1999), but has given rise to little research. The term is used in teacher education with a rather different and, in our view, less authentic meaning: that of a placement supervisor with a practice teaching role, who is often also an assessor. Given the difficulties discussed above of finding sufficient time for supervision, feedback and clinical teaching for postgraduate medical trainees, the introduction of mentoring as an additional role and obligation might not justify a high priority. However, mentoring might be particularly well suited to the support of doctors during the first few years after completion of postgraduate training. Both consultants and GPs could benefit from such support as they grow into their new roles and responsibilities, learn to work with new colleagues and to contribute appropriately to their Trust or General Practice, and take greater responsibility for organising their own lifelong learning.