Authors: Amber Shamim Sultan (Aga Khan University Hospital, Karachi, Pakistan)
Rahila Ali (Department for Educational Development, Aga Khan University Hospital, Karachi, Pakistan)
Sara Shakil (Department of Educational Development and Medicine, Aga Khan University, Karachi, Pakistan)
Rehan Nasir Khan (Department of Surgery, Aga Khan University, Karachi, Pakistan)
January 2021, Volume 71, Issue 1
Review Article
Abstract
The apprenticeship model has long been used in surgical training. It initially provides the trainee an opportunity to observe the attending surgeon, followed by gradual introduction to surgical technique under direct supervision and, later, under more detached supervision. The attending provides informal feedback to the trainee at different intervals. Several changes have been made in postgraduate programmes, with a shift towards using workplace-based assessment tools for the formative and summative evaluation of trainees' clinical skills.
Keywords: Surgical training, Assessment, Workplace-based assessment, Clinical skills assessment, Formative assessment, Feedback.
Introduction
Assessment plays a vital role in trainees' progression, and several tools have been developed for improving their clinical skills. Written or oral examinations have been used to assess trainees' cognitive knowledge, whereas workplace-based assessment (WPBA) methods focus on assessing their clinical skills, either with simulated patients or with real patients, along with providing constructive feedback.1
In postgraduate training, the reason for introducing WPBA is to assess the "does" level of Miller's pyramid and to provide constructive feedback in order to achieve competencies. Some of the common tools used in postgraduate training are mini-clinical evaluation exercise (MINI-CEX), case-based discussion (CBD) and mini-peer assessment tool (MINI-PAT). Tools like surgical direct observation of procedural skills (S-DOPS) and procedure-based assessments (PBA) are specifically used for surgical training.2,3
The current review article was planned to provide an overview of some of the WPBA tools that can be implemented to improve the skills of trainees.
MINI-CEX
The MINI-CEX has been used in undergraduate as well as postgraduate training institutions around the world and is considered a valuable tool for formative assessment. It was developed in the United States, and was first introduced by the American Board of Internal Medicine in 1995 for the assessment of postgraduate trainees.4
In the traditional CEX, a qualified physician observes a trainee taking a history from a patient, performing a complete physical examination, presenting the findings, and providing a summary of the patient encounter along with next steps, including the clinical diagnosis and management plan. After this encounter, the evaluator provides feedback and documents the experience on a form. Later, the trainee gives a written record of the patient work-up to the evaluator for review. The total duration of the evaluator-trainee interaction is around 2 hours.5 However, the traditional CEX has a few limitations. Firstly, the trainee handles only one patient, although patient problems vary considerably. Secondly, only one evaluator observes the trainee's performance. Thirdly, typical attending-patient encounters are of relatively shorter duration.5
The MINI-CEX is designed in such a way that different evaluators can evaluate the same trainee at different intervals.6 These clinical encounters can be done in a variety of settings, such as clinics, wards and emergency departments (EDs).7
Both new and follow-up patients can be clerked for the MINI-CEX. A variety of clinical problems can be used for assessing the trainee, such as a patient presenting with chest pain, abdominal pain, cough, shortness of breath, dizziness or backache, or a patient with conditions such as angina, diabetes mellitus, hypertension, chronic obstructive airway disease, etc.6 The assessor scores different aspects of the clinical encounter using a 9-point rating scale, where 7-9 is highly satisfactory, 4-6 is satisfactory and 1-3 is below expectation. The aspects scored are history-taking skills, physical examination, professionalism, clinical judgment, counselling, and organisation and efficiency. The encounter lasts for 15 minutes, followed by feedback of 5-10 minutes' duration. Several clinical encounters with different experts are recommended, as the trainee then interacts with several patients posing a wider range of problems.7
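To make the structure of such a rating form concrete, the following is a minimal Python sketch of a MINI-CEX encounter record with the score bands described above; the field names are hypothetical, as no particular data format is prescribed.

```python
# Illustrative sketch only: field names are assumptions, not a published form.
from dataclasses import dataclass, field

DOMAINS = [
    "history_taking", "physical_examination", "professionalism",
    "clinical_judgment", "counselling", "organisation_efficiency",
]

def band(score: int) -> str:
    """Map a 9-point MINI-CEX rating to the bands described above."""
    if not 1 <= score <= 9:
        raise ValueError("MINI-CEX ratings run from 1 to 9")
    if score >= 7:
        return "highly satisfactory"
    if score >= 4:
        return "satisfactory"
    return "below expectation"

@dataclass
class MiniCexEncounter:
    trainee: str
    assessor: str
    setting: str  # e.g. clinic, ward or emergency department
    ratings: dict = field(default_factory=dict)  # domain -> score (1..9)

    def summary(self) -> dict:
        """Return the band achieved in each assessed domain."""
        return {domain: band(score) for domain, score in self.ratings.items()}
```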
Traditional assessment does not give trainees the opportunity to have their performance directly observed by different evaluators and to receive feedback. Hence, the MINI-CEX has been suggested as a tool for clinical teaching and assessment.8 It not only permits observation along with effective feedback on the skills observed, but also ensures that different evaluators observe the trainee's clinical skills at different points of their training. Moreover, the observation and feedback occur across a variety of patient presentations in different clinical settings. The method is considered a reliable technique for assessing undergraduate as well as postgraduate trainees. Roughly 4 clinical encounters are adequate to achieve a 95% confidence interval (CI) <1 on the 9-point scale, and approximately 12-14 are required for a reliability coefficient of 0.7. Apart from postgraduate training, the MINI-CEX has been effectively implemented in undergraduate medical education,8 where the duration of observation along with feedback varies from 30 to 45 minutes.9 Furthermore, the MINI-CEX evaluates the trainee's ability to prioritise, focus on the diagnosis and formulate an appropriate management plan in the context of real clinical practice.
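The source does not state how these reliability figures were derived; one standard psychometric way to relate the reliability of a composite score to the number of encounters is the Spearman-Brown prophecy formula, illustrated in the hedged Python sketch below.

```python
# Hedged illustration (not from the article): Spearman-Brown prophecy formula.

def composite_reliability(single_encounter_r: float, n_encounters: int) -> float:
    """Spearman-Brown: R_n = n*r / (1 + (n - 1)*r)."""
    r = single_encounter_r
    return n_encounters * r / (1 + (n_encounters - 1) * r)

# If ~13 encounters yield a reliability of 0.7, the implied single-encounter
# reliability r solves 0.7 = 13r / (1 + 12r):
r1 = 0.7 / (13 - 0.7 * 12)  # ~0.152
for n in (1, 4, 8, 13):
    print(n, round(composite_reliability(r1, n), 2))
# 1 -> 0.15, 4 -> 0.42, 8 -> 0.59, 13 -> 0.7
```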
The MINI-CEX is well accepted by residents and faculty as a formative assessment tool, is feasible for the WPBA of postgraduate surgical trainees, and is considered a valid and reliable tool for assessing a trainee's competence.
DOPS
DOPS is considered a standard tool to test the "does" level of Miller's Pyramid of Clinical Competence.10 It was brought into widespread use in 2005 by the Foundation Programme in the United Kingdom.11 It is a tool where a trainee selects a procedure from an approved list and an assessor assesses them by directly observing their on-the-job performance of technical skills.12 The duration of each encounter is 20 minutes, with 15 minutes for observation and 5 minutes for constructive feedback. Trainees are scored on a 6-point rating scale where 1-2 is below expectation, 3 is borderline, 4 reflects achieving the expected level, and 5-6 is considered above the expected level. Trainees are assessed more than six times a year. Postgraduate medical education is increasingly making use of DOPS as a WPBA tool.12 Almost two decades ago, a variant of DOPS, known as the objective structured assessment of technical skills (OSATS), came into existence due to the evolving nature of technical training and advanced surgical techniques in medical education.13 OSATS has good psychometric properties compared to more traditional modes of assessment, like logbooks, which demonstrate poor content validity regarding the operative capability of trainees.13
DOPS is predominantly used in surgical subspecialties, in settings including operation theatres (OTs), labour rooms and EDs, rather than in general practice.1,10 The procedures range from basic surgical suturing and insertion of intravenous (IV) lines to more complex skills, like obtaining biopsies, autopsy and histological procedures, and technical and operative skills. The main components of DOPS include consent-taking, demonstration of understanding of the indications, technical ability, clinical judgment, awareness of complications, professionalism and communication skills.11
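A hedged sketch of how a single DOPS encounter might be recorded, combining the 6-point bands and the components described above; the procedure, component names and scores are illustrative only.

```python
# Illustrative only: bands follow the scale above; component names and
# scores are assumptions for this sketch, not a published form.
DOPS_BANDS = {
    (1, 2): "below expectation",
    (3, 3): "borderline",
    (4, 4): "meets expected level",
    (5, 6): "above expected level",
}

def dops_band(score: int) -> str:
    """Map a 6-point DOPS rating to its descriptive band."""
    for (lo, hi), label in DOPS_BANDS.items():
        if lo <= score <= hi:
            return label
    raise ValueError("DOPS ratings run from 1 to 6")

# One encounter: 15 minutes of observation plus 5 minutes of feedback,
# rated against the components listed above (hypothetical scores).
encounter = {
    "procedure": "intravenous line insertion",
    "ratings": {
        "consent_taking": 4,
        "understanding_of_indications": 5,
        "technical_ability": 3,
        "clinical_judgment": 4,
        "awareness_of_complications": 4,
        "professionalism": 5,
        "communication_skills": 4,
    },
}
print({component: dops_band(s) for component, s in encounter["ratings"].items()})
```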
The combination of direct observation of a procedure and immediate feedback is the major hallmark of DOPS. It has an effective role in facilitating students' learning, providing an opportunity for students to receive constructive feedback on their performance immediately upon completion of the required task. One possible reason for students performing well on DOPS is internal motivation and increased confidence. However, there are growing challenges pertaining to the implementation of DOPS, which include unfamiliarity among faculty and residents, inadequate training of assessors and residents, and time constraints in busy hospital settings.12 One of the major challenges is the trainee's self-consciousness under observation, which makes DOPS a measure of competence instead of an assessment of performance. Lastly, the trainee-assessor relationship can serve as a source of judgment bias, and, hence, there is a need for multiple internal and external assessors.
According to the Royal College of Physicians, DOPS is considered a fairly valid and reliable tool, especially in comparison with logbooks.2 It sequentially samples the curricular content based on the appropriate level of training and shows good content validity.11 DOPS has high face validity because the trainees are observed directly and provided feedback on their performance in a busy clinical setting. However, it is not very feasible to implement DOPS in a busy workplace setting due to time pressures, the unavailability of trained assessors and resource constraints for the hospital administration. Lastly, due to the provision of immediate constructive feedback to the trainees by the assessors, DOPS is a fairly acceptable tool to both examinees and examiners, and has high educational impact.10 There are several concerns regarding the reliability of DOPS. Case specificity is found to be a major factor affecting its reliability, and inter-case variation is another factor identified for lower reliability. In order to achieve sound reliability, DOPS requires fewer cases and assessors compared to the MINI-CEX. Despite evidence-based literature on the characteristics, components and process of DOPS, further studies are required to prove its usefulness as a WPBA tool.10
CBDs
Traditionally, consultants have used cases for discussion after ward rounds and outpatient consultations. CBD builds on these traditional methods by allowing for a structured, one-to-one discussion with the trainee at a dedicated time.14
CBD is a semi-structured, performance-based assessment tool which aims to assess the clinical reasoning of trainees in order to understand the justification for decisions made in real practice. Typically, an encounter lasts 20-30 minutes, with 5 minutes for feedback.
The domains that can be assessed using CBD include clinical reasoning, clinical assessment, management planning (including investigation, follow-up and referral), communication, professionalism, etc.14-16 The focus is on assessing the application of knowledge. In one encounter, the focus of discussion is usually one or two aspects of the case.14,15
The case chosen for discussion should be thought-provoking, with some conflict in decision-making.14 Either the supervisor or the trainee may choose the case.17 Protected time and environment should be ensured for an uninterrupted session.14,16 The trainee presents a review of the case, and the supervisor and the trainee mutually decide on an area to focus on during the discussion. The discussion may be initiated with a review of the patient's notes, assessing the trainee's knowledge and clinical decision-making skills. The trainees should be aware of their developmental needs and should reflect on their own performance, identifying their own strengths and areas for improvement. The role of the supervisor is to explore the rationale behind the trainee's decisions, avoiding a mini-lecture. Using triggers and open-ended probing questions should be encouraged, while closed-ended, direct, knowledge-based questions are best avoided. The questions should be designed to seek evidence of competence, not to test knowledge.1,18 The session should conclude with specific, non-judgmental feedback focussing on areas that went well and what might have been done better or differently, and with agreement on a future learning plan.17 A structured form is used by the supervisor to record the encounter, noting the level of the trainee, the setting (inpatient or outpatient) and the level of difficulty of the case. Based on the discussion held, the trainee's competence is rated along with the strengths and the areas for development. Both the trainee and the supervisor keep a record of the encounter.14-16
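The structured form described above might be represented as follows; this is a minimal sketch with hypothetical field names, not any college's actual form.

```python
# Minimal sketch, assuming hypothetical field names for the CBD record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CbdRecord:
    trainee_level: str              # e.g. year of training
    setting: str                    # "inpatient" or "outpatient"
    case_difficulty: str            # level of difficulty of the case
    focus_areas: List[str]          # one or two aspects chosen for discussion
    competence_rating: str          # overall judgement from the discussion
    strengths: List[str] = field(default_factory=list)
    areas_for_development: List[str] = field(default_factory=list)
    agreed_learning_plan: str = ""  # agreed at the end of the session
```

Both parties would retain a copy of such a record, consistent with the practice described above.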
The strength of CBD lies in the fact that it is a performance-based formative assessment, encouraging trainees to reflect on their own performance and embedding effective, non-judgmental, specific feedback.15-17
Successful implementation requires active faculty and trainee participation as, like other WPBA tools, CBD is trainee-led. Time limitations and lack of training are barriers in the way of successful implementation.16
Trainees are required, as well as expected, to participate in multiple patient encounters with multiple evaluators during their training. CBD is a valid assessment tool, correlating with other measures, like chart audit and scores on internal certification.15 Some studies suggest 6 CBDs are required in a year.19 The Royal College of General Practitioners (RCGP) recommends a minimum of 4 CBDs in each of the first two years of training, 2 in each 6-month period,18 whereas the training matrix of the Royal College of Obstetricians and Gynaecologists (RCOG) recommends a minimum of 8 CBDs spread over each year of training.20 A small number of CBDs will not be representative of the curriculum, and, hence, the content validity will be limited. The reliability of CBD is influenced by the raters' training, the uniformity of assessment and the degree of standardisation.19
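These published minima can be expressed as a simple compliance check; the sketch below restates the figures cited above purely for illustration, and the function names are hypothetical.

```python
# Illustrative thresholds restating the minima cited above (RCGP ref 18,
# RCOG ref 20); names and structure are assumptions for this sketch.
RCGP_MIN_PER_SIX_MONTH_PERIOD = 2
RCOG_MIN_PER_YEAR = 8

def meets_rcgp_minimum(cbds_per_period: list) -> bool:
    """True if every 6-month period in the first two years has >= 2 CBDs."""
    return all(n >= RCGP_MIN_PER_SIX_MONTH_PERIOD for n in cbds_per_period)

def meets_rcog_minimum(cbds_in_year: int) -> bool:
    """True if at least 8 CBDs are spread over the training year."""
    return cbds_in_year >= RCOG_MIN_PER_YEAR

print(meets_rcgp_minimum([2, 3, 2, 2]))  # True: four 6-month periods covered
print(meets_rcog_minimum(6))             # False: below the RCOG minimum
```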
Surgical trainees regard CBD as a useful tool for learning as it allows discussion of complicated cases, encourages reflection and promotes higher-order thinking.21
Multisource feedback
Multisource feedback (MSF) is another assessment tool, relying on input gathered from multiple raters. It is often also referred to as 360-degree evaluation or multi-rater feedback, as perspectives and opinions are collected from various 'viewpoints' surrounding the subject in the workplace environment.22-25 The questionnaires may be distributed across the various levels of hierarchy, including supervisors, peers and colleagues, both clinical and non-clinical, students and even patients.24,26,27 Questionnaires are kept brief, and are typically the same for all raters to ensure the objectivity of the assessment.26,27 The raters are also routinely kept anonymous in order to prevent professional repercussions and to ensure unbiased feedback.
This tool is useful for assessing observable behaviours, such as interpersonal and problem-solving skills,27,28 and it is superior to simple one-on-one feedback. Proformas are used for evaluation as they can gather a more comprehensive view of the subject's competencies by taking into consideration reviews from multiple facets of their interactions. MSF is particularly important when either direct supervision is not possible or the subject is part of a larger group.24,29 In surgical training, one must keep in mind that a training supervisor will not always be able to directly observe each and every interaction the trainee has; hence, reliance on a tool like MSF is important for the collection of such intricate information.24,27
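As an illustration of how MSF responses are typically aggregated so that individual raters remain anonymous, consider the following sketch; the rater groups, competencies and rating scale are assumed for illustration, not prescribed here.

```python
# Illustrative only: aggregate anonymous ratings into per-group means so the
# trainee sees group-level feedback rather than individual responses.
from collections import defaultdict
from statistics import mean

# (rater_group, competency, rating) triples on a shared, assumed scale
ratings = [
    ("supervisor", "communication", 4), ("peer", "communication", 5),
    ("peer", "communication", 4),       ("nurse", "communication", 3),
    ("peer", "professionalism", 5),     ("nurse", "professionalism", 4),
]

grouped = defaultdict(list)
for group, competency, score in ratings:
    grouped[(group, competency)].append(score)

# Report the mean per rater group and competency; raters stay anonymous.
for (group, competency), scores in sorted(grouped.items()):
    print(f"{competency:15s} {group:10s} mean={mean(scores):.1f} n={len(scores)}")
```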
Due to its flexibility in terms of design and execution, MSF has found popularity in the medical community as a means of workplace assessment. In the clinical setting, MSF has been shown to aid in monitoring performance and eventual self-improvement,27-29 and it is commonly used for appraisal and revalidation purposes. It is also applied in the evaluation and monitoring of core competencies in trainees, as questionnaires can easily be adjusted to assess a vast range of non-procedural competencies.27,28
Studies have shown that the incorporation of MSF in surgical training offers reliable, feasible and valid assessment of trainees on various non-technical competencies without any undue burden on the reviewers.22,27-30 These key competencies include communication skills, both verbal and written, professionalism, medical expertise, humanism, collegiality and the capacity to learn. Such competencies may often be overlooked by other evaluation techniques. Miller et al. suggested that the benefits are greater when feedback is accurate and helps identify the weaknesses and strengths of the candidate.29 Such well-rounded feedback has been found to provoke contemplation and initiate positive behavioural change amongst surgical trainees.22,26,31
One major limitation of MSF in surgical training is that it has been found to be unsuitable for the evaluation of procedural competencies,22,27,29 and it therefore cannot be used as the sole evaluation methodology in such programmes. Furthermore, there is the potential for bias and dishonesty while rating a candidate; factors like favouritism or general dislike may influence the raters' decisions. Though the collection of data from varying sources may help offset minor grudges in larger circles, this may be a problem in assessments amongst smaller groups.25,28,31
Conclusion
To make WPBA more effective, it is essential for all trainees to be assessed and to receive feedback after observation. For successful implementation, competencies should be matched against the best-fit tool. Multiple encounters with multiple raters should be practised to increase the validity and reliability of the judgment. The feedback given to trainees should be reliable and should focus on the skills being observed; non-judgmental and timely feedback is more effective. The onus of assessment for most WPBA rests with the trainee. In this context, for the successful implementation of WPBA, it is imperative to train the faculty for active participation and contribution. Programme directors need to build certain mechanisms into educational programmes to make them more effective and helpful for trainees and to enhance further learning.
Disclaimer: None.
Conflict of Interest: None.
Source of Funding: None.
References
1. Mortaz Hejri S, Jalili M, Shirazi M, Masoomi R, Nedjat S, Norcini J. The utility of mini-Clinical Evaluation Exercise (mini-CEX) in undergraduate and postgraduate medical education: protocol for a systematic review. Syst Rev 2017;6:146. doi: 10.1186/s13643-017-0539-y.
2. Welchman SA. Educating the Surgeons of the Future: The Successes, Pitfalls and Principles of the ISCP. Ann R Coll Surg Engl 2012;94:1-3. DOI: 10.1308/147363512X13189526438837
3. Evgeniou E, Peter L, Tsironi M, Iyer S. Assessment methods in surgical training in the United Kingdom. J Educ Eval Health Prof 2013;10:2. doi: 10.3352/jeehp.2013.10.2.
4. Singh T, Sharma M. Mini-clinical examination (CEX) as a tool for formative assessment. Natl Med J India 2010;23:100-2.
5. Noel GL, Herbers JE Jr, Caplow MP, Cooper GS, Pangaro LN, Harvey J. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann Intern Med 1992;117:757-65. doi: 10.7326/0003-4819-117-9-757.
6. Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: a method for assessing clinical skills. Ann Intern Med 2003;138:476-81. doi: 10.7326/0003-4819-138-6-200303180-00012.
7. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med 1995;123:795-9. doi: 10.7326/0003-4819-123-10-199511150-00008.
8. Malhotra S, Hatala R, Courneya CA. Internal medicine residents' perceptions of the Mini-Clinical Evaluation Exercise. Med Teach 2008;30:414-9. doi: 10.1080/01421590801946962.
9. Hauer KE. Enhancing feedback to students using the mini-CEX (Clinical Evaluation Exercise). Acad Med 2000;75:524. doi: 10.1097/00001888-200005000-00046.
10. Naeem N. Validity, reliability, feasibility, acceptability and educational impact of direct observation of procedural skills (DOPS). J Coll Physicians Surg Pak 2013;23:77-82.
11. Khan MA, Gorman M, Gwozdziewicz L, Sobani ZA, Gibson C. Direct Observation of Procedural Skills (DOPS) as an assessment tool for surgical trainees. J Pak Med Stud 2013;3:137-40.
12. Shafiq Z, Ullah H, Sultana F. Challenges in implementing direct observation of procedural skills as assessment tool in postgraduate training of general surgery. Ann Pak Inst Med Sci 2019;15:66-70.
13. Noordin S, Allana S. OSATS for total knee replacement: Assessment of surgical competence in the operating room. J Pak Med Assoc 2015;65(Suppl 3):s52-4.
14. Primhak R, Gibson N. Workplace-based assessment: how to use case-based discussion as a formative assessment. Breathe (Sheff) 2019;15:163-66. doi: 10.1183/20734735.0209-2019.
15. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach 2007;29:855-71. doi: 10.1080/01421590701775453.
16. Jyothirmayi R. Case-based discussion: assessment tool or teaching aid? Clin Oncol (R Coll Radiol) 2012;24:649-53. doi: 10.1016/j.clon.2012.07.008.
17. Mehta F, Brown J, Shaw NJ. Do trainees value feedback in case-based discussion assessments? Med Teach 2013;35:e1166-72. doi: 10.3109/0142159X.2012.731100.
18. The Royal College of General Practitioners (RCGP). Case-Based Discussion. [Online] 2020 [Cited 2020 December 09]. Available from URL: https://www.rcgp.org.uk/training-exams/training/new-wpba/cbd.aspx
19. Williamson JM, Osborne AJ. Critical analysis of case based discussions. Br J Med Pract 2012;5:514-5.
20. The Royal College of Obstetricians and Gynaecologists (RCOG). Training matrix: Annual expectation of educational progression ST1 to ST7 in O & G for 2018-19. [Online] 2020 [Cited 2020 December 09]. Available from URL: https://www.rcog.org.uk/globalassets/documents/careers-and-training/assessment-and-progression-through-training/training-matrix.pdf
21. Phillips A, Lim J, Madhavan A, Macafee D. Case-based discussions: UK surgical trainee perceptions. Clin Teach 2016;13:207-12. doi: 10.1111/tct.12411.
22. Violato C, Lockyer J, Fidler H. Multisource feedback: a method of assessing surgical practice. BMJ 2003;326:546-8. doi: 10.1136/bmj.326.7388.546.
23. Lockyer JM, Violato C, Fidler H. The assessment of emergency physicians by a regulatory authority. Acad Emerg Med 2006;13:1296-303. doi: 10.1197/j.aem.2006.07.030.
24. Rodgers KG, Manifold C. 360-degree feedback: possibilities for assessment of the ACGME core competencies for emergency medicine residents. Acad Emerg Med 2002;9:1300-4. doi: 10.1111/j.1553-2712.2002.tb01591.x.
25. Overeem K, Wollersheim H, Driessen E, Lombarts K, van de Ven G, Grol R, et al. Doctors' perceptions of why 360-degree feedback does (not) work: a qualitative study. Med Educ 2009;43:874-82. doi: 10.1111/j.1365-2923.2009.03439.x.
26. Nurudeen SM, Kwakye G, Berry WR, Chaikof EL, Lillemoe KD, Millham F, et al. Can 360-Degree Reviews Help Surgeons? Evaluation of Multisource Feedback for Surgeons in a Multi-Institutional Quality Improvement Project. J Am Coll Surg 2015;221:837-44. doi: 10.1016/j.jamcollsurg.2015.06.017.
27. Al Khalifa K, Al Ansari A, Violato C, Donnon T. Multisource feedback to assess surgical practice: a systematic review. J Surg Educ 2013;70:475-86. doi: 10.1016/j.jsurg.2013.02.002.
28. El Boghdady M, Alijani A. Feedback in surgical education. Surgeon 2017;15:98-103. doi: 10.1016/j.surge.2016.06.006.
29. Miller A, Archer J. Impact of workplace based assessment on doctors' education and performance: a systematic review. BMJ 2010;341:c5064. doi: 10.1136/bmj.c5064.
30. Neshatavar R, Amini M, Takmil F, Tabei Z, Zare N, Bazrafcan L. Using a Modified 360° Multisource Feedback Model to Evaluate Surgery Residents in Shiraz University of Medical Sciences. Future Med Educ J 2017;7:30-4.
31. Ferguson J, Wakeling J, Bowie P. Factors influencing the effectiveness of multisource feedback in improving the professional practice of medical doctors: a systematic review. BMC Med Educ 2014;14:76. doi: 10.1186/1472-6920-14-76.