
Competence or excellence? Invited commentary on … Workplace-based assessments in Wessex and Wales

Published online by Cambridge University Press:  02 January 2018

Femi Oyebode*
Affiliation:
University of Birmingham, the National Centre for Mental Health, the Barberry, 25 Vincent Drive, Edgbaston, Birmingham B15 2FG, email: Femi.Oyebode@bsmhft.nhs.uk

Summary

This commentary discusses the problems with workplace-based assessments and questions whether these methods are fit for purpose. It suggests that there is a risk that assessment methods that focus on competence may undermine the need for trainees to aspire to acquire excellent skills rather than merely be competent, which is no more than a rigid adherence to standardised and routinised procedures.

Type
Education & training
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Royal College of Psychiatrists, 2009

Workplace-based assessments (WPBAs) have increased in importance as the limitations of tests of competence such as objective structured clinical examinations have become more obvious. Thus, assessment methods that rely on standardised and objectified tasks in a controlled, laboratory-like environment are returning full circle to the assessment of trainees in the real world of patients and the workplace.1 The concern about the variance introduced by real cases and the emphasis on the desirability of 'standardised patients' have lessened with the use of tools such as the mini-Clinical Evaluation Exercise (mini-CEX) in work-based assessments.2 Nonetheless, there is insufficient evidence that these new methods are fit for purpose, at least in psychiatry.3

Exam competence v. clinical performance

The arguments in favour of WPBAs derive from the conceptual distinctions that Miller4 drew attention to, namely between knowing, knowing how, showing how, and doing. These distinctions emphasise that competence (showing how), which is demonstrated in an artificial examination setting, may not reflect actual clinical practice, which is clinical performance in the workplace. The aim ultimately is to assess real performance in the workplace, hence workplace-based assessments. The issue, though, is how far the face validity of these new assessments, the idea that assessments of real-world encounters with patients are superior to objectified and artificial-world encounters, is accompanied by reliable and worthy results. The genuine fear is that WPBAs may be unreliable, lacking in rigour and not fit for purpose, whatever educational principles say or demonstrate.

Assessors’ training

Part of the problem is undue reliance on assessments by assessors who are inadequately trained in the use of the relevant assessment tools and who may also have little knowledge of the methods under consideration. This is what the papers by Babu et al5 and Menon et al6 demonstrate most clearly. Babu et al's finding that a significant proportion of educational supervisors have yet to be trained confirms what was already suspected by interested parties. Some of the quotations from their study also draw attention to the doubts and reservations that educational supervisors have about the new methods. However, there are other problems too. Assessors may or may not be doctors, and for more junior trainees need not be consultants at all. These variations must surely influence the reliability of the scores awarded and call into question the purpose of the tools. They also raise questions about what aspects of clinical skills non-doctors can reliably rate, with or without training, an issue discussed by Menon et al.

Bureaucracy

Furthermore, to the degree that these assessments are required as part of a culture of collecting evidence for a portfolio, there is a sense in which they are part of a bureaucratic process that is gradually becoming decoupled from the primary purpose, which is determining whether an individual doctor is good and safe enough for independent practice. As ever, the risk is that the token will come to be taken as the real thing. Our predilection as human beings to worship idols, or tokens, often surfaces in the most unusual places.

Interpreting the assessments

Finally, and more seriously, there is the conflation of formative and summative assessment methods. Tools that are ideal for diagnosing a trainee's strengths and weaknesses, and that ought to be used to guide training, have instead come to stand as evidence of competence and are collected as such. The trainees in both surveys5,6 recognise these problems and are at best ambivalent about the value of these assessment methods.

Shifting the focus

There is, though, a deeper problem. It can be argued that there is a disproportionate preoccupation with competence rather than expertise or excellence in the current system of training and appraising trainees. Any system that aims for a rigid adherence to conscious deliberation, to standardised and routinised procedures, for that is what competence is, is seeking not to institute proficiency or expertise but something less worthwhile and perhaps even damaging to the profession. There seems little doubt that the aim of these methods is to recognise, identify and sign off competence. There is a need for a greater understanding of the cognitive aspects of expertise,7 an understanding that will eventually lead to the recognition and acceptance that expertise requires judgement which is context dependent. Experts rely on intuitive appraisals of clinical situations, on automated algorithms that often defy verbal exposition. They tend to revert to laboured and slow analytic modes of thinking only in the face of novel situations. Once the nature of the acquisition of expertise is grasped, the implications for the overall goal of training in medicine will become clearer, and it may be that these modern assessment methods aim far too low and thereby stultify motivation.

References

1 Van Der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ 2005; 39: 309–17.
2 Norcini JJ, Blank LL, Arnold GK, Kimbal HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med 1995; 123: 795–9.
3 Searle GF. Is CEX good for psychiatry? An evaluation of workplace-based assessment. Psychiatr Bull 2008; 32: 271–3.
4 Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990; 65 (9 suppl): S63–7.
5 Babu S, Htike MM, Cleak VE. Workplace-based assessments in Wessex: the first 6 months. Psychiatr Bull 2009; 33: 474–8.
6 Menon S, Winston M, Sullivan G. Workplace-based assessment: survey of psychiatric trainees in Wales. Psychiatr Bull 2009; 33: 468–74.
7 Dreyfus H, Dreyfus S. Expertise in real world contexts. Organ Stud 2005; 26: 779–92.