Have you thought about how different professionals come to their clinical decisions and the implications of this?
The modern healthcare environment involves working with patients of high acuity and complexity under ever tighter constraints on time and resources. Because of this, new graduates need to hit the ground running when they start work, which places greater demand on developing clinical reasoning throughout their education.
It is well accepted that the more complex a scenario, the greater the demand on clinical reasoning ability. Despite this agreement, there is a lack of consensus on how clinical reasoning is defined and conceptualised. Terms like ‘decision making’, ‘critical thinking’, ‘problem solving’ and ‘diagnostic reasoning’ are often used interchangeably, and clinical reasoning approaches also differ between healthcare professions and specialties, confusing matters further.
Despite several recent systematic reviews exploring clinical reasoning approaches, there has been no clear evaluation of whether students emerge from university training with the clinical reasoning skills required to work in a complex and uncertain healthcare environment. This is despite universities needing to evaluate student ability before completion of training.
The experiential nature of clinical reasoning means that the classroom may not be the best environment in which to learn it or to evaluate its effectiveness. Furthermore, because clinical reasoning is a complex skill, knowledge and cognitive capacity alone are not the focus; rather, the ability to apply cognitive, psychomotor and affective skills is.
To date, little research has synthesised the evidence on which tools exist for evaluating students’ clinical reasoning. Without this information, clinical reasoning may be being evaluated without considering all aspects of the skill. The question then emerges: if we cannot evaluate clinical reasoning effectively, are we equipping our students with the skills they require?
With all this in mind, a new systematic review published in the International Journal of Environmental Research and Public Health set out to systematically identify the tools used to evaluate clinical reasoning and to determine the constructs those tools intend to assess.
This systematic review searched seven databases for peer-reviewed English-language publications, which were eligible for inclusion based on the following criteria:
- peer reviewed and published in English
- published between 2000 and 2018
- involved any health profession
- included pre-registration student education
- investigated clinical reasoning or related concepts
- the primary outcome was to develop or test an evaluation tool
Studies that used, but did not develop or test, an evaluation tool, or that evaluated a tool used only outside clinical placement or simulation settings, were excluded. Disagreements about inclusion were resolved by consensus, and the full search strategy is available.
PRISMA guidance was not followed during the review, the protocol was not pre-registered with PROSPERO, and it is unclear what tools were used to evaluate methodological quality and the certainty of the evidence obtained. These omissions limit the quality of this systematic review.
In total, 61 papers were included in this systematic review, the majority relating to medicine (n=28) and nursing (n=25), with the remainder relating to midwifery, physiotherapy, occupational therapy and pharmacy.
Across the papers, 29 different tools were evaluated, the two most common being the Script Concordance Test (SCT) and the Lasater Clinical Judgement Rubric (LCJR). ‘Script’ and ‘rubric’ based approaches are the most commonly used methods, with many of the other tools being variations on these.
The SCT is a test, used predominantly for diagnostic medical scenarios, in which the examinee’s answers are scored according to their level of agreement with responses judged by a panel of experts. The LCJR is a clinical rubric that describes performance expectations and provides a language for feedback and assessment, predominantly of nursing students’ clinical judgement development.
Conceptual Foundations of The Clinical Evaluation Tools
The authors grouped the included papers by construct, theoretical underpinning, evaluation tool or measure, and the discipline/profession of the student; this grouping is fully presented in Table 2 of the paper.
What is clear is that, even within a profession, there is a lack of agreement on the definitions of ‘clinical reasoning’ and ‘critical thinking’, which gives rise to different evaluation tools and approaches to assessment.
Medicine tends to use problem-based approaches to clinical reasoning, evaluated through simulation, whereas nursing uses a broad range of tools including mnemonics, scripts and scenario-based approaches. Physiotherapy and occupational therapy are similar, tending to use script-based approaches. Whichever approach is used, there is a startling lack of agreement on the frameworks underpinning the clinical reasoning tools themselves. To reiterate this point, some tools evaluate clinical reasoning via proxy activities such as clinical documentation, rather than the constituent components of reasoning.
Inconsistency of Terminology and Limited Professional Crossover
It appears that what is meant by ‘clinical reasoning’ differs between professions, demonstrated most clearly by nursing and midwifery being outliers in how they define the term. ‘Critical thinking skills’ and ‘clinical judgement’ are the focus of clinical reasoning during nurse education, whereas for other professions it is a much more generic term.
Regardless of how the term is defined, there is a startling lack of evidence on how the tools were created, and where such evidence exists the tools were largely developed independently of each other, resulting in limited applicability. This lack of consensus on terms and absence of cross-evaluation, particularly between professions, means there is currently no clear way to examine how students from different professions engage in clinical reasoning around the same patient scenario.
This has significant real-world implications given the diversification of healthcare roles, particularly in primary care, which may lead to communication challenges or have patient safety implications. If health professionals can speak the same language when discussing clinical reasoning, patients will receive better care.
Clearly, more research is needed to make sense of clinical reasoning processes across professions, enabling a consensus on terminology to be created and, in turn, a consistent way of evaluating ability. Once clinicians understand the routes of reasoning by which a decision was made, communication will improve and patients will benefit.