Ep32: Diagnostic Error Part 1—Cognitive Bias

Date: 20 December 2017

This is the first of two episodes about errors in diagnostic reasoning. Misdiagnosis or delayed diagnosis occurs in 10-15 per cent of acute presentations, although fortunately only a tenth of these lead to serious consequences. But of concern is the fact that this figure hasn't changed in three decades, despite progress in clinical knowledge. Errors in diagnostic reasoning occur at the same rate in senior clinicians as they do in juniors, even though mistakes from poor examination or knowledge become less frequent as one gains experience.

Compared to problems in maths or physics, diagnostic problems are thought of as ill-structured: because information isn't readily available, the problem can keep changing and often you're not certain you've reached a solution and are free to stop searching. Cognitive errors result from jumping to conclusions on the basis of intuition and incomplete information. There are names for around a hundred different types of such bias. In this episode, the most common types are discussed, along with strategies to force a more considered process of diagnostic reasoning.

In about two thirds of cases, systems problems like design and workflow contribute to diagnostic error. These will be discussed in the second episode of this series.

Claim CPD credits via MyCPD for listening and using resources related to this episode listed below.

Credits

Guests
Dr Nicolas Szecket FRACP (Auckland City Hospital)
Dr Arthur Nahill FRACP (Auckland City Hospital)

Production
Written and produced by Mic Cavazzini. Music courtesy of Mystery Mammal ('To be Decided,' 'Data'), RGIS VICTOR ('Lampagisto') and Lobo Loco ('Spook Castle'). Image courtesy of iStock. Executive producer Anne Fredrickson.
Editorial feedback for this episode was provided by RACP members Dr Paul Jauncey, Dr Alan Ngo, Dr Katrina Gibson, Dr Marion Leighton, Dr Michael Herd and Dr Joseph Lee.

References

Appendix
Interview Outtakes Containing a Worked Case Study [RACP]

Related Podcasts and Videos
Interview with Gurpreet Dhaliwal [IMReasoning]
Interview with Larry Weed [IMReasoning]
Debrief from Diagnostic Error in Medicine Conference 2017 [IMReasoning]
Cognitive Debiasing, Situational Awareness and Preferred Error [Emergency Medicine Cases]
Cognitive Decision Making and Medical Error [Emergency Medicine Cases]
Diagnostic Error with Mark Graber [onthewards]
What Were You Thinking? [Ian Scott, RACP Congress 2017]

Journal Articles
Diagnostic Error [Kuhn, AEMJ]
Diagnostic Error in Internal Medicine [Graber et al, Arch Intern Med]
Premature Closure? Not So Fast [Dhaliwal, BMJ]
Cognitive Debiasing 1: Origins of Bias and Theory of Debiasing [Croskerry et al, BMJ Quality and Safety]
Cognitive Debiasing 2: Impediments to and Strategies for Change [Croskerry et al, BMJ Quality and Safety]
The Importance of Cognitive Errors in Diagnosis and Strategies to Minimize Them [Croskerry, Academic Medicine]
From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making [Croskerry, NEJM]
Cognitive Interventions to Reduce Diagnostic Error: A Narrative Review [Graber et al, BMJ Quality and Safety]
The Accuracy of Bedside Neurological Diagnoses [Chimowitz et al, Annals of Neurology]

Transcript

MIC CAVAZZINI: Welcome to Pomegranate Health, I’m Mic Cavazzini. Before we start the show, an invite to any members of the RACP keen to provide advice on development of the podcast. We need avid listeners to help advise on future topics, suggest experts to interview, and provide feedback on audio before each episode is published. All this occurs by email, and it’s not a huge commitment of time. Please go to the website for more information about the application process which ends mid-January. Now onto the story.

This is the first of two episodes about diagnostic reasoning. Compared to maths or physics, diagnostic problems are described as ill-structured because information isn’t readily available, the problem can keep changing and you’re often not certain you’ve reached a solution and are free to stop searching.

Diagnostic errors are most common in general practice, internal medicine, and emergency medicine. This is unsurprising given these are the areas where the most undifferentiated patients present. Misdiagnosis or delayed diagnosis occurs in 10-15 per cent of acute presentations, although fortunately only a tenth of these lead to serious consequences for the patient. But this figure hasn’t budged in three decades, despite progress in other clinical areas.

In the next episode, we’ll look at the design and workflow problems that might explain this fact, but today we’ll talk about the cognitive biases that can derail logical reasoning for the individual clinician. For this I went to Auckland to meet two physicians who have an entire podcast series on this very subject, called IMReasoning.

NIC SZECKET: So, my name is Nic Szecket. I am a general internist. I trained in Toronto, Canada, and I’ve been living in New Zealand now for seven years, and my passion is medical education and clinical reasoning.

ART NAHILL: I am Art Nahill. I trained as a general internist in Boston and moved to New Zealand 12 years ago with my Kiwi wife and two children and have been, for the last several years, Director of Medical Education here within General Medicine. I also do lots of creative writing, so that’s me.

MIC CAVAZZINI: Before we begin, maybe just explain why you felt the need to start your own podcast about diagnostic reasoning.

ART NAHILL: Well I think in the large part it’s because differential diagnosis building is a skill that is not taught very well at all, even in current modern day medical schools. And many of us, myself included, floundered for many, many years unnecessarily thinking that we were stupid, that we just didn’t have what it took to be a good doctor, when really what it took was a system to develop differential diagnoses.

MIC CAVAZZINI: OK, let’s start with some cognitive psychology. The theory goes that all the conditions you’ve studied or had experience of are schematised in your long-term memory as a library of ‘illness scripts.’ Then the case presentation before you triggers recall of a handful of scripts that might match that pattern.
As a clinician becomes more experienced, they have a more varied library of illness scripts. Occasionally a physician will miss a diagnosis of some rare disease because that unique script was missing from their library. But most diagnostic errors involve misdiagnosis of common diseases. So, what does this say about the nature of the cognitive problem?

ART NAHILL: Well, common diseases often present uncommonly and that’s where the errors come in. When we’re trained in medical school, we tend to have disease specific illness scripts so congestive heart failure presents in this way, pulmonary embolus presents in this way. But when somebody presents to ED, for example, they don’t come and say, “Doctor, I’ve got heart failure,” right? They come with shortness of breath. When we get out and we begin to practice, we realise that we need to refile our illness scripts so they are filed under presentations.

MIC CAVAZZINI:  It’s almost like the more experience you get, the more you’re able to cross-reference these markers.

ART NAHILL: That’s exactly what it is, we develop a system of cross-referencing that’s much more sophisticated as we gain experience.

MIC CAVAZZINI: On that same idea that you’ve raised Art—studies over many years show that it’s always the same handful of conditions which are missed: pulmonary embolism, drug reactions, acute coronary syndrome and certain malignancies. It’s such a varied list of conditions. What do they have in common—it’s that they can present very subtly or across a spectrum.

NIC SZECKET: But I think that gets to what Art was saying, is that if an acute coronary syndrome or an acute myocardial infarction presents as 60-year-old male with a history of diabetes presents with crushing retrosternal chest pain, shortness of breath and collapse or something like that, the acute MI is not going to get missed in those situations. It’s when it presents as vague epigastric pain or—

ART NAHILL: —shoulder pain or nausea and vomiting—

NIC SZECKET: —and so if we could get people to recognise more automatically that whenever you see someone who presents with shoulder pain, you should think, on top of other diagnoses, you should think maybe it’s an MI.

MIC CAVAZZINI: There was a nice example you gave in one of your podcasts about, in order to slow yourself down, you sort of do that miniaturisation, fly through the body and go, “OK, there’s this system, there’s a skeletal system…”

ART NAHILL: I think that’s where I referred to the magic school bus.

MIC CAVAZZINI: Yeah, do you want to quickly…?

ART NAHILL: Sure, so when we’re teaching fourth year medical students, or even some of our juniors, they’ve never really been taught any formal way, or formal structures to use when they’re trying to come up with a differential diagnosis. They often at times don’t even know where to begin and so most of them rely on acronyms. I don’t use acronyms, I can’t remember what they all stand for.

So for something like shortness of breath where it’s very undifferentiated, both Nic and I suggest that people think first of a systems approach and then within that system begin to think anatomically. I sort of start on the outside and I work my way in very deliberately to the inside—so I think about, for the respiratory system, the outermost part being the chest wall, then there’s the pleura, the pleural space, the alveoli, the interstitium, the airways both small and large, the vasculature.

And I think very, very simply, what can go wrong pathophysiologically with those things that could cause somebody to feel short of breath? Again, they don’t have to explore all of them, but it at least gives them something they can potentially use when they’re facing an undifferentiated symptom.

MIC CAVAZZINI: A prevalent model in cognitive psychology is the dual process theory, popularised by Daniel Kahneman’s book Thinking, Fast and Slow. Fast thinking is known as ‘System 1.’ It’s said to involve subconscious pattern-recognition strategies and is shaped by a lifetime of associative memory.

But jumping to conclusions based on these intuitions can lead to cognitive error. By contrast, ‘System 2’ thinking is more methodical and rational. It takes time and cognitive effort to work through, however, so we end up using System 1 thinking 90 per cent of the time. I asked Nic Szecket to share his ‘gunslinger’ analogy.

NIC SZECKET: System 1 in the gunslinger analogy, System 1 would be the guy who shoots from the hip, you know—big shotgun, the first thing that moves, shoots from the hip, fires. Whereas System 2 would be the sniper—so, takes into account the distance, the wind speed, the movement, uses careful calibration and doesn’t actually fire until they’ve thought about it more carefully.

MIC CAVAZZINI: But it takes time to set that up.

NIC SZECKET: It does, yeah.

MIC CAVAZZINI: And the two systems we’re talking about, it’s not just pop psychology in black boxes, there’s good evidence from fMRI studies and behavioural testing that different neural pathways exist, and diagnostic reasoning involves a constant ebb and flow between these two processes. But when intuition leads us to ignore important information, that’s what we call cognitive bias and Gurpreet Dhaliwal, who you interviewed recently, reminds us in the BMJ that ‘heuristics are not a “bug” in our neural software, they’re an essential feature of the program. The vocabulary of bias makes us forget that they work most of the time.’ Is that a fair—

ART NAHILL: —yeah, I think System 1 thinking gets a bad rap, to be honest. We spend most of our time in it because it’s so efficient and it’s usually right and evolutionarily we probably wouldn’t have developed such a dependence upon System 1 if it was a horrible failure. It’s just that when you’re dealing with medicine and people’s lives and illness, you want to try to bypass all of those built-in little glitches in the system. But the system generally works pretty well.

MIC CAVAZZINI: When you’re escaping sabre-toothed tigers…

ART NAHILL: It works great. You don’t want to sit there and debate, you know, “Is that animal with the large fangs and the spots, does that really look like a…”

MIC CAVAZZINI: “This footprint could be a large cat…”

ART NAHILL: Right, you just want to react and you want to do it quickly.

NIC SZECKET: I don’t know, I think the division between 1 and 2 is probably not as clear as we make it out to be. It’s a model to think about the different processes, but we may not be able to ever separate what’s actually happening from minute to minute.

MIC CAVAZZINI: One study examined the types of diagnostic errors made by junior doctors, senior residents and consultants in admissions to a neurology ward. And, as you would expect, errors resulting from poor examination technique, or what they called an inadequate fund of knowledge, became less frequent over the career, but errors in diagnostic reasoning were still significant in more experienced clinicians. Does that mean you can’t unlearn these traps of cognitive reasoning?

NIC SZECKET: This is actually something that’s being discussed fairly regularly by all the people that do research in this area. One camp is saying it’s worth knowing about these traps and learning about them and trying to protect yourself from them and there’s another camp that’s saying that’s not going to work.

And one of the points that we made recently was that it was worth knowing about them, maybe not to try and avoid falling into the traps yourself, but in order to identify them in your colleagues and peers, and if you truly have a collaborative model of practice where you can speak to your peers without the fear of criticism, then knowing about these traps can serve you well in a group practice.

MIC CAVAZZINI: A cognitive bias is an irrational or illogical blip in reasoning. There are names for about a hundred different biases, and here we’ll talk about a few of the most important.

For example, premature closure occurs when a clinician locks down on an attractive diagnosis before all the evidence is in, and before alternative diagnoses are excluded. This is frighteningly common. In one study where junior doctors were presented with written case studies, 53 out of the 58 subjects made this cognitive error in at least one of the three scenarios. Premature closure is often discussed alongside search satisficing.

NIC SZECKET: Search satisficing specifically refers to stopping the search for anything else because you’ve already found something that explains the presentation. So, someone comes in with the classic that we love—someone comes in with delirium or confusion. You find that the urine is infected and so you stop looking for other causes of confusion, even though the urine being infected could have been happening at the same time as something else.

ART NAHILL: And you’ve missed the subdural haematoma or something because you stopped at the urinary tract infection.

MIC CAVAZZINI: Yeah, which to unpack the etymology, that diagnosis suffices to explain the test result and your cursory glance at this presentation, the delirium, and maybe you’re satisfied that you’ve made the right call, but…

NIC SZECKET: Yeah, and the fact that you’re satisfied doesn’t actually mean that you’ve satisfied all the explanations because often when you do go back and look at those cases, you often could ask the question, ‘But then how do you explain the unilateral weakness?’ for example. And so, often in these cases if you go back and you find out if they have in fact satisfied the presentation, often there is something that’s not actually explained well enough by whatever they’ve chosen.

ART NAHILL: Or you’ve stopped searching prematurely. So, as an example, there was a study done a number of years ago looking at radiologists and the diagnoses that they made and they actually had X-rays that they showed to a number of radiologists which had both a large lung mass and a comminuted fracture of the clavicle and I think, if I remember the study correctly, up to a third of trained radiologists didn’t pick up on the comminuted fracture of the clavicle because they saw the big lung mass and cognitively moved onto the next problem.

MIC CAVAZZINI: Now Pat Croskerry, who’s the director of the Critical Thinking Program at Dalhousie University, uses the term ‘cognitive forcing strategies’ for techniques that snap us out of that System 1 thinking. The sort of questions you ask yourself to slow down the decision-making process and consider the steps more rationally. Is this the kind of process that you go through?

ART NAHILL: Well, we certainly emphasise that when we’re teaching juniors and we try to model it as well because I typically have a quick set of questions that I try to ask and I specifically ask them when the case seems like a slam-dunk. So that’s when I find that it’s most important to ask these questions, not with the really difficult ones where you’re sort of struggling and it doesn’t seem to fit anything in particular because—

NIC SZECKET: —in those cases you’re already—

ART NAHILL: —in System 2. You’re already there. So, try to remember always to ask the questions, ‘What else could it be? What’s the worst thing it could be? What do I not want to miss? Is there anything about this that doesn’t fit?’

And I think that’s really a key question to ask: are there any bits of the presentation or the investigations that don’t fit? So, we frequently, for example, will see somebody coming in with quote/unquote, ‘pyelonephritis,’ but their urine is actually not that significant, but yet they are treated as a pyelonephritis.

MIC CAVAZZINI: So we’ve talked about premature closure. A similar error is known as the anchoring heuristic. Nic can you explain this one?

NIC SZECKET: Anchoring is slightly different in that, for whatever reason, you’ve decided on a certain diagnosis and you just become attached to it and enamoured with it and you don’t want to let it go, it’s like your baby project so you want to hang onto it. So, it doesn’t necessarily mean that you made the wrong decision, it doesn’t imply what went into making that wrong decision, if it is wrong, but you hang onto it because it’s yours.

MIC CAVAZZINI: Yeah and going back to the processing models—selecting the short list of illness scripts is said to be the most critical stage. Simulation studies in the 1980s revealed that if the correct diagnosis wasn’t contained in the initial list of, say, five or six scripts, there was a large chance the right script was never going to come up. In fact, clinicians presented with a missed diagnosis will often say, “I just didn’t think of it.” Is that because you’re more likely to hover around the incorrect scripts you’ve pulled up, trying to make them fit, rather than going back to scratch and admitting that you missed some important distinguishing feature?

NIC SZECKET: I was going to say, that sounds right—you get anchored. You know it’s one of the biases that we recognise and anchoring happens within your own mind, so when you think of the three or four most common things you’re not going to go beyond that most of the time, and if you relay that information in any way through documentation, through handover, through whatever it is, to your colleagues, they’re also not going to have other things come to mind. We’re not doing an exhaustive analysis of the case.

MIC CAVAZZINI: And that’s also referred to as diagnostic momentum through the system, or the framing effect.

NIC SZECKET: Yeah, so diagnostic momentum I see as a continuation of anchoring, the difference being that the diagnosis gets anchored from person to person. Framing isn’t just purely about handing over a diagnosis and expecting every other team subsequently to continue with that same diagnosis. Framing is the information that you emphasise when you’re handing over patients; what you decide to even just mention will affect the way that people subsequently think about that diagnosis. The way that you write a note is going to affect the diagnoses that are elicited in other people’s minds.

ART NAHILL: One of our juniors did a recent study on the framing effect and he sent out surveys to emergency room physicians and general physicians, both registrars or residents and consultants or attendings, and in it there were essentially two different scenarios. And one of the scenarios was framed very suggestively for pulmonary embolus, the other one had exactly the same clinical information in it but was not framed suggestively, and the results were pretty dramatic.

So, the survey asked people to list the top three things on the differential diagnosis and what they would do to investigate it. In the scenario with the framed pulmonary embolus, I think almost all respondents had pulmonary embolus on their differential diagnosis, whereas only, I think, a third in the non-suggested scenario had pulmonary embolus even anywhere on their differential. The second disease was interstitial lung disease. Again, one framed suggestively, one with the same clinical information framed non-suggestively and the drop-off was remarkable.

MIC CAVAZZINI: Wow. And I could imagine if you’ve asked a radiologist—you might request some scans to examine a patient’s back pain and then the radiologist, knowing that you’re looking for back pain, might see the same usual undistinguished features but read in more closely and say, yeah, there could be a ruptured disc there or some hypertrophy. It’s almost…

NIC SZECKET: But that’s a difficult tension because you want to avoid ordering, say, a CT scan of an area of the body and saying, ‘I’m not going to tell you anything, just tell me what you see, tell me what you see.’ That’s very difficult. There’s so much information there and you do want to give radiologists a bit of a focus but I think the best we can do is say, ‘The patient has symptoms in this area, can you tell me what’s wrong with that area or what you see in that area.’

MIC CAVAZZINI: Cognitive errors are often laced with emotion and ego. Confirmation bias, for example, is the unconscious process of acknowledging only that evidence which supports the preferred working hypothesis. Nic Szecket says that everyone is susceptible to this.

NIC SZECKET: Yeah, that rings true. I see myself falling into this when I’ve got a diagnosis in mind that I know what to do with and it would be easier for me to deal with and manage—I really want it to be that diagnosis. And I see myself over-emphasising or under-emphasising clinical features to try and make it fit. So, yeah, you’ve got to stay away from that.

MIC CAVAZZINI: That’s a good example, yeah. And I could imagine a physician getting some bloods back for a patient and they’re kind of on the edge of the range, but you’re looking for a given diagnosis so you’re sort of self-fulfilling, you go, ‘Oh yeah, they’re just over, that’s probably…’ Is that an example of confirmation bias as well?

NIC SZECKET: I think it’s a form of confirmation bias. Blood work that’s sort of on the borderline, those are ones where we oftentimes don’t quite know what to do with it and so they’re quite easy to ignore if we want the diagnosis to be X as opposed to Y.

MIC CAVAZZINI: One cognitive forcing strategy that can snap you out of the confirmation bias is called ‘consider the opposite.’ So, deliberately seek evidence to refute your hypothesis rather than to prove it. Is this something you can do sort of seamlessly in that process, Art?

ART NAHILL: Well that gets back, if I can just say for a minute, to a scientific philosopher named Karl Popper who discussed this issue of the black swan. So, if you have a hypothesis that all swans are white and all you do is go out looking for white swans, you can never actually prove your theory true. You actually have to go out looking for the black swans in particular in order to try to prove your theory correct or incorrect.

NIC SZECKET: And we put this into practice on a daily basis. It comes back to the classic, pyelonephritis. The first thing—because I become immediately suspicious when I hear certain diagnoses—pyelonephritis is one of them, gastroenteritis—

ART NAHILL: —bilateral cellulitis—

NIC SZECKET: —bilateral cellulitis for sure. And the first thing that comes into my mind, and I think you operate the same way Art, is to say, ‘My job this morning is to disprove this theory.’

ART NAHILL: We don’t say that to the individual, out loud.

MIC CAVAZZINI: To the registrar or whatever?

NIC SZECKET: No, yeah, we don’t want to put anyone off. But in my mind, I’m thinking ‘My job this morning is to disprove this.’ And if I can’t do that then it means I’m satisfied with what they’ve come up with.

MIC CAVAZZINI: And so it doesn’t necessarily take any more work to go through that?

NIC SZECKET: No, I would say it does take more work.

ART NAHILL: Yeah it takes a bit more cognitive work. It does, for sure.

NIC SZECKET: Because the easiest thing to do would be to just say, ‘Pyelonephritis, antibiotics. I don’t have to make any decisions here, let’s move on. Who’s the next patient?’ It definitely takes more work.

MIC CAVAZZINI: And your emotional reaction towards a patient can affect how you care for them. We talk about affective or visceral bias. Can you give some examples of this that you’ve seen?

ART NAHILL: The common patients that elicit, or are at risk of eliciting, that kind of bias are patients with alcoholism, obesity, chronic pain or mental health issues, and I think one of the reasons, for example, that patients with mental health issues have a much higher mortality and morbidity is that we don’t like dealing with them and so we don’t take what they say seriously. We put everything down to mental illness and we don’t like not being able to make diagnoses.

So, the frequent flyer that has been in before, we also have what’s called a yin-yang heuristic whereby we think, ‘Oh well, they’ve been in so many times before, they’ve been worked up the yin-yang, what can I possibly offer in this case? I’m just going to move on.’ ‘Somebody else has tried this, I’m going to move on.’ So those patients oftentimes don’t get the same cognitive work that patients who aren’t frequent flyers get.

NIC SZECKET: One example that we spoke about recently was a patient who was admitted for the fifteenth time with complications of alcohol intoxication and longstanding alcoholism, and one thing that I asked the team to do after we rounded on that patient was to include in the summary of that patient in the progress notes every day what that patient did for a living before they fell into trouble. And I think everyone agreed that it changed the way in which that patient was viewed immediately.

MIC CAVAZZINI: Some cognitive errors come from a misplaced gut instinct. Regular exposure to a certain condition, or a particularly memorable case, can make it easier to bring a given illness script to mind. The danger is that this might not match with the relative prevalence of the condition compared to other diagnoses, a reasoning error we call base-rate neglect.

ART NAHILL: Base rate neglect is when people don’t take into account the natural rate of occurrence of a particular thing, and so they may tend to under-estimate its importance or over-estimate its importance.

NIC SZECKET: Maybe where it becomes more important is for rare conditions. As an example, if someone presents with palpitations and headache, you might be tempted to think about something like pheochromocytoma, which is extremely rare: in the States, out of every 800 people who are investigated with tests for pheochromocytoma, only one diagnosis is made. There are many other much more common causes for palpitations and headache and so it could be an arrhythmia, it could be anxiety, it could be dehydration. So that would be an example where base rate neglect is going to lead you down a path that you don’t need to go down.
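
[Editor’s note: to make the arithmetic behind base-rate neglect concrete, here is a minimal Bayesian sketch. It is not from the episode; it uses the one-in-800 base rate Nic quotes above together with hypothetical test sensitivity and specificity, chosen purely for illustration, to show why a positive work-up for a rare condition is still most likely a false positive.]

```python
# Minimal sketch of base-rate neglect using Bayes' theorem.
# The 1-in-800 base rate is the figure quoted in the episode;
# the sensitivity and specificity below are hypothetical values
# chosen only to illustrate the effect of a low prior.

prior = 1 / 800        # P(pheochromocytoma) among patients who are tested
sensitivity = 0.95     # hypothetical P(test positive | disease)
specificity = 0.95     # hypothetical P(test negative | no disease)

# Total probability of a positive result, then the posterior by Bayes' rule.
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(disease | positive test) = {posterior:.1%}")
# Roughly 2%: even after a positive result the diagnosis remains unlikely,
# which is the intuition that base-rate neglect overrides.
```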

MIC CAVAZZINI: And you might be led down that path by experience of an emotional and dramatic event. Emotion is a strong driver for forming memories; everyone remembers where they were on 9/11 but not where they left their car keys. And Gurpreet Dhaliwal gives the example of the patient who presents with a splitting headache: you make the diagnosis that it’s a migraine but it turns out to be a brain tumour.

So this, of course, will be a shocking and a memorable event in your career. He says if you started ordering a head CT in every patient with a headache you’d, of course, be accused of base rate neglect and over-testing and the aphorism that sort of reminds us against that is, ‘When you hear hoof beats, think horses—not zebras.’

ART NAHILL: Except if you’re practising in Africa. That’s what gets to base rate, right? So, if you’re practising in Africa, then zebras are more common than horses and so you have to know that and you have to know where you’re practising and what the locality is and the prevalence of various things in your locality, which is sometimes difficult, often difficult.

NIC SZECKET: But I can already see a Facebook post from someone correcting—someone’s going to say, ‘Actually, Art, there are more horses in Africa, but there are many more zebras than there are in North America.’

MIC CAVAZZINI: ‘The availability of David Attenborough documentaries has made you think…’ Yeah, so that little aphorism fits with the fact we mentioned earlier that most misdiagnosis is due to atypical presentation of common conditions rather than rare ones. I find it confusing then when we use the term, the availability heuristic. That can also refer to availability because of prevalence and so the hypothesis that’s easily brought to mind is the one you see every day.

ART NAHILL: So, it doesn’t just have to be because it’s something you see frequently, but it can be availability bias because it just comes to mind for a variety of reasons.

NIC SZECKET: It just has to be more available in your mind. People see a cool factor of diagnosing something rare like pheochromocytoma, so all of a sudden they want it to be that when there are much more common explanations.

ART NAHILL: But occasionally they are diagnosed and that’s—

NIC SZECKET: —yes, that’s the difficulty.

MIC CAVAZZINI: Croskerry describes a cognitive forcing strategy called ROWS, for ‘rule out worst-case scenario.’ When would you employ that approach and when are you over-testing? In the headache example described earlier, you’re not going to order a CT on every patient, are you?

ART NAHILL: If this patient, for example, is 70 years old and has never had a migraine, well then I’m loath to make the diagnosis of it being a migraine headache. If somebody has had a long history of migraines and they say, ‘Yeah, this feels a bit like my migraine,’ and I go through my ‘What’s the worst it can be, is there anything here that doesn’t fit?’ and I’m satisfied that I don’t need to do anything else, well then I will go with migraine. I don’t think, if you look at my data, I don’t think I utilise CTs or labs any more than anyone else in our department so I think, despite going through this process, I don’t think it necessarily leads to higher utilisation.

MIC CAVAZZINI: Dhaliwal compares this to a football coach who makes a high risk play and, depending on the outcome, it will be variously described as heroic and strategic or, alternatively, short-sighted and reckless. Without hindsight, can you tell the difference between a brilliant diagnostician and someone who’s being paranoid?

NIC SZECKET: People that come to mind when you say brilliant diagnostician are famous people like William Osler, or I dare put Gurpreet Dhaliwal in the same sentence. He has so many illness scripts. He has such an amazing associative memory but for the rest of us mortals, brilliant would mean taking the time and the care to consider all the other diagnoses and having a rational approach to reaching the final diagnosis.

ART NAHILL: We interviewed Larry Weed, who is a pretty prominent figure in medicine, or was until his recent death, but he often at times said that being a physician is not about brilliance, it’s about how you manage data, and I think that there’s a lot of truth to that. So the brilliant diagnostician, I think is often times a very confident one, so who comes in, makes a diagnosis, drops the mic, walks out—but are they brilliant, I don’t know. The other thing is we shouldn’t make diagnoses about heroism; it should be about reproducible, accurate, reliable information.

MIC CAVAZZINI: Thanks to Art Nahill and Nic Szecket for joining this episode of Pomegranate Health. The views expressed are their own, and may not represent those of the Royal Australasian College of Physicians.

There are another 15 minutes of outtakes from this interview at our website, including a case study to test your ability to pick heuristics in diagnostic reasoning. This comes from an online Qstream workshop that the College is running from 18 April; registrations open a month beforehand. At the web address racp.edu.au/pomcast, you’ll also find articles mentioned in this story and a few recommended podcasts. IMReasoning has interviews with Gurpreet Dhaliwal and Larry Weed and lots more challenging case studies. At the Canadian podcast Emergency Medicine Cases there are also some great discussions about decision-making and debiasing.

Please add your comments to the website to continue this discussion, and subscribe to Pomegranate Health using any podcasting app. The second episode of this story will be released in February, and deals with systems problems that contribute to diagnostic error.

I’m Mic Cavazzini. Thanks for listening.
