Ep34: Diagnostic Error Part 2—Systems

Date: 19 March 2018
In Episode 32 of Pomegranate Health, we discussed cognitive error in diagnostic reasoning. On this episode, we look at the systems pressures that increase the likelihood of medical error, crystallised by the recent prosecution of NHS paediatrician Dr Hadiza Bawa-Garba. Almost half of diagnostic errors are due to a combination of systems errors and individual cognitive error.

Obvious systems effects come into play in understaffed acute care units. If a clinician is forced to see too many patients without enough time to make careful examinations or reasoned decisions, errors become more likely. And of course, long hours and fatigue will only reduce cognitive capacity. Hospital systems also include the stepping stones of ordering, receiving and reviewing diagnostic tests and scans. Missteps and delays in this cascade contribute to a large proportion of diagnostic errors. Guests on this episode discuss mechanisms to improve efficiency.

Another important step in improving health systems is capturing and reporting error rates accurately. If clinical error is wrapped in a culture of blame and punishment, such disclosure becomes more difficult. This concern has been raised in response to the recent prosecution of U.K. National Health Service (NHS) paediatrician Dr Hadiza Bawa-Garba, who had her licence to practice medicine revoked for her role in the death of a young patient. Six-year-old Jack Adcock died on a chaotic day in 2011 at the Leicester Royal Infirmary that involved delays in the diagnosis and treatment of his sepsis. Today’s episode examines how widespread systems errors contributed to such mistakes.

Claim CPD credits via MyCPD for listening to this episode and using the related resources.

Credits

Guests
Professor Jeffrey Braithwaite FAIM, FACHSM, FAHMS, FFPH-RCP, FAcSS, Hon FRACMA (Australian Institute for Health Innovation, Macquarie University)
Associate Professor Ian Scott FRACP (Director, Department of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, University of Queensland)
Associate Professor David Heslop FRACGP (University of New South Wales).

Production
Written and produced by Mic Cavazzini. Additional audio recording from James Milson and Jennifer Leake. Music courtesy of Kai Engel ('Memories'), Jahzarr ('Become Death'), Sergey Cheremisinov ('Now You Are Here') and Loch Lomond ('Violins and Tea'). Image courtesy of Max Pixel. Executive producer Anne Fredrickson.
Editorial feedback for this episode was provided by RACP Fellows Paul Jauncey, Phillipa Wormald, Katrina Gibson, Rosalynn Pszczola, Andrea Knox, Philip Gaughwin, Rhiannon Mellor and Richard Doherty.

References

***Update
In August 2018 the Court of Appeal overturned the ruling against Dr Bawa-Garba, and in April 2019 she was permitted to return to practice as a paediatric trainee.

Cited in this Episode
Improving Diagnosis in Health Care [National Academies Press]
Best Care at Lower Cost: The Path to Continuously Learning Health Care in America [Institute of Medicine]
Diagnostic Error in Internal Medicine [Graber, Arch Intern Med]
Resilient Health Care: Turning Patient Safety on Its Head [Braithwaite, Int J Qual Health Care]
Clinical Decision-Making Tools: How Effective Are They in Improving the Quality of Health Care? [Trevana, Bond University]
Patient Notification and Follow-up of Abnormal Test Results [Boohaker, JAMA]
Diagnostic Error in Medicine: Analysis of 583 Physician-Reported Errors [Schiff, JAMA]

Podcasts
Tackling Drivers That Lead to Diagnostic Error [IM Reasoning]
Who or What Should Be Blamed When a Medical Tragedy Occurs? [Health Report, ABC]
Can We Make a Better Handover? Can I-PASS the Test? [KeyLIME]
Diagnostic Decision Making in Emergency Medicine [Emergency Medicine Cases]

Other Literature
Fumbled Handoffs: One Dropped Ball After Another [Gandhi, Ann Intern Med]
Differential Diagnosis: The Key to Reducing Diagnosis Error, Measuring Diagnosis and a Mechanism to Reduce Healthcare Costs [Maude, Diagnosis]
Predictors of the Effectiveness of Accreditation on Hospital Performance: A Nationwide Stepped-Wedge Study [Braithwaite, Int J Qual Health Care]
System Related Interventions to Reduce Diagnostic Error: A Narrative Review [Singh, BMJ]
Achieving Quality in Clinical Decision Making: Cognitive Strategies and Detection of Bias [Croskerry, Academic Emergency Medicine]
Spreading Human Factors Expertise in Healthcare: Untangling the Knots in People and Systems [Catchpole, BMJ Quality and Safety]
Evaluating Online Diagnostic Decision Support Tools for the Clinical Setting [Pryor, Stud Health Technol Inform]
Misdiagnosis at a University Hospital in 4 Medical Eras: Report on 400 Cases [Kirch, Medicine]
Coping with Errors [Don’t Forget the Bubbles]
Health Care Spending in the United States and Other High-Income Countries [Commonwealth Fund]

Case of Dr Hadiza Bawa-Garba
GMC v. Dr Bawa-Garba 2018 Judgment [England and Wales High Court]
Dr Bawa-Garba: Who's to Blame When a Medical Tragedy Occurs? [ABC]
Supporting junior doctors: The sad saga of Dr Bawa‐Garba [JCPH]
The Hadiza Bawa-Garba Case Is a Watershed for Patient Safety [BMJ Opinion]
Back to Blame: The Bawa-Garba Case and the Patient Safety Agenda [BMJ News]
An Account by Concerned U.K. Paediatric Consultants [54000 Doctors]
To Err is Homicide in Britain – The Case of Dr Hadiza Bawa-Garba [Jha, Medscape]
If Hadiza Bawa-Garba Worked in the U.S., She Would Still Be a Doctor [Jha, The Guardian]

Transcript

MIC CAVAZZINI: Welcome to Pomegranate Health. I’m Mic Cavazzini.

In January this year, U.K. paediatrician Dr Hadiza Bawa-Garba had her licence to practice medicine revoked for her role in the death of a patient. Six-year-old Jack Adcock died of septic shock on a chaotic day at the Leicester Royal Infirmary that involved delays in his diagnosis and treatment.

We already discussed cognitive error in the diagnostic process in episode 32. The case of Dr Bawa-Garba highlights the systems pressures that can exacerbate medical error. Today we’ll talk to three experts in health systems about problems in resourcing, diagnostic tests and scans, the ergonomics of health IT and the culture of disclosing error. Here’s Professor Jeffrey Braithwaite of Macquarie University.

JEFFREY BRAITHWAITE: I’m Jeffrey Braithwaite. I run the Australian Institute of Health Innovation and that’s a large research group of about 150 people doing research on patient safety, e-health, how care is organised. You know, no one has ever designed a system as complex as health care. I don’t care whether you’re talking about manufacturing or banking or the military. So, the question is, how do we understand that cognitive load of that individual amongst all the milieu of complexity and intricacy and difficulty?

MIC CAVAZZINI: On the 18th February 2011, Dr Bawa-Garba was the on-call paediatric registrar at Leicester Royal Infirmary, in her sixth year of specialty training after studying medicine in the same city. She had recently returned from 14 months of maternity leave and had not yet been inducted into the hospital’s workings. The account of that day presented here is collated from court documents, the BMJ and many other sources. For brevity, these are all linked to the transcript of this podcast found on the website.

The Children’s Assessment Unit at Leicester Royal Infirmary normally handles 15 acute admissions coming from either the emergency department or from GP referrals. The registrar supposed to be covering the CAU was absent on that day, and the consultant was teaching in a neighbouring town.

Therefore, Dr Bawa-Garba was left in charge of the CAU and of paediatric emergency. This meant she was overseeing surgical admissions, giving advice to midwives and taking calls from GPs, as well as supervising several wards spread over four floors. She’d even missed the morning handover because she had to attend a cardiac arrest, and many of the nursing staff were from an agency due to a shortage of permanent staff.

Speaking generally, staffing and training are the most valuable investments in patient safety, says Associate Professor Ian Scott of the University of Queensland. He is Director of Internal Medicine and Clinical Epidemiology at Princess Alexandra Hospital in Brisbane.

IAN SCOTT: If you want to make care safer there are some areas that clearly need more resources. That is, you need more staff and those staff also need to be highly trained, and there are some areas that could do with a fair injection of extra funds that aren’t quite so sexy. So, I’m talking about older, frail patients who have care needs that don’t just involve giving them a drug or doing an intervention, and who don’t need some bells-and-whistles new scientific machine to improve their care. What they really need is well-trained nursing staff, perhaps in the residential care sector, who can provide the nuanced care they need to keep them out of hospital and to make them as functional as they possibly can be.

MIC CAVAZZINI: Well, actually that reminds me of a systematic review that you wrote about the common misdiagnoses in older patients, and you wrote that dementia tends to go under-diagnosed and was of course associated with socioeconomic factors of the patient and poor access to care. But one of the factors leading to under-diagnosis of Parkinson’s disease, for example, was living in a nursing home. What’s going on there? Perhaps at home the family that knows the person so well can also detect when there’s something not quite right, whereas the nursing staff are obviously flat out and perhaps not as well acquainted with—

IAN SCOTT: —yes, well that’s true, and I think not an insignificant number of transfers are made at the behest of relatives who can see a definite change. I think we are now more aware of dementia, delirium, confusional states— certainly in a hospital setting. I think in residential aged care they still have some way to go and it’s simply because the level of training and the level of support hasn’t been as good as it could be.

We’re trying to address that right now in our hospital by having an outreach service to nursing homes where we have offered them training packages, algorithms to handle common problems, ready access to senior decision support from our emergency department and from our senior nursing staff here—paramedics who are trained to do assessments at nursing homes and try to prevent people from being brought into hospital unnecessarily. That, again, takes resources but they have seen the benefit because we’ve actually reduced hospitalisations.

JEFFREY BRAITHWAITE: Let’s think about three dimensions: good, quick and cheap. You can’t have all three. You can have a good quick system but it won’t be cheap. You can have a good cheap system but it won’t be quick. Now it seems to me, emanating from ministers, officers, taxpayers even, that we want all three. Well, there are trade-offs. So what are we willing to accept? I mean we could have safer care for patients if we spent, I don’t know, three per cent more of GDP—bumped it up from, what is it, 9.3 in Australia to say 12.3 per cent.

MIC CAVAZZINI: And for the time being it has been evaluated favourably in terms of what we get, our bang for the buck.

JEFFREY BRAITHWAITE: At the systems level there is the Commonwealth Fund in the U.S., which assesses 15 countries on various dimensions—'What’s general practice like, what’s acute care like, what about pathology?’ They have different questions each year or so, put them to all the health systems, and usually Australia is in the top three or four on most dimensions, though not every dimension all the time. New Zealand has a bit of an added problem—its population is only four and a half million or so, and while it doesn’t have to cover the same geographic width as Australia, it doesn’t have quite the resources, and it has a smaller GDP.

MIC CAVAZZINI: Other workforce issues are pretty obvious: long hours without breaks are going to lead to fatigue and lower cognitive capacity. But interesting to me is the audit culture of measuring the performance of health institutions. One example that’s been pointed out is the ‘four-hour rule’ in emergency, which requires that 98 per cent of patients arriving at emergency be admitted, discharged or transferred within four hours of triage. Are strategies like this good for productivity or do they add unnecessary pressure?

JEFFREY BRAITHWAITE: So, the obvious answer is yes and no. Look at the NHS, the National Health Service in the U.K., as an example—many listeners will have worked in the NHS and know a lot about it. There’s been a culture of key performance indicators, targets, to excess. I think most people would agree it’s been excessive. I published a paper with a colleague on this a couple of years back. It can lead to people focusing on the targets, but they immediately defocus on all the other things that might be important. The targets are also often rather artificial—they’re cooked up sometimes in a minister’s office or a policy maker’s office because they want to get the press off their back. Sometimes they have distortion effects. They also get a bit resented by clinicians, who know that the world isn’t as precise as that; every patient is different.

MIC CAVAZZINI: Jack Adcock was admitted to hospital on referral after a day-and-night history of diarrhoea and vomiting. He was a child with Down syndrome who’d previously had surgery for a congenital atrioventricular canal defect, and was being managed with the ACE inhibitor enalapril.

Dr Bawa-Garba assessed Jack at 10:30am. He was dehydrated, unresponsive and limp. He was breathing shallowly but had no raised temperature. Blood gases showed a pH of 7, a base deficit of 14 and a lactate of 11 mmol/L. The doctor made a presumptive diagnosis of acute gastroenteritis and treated Jack with an intravenous fluid bolus and maintenance fluids.

Jack seemed to perk up and blood gases measurements were repeated around midday. According to Dr Bawa-Garba’s defenders, these showed he was less acidotic and recovering metabolically. In court, the prosecution said that readings were still way off, and that she hadn’t drawn enough blood for accurate measurements to be made. Dr Bawa-Garba did also order a chest radiograph and bloods for renal function and inflammatory markers, suggesting that pneumonia was on her radar.

This differential diagnosis was confirmed at 3pm when Hadiza Bawa-Garba saw the scans. In court, she was blamed for the two-and-a-half-hour delay in reviewing the X-rays. But Dr Bawa-Garba hadn’t been told they were available, and during that time she had been dealing with an infant requiring a lumbar puncture.

Delays in the cascade of ordering and analysing scans are common, and require an examination of both capacity and culture, says general practitioner David Heslop. He is Associate Professor in the School of Public Health and Community Medicine at UNSW, and his research interest in modelling risk in health services comes from a career as a military medic.

DAVID HESLOP: I guess questions that are raised for me by the case in the U.K. relate to how faults within the system are managed. So, if there are systematic delays within interpretation and it’s known that it’s causing a clinical effect then there are potentially other options that could be explored. Does it have to be an NHS radiologist who does the interpretation, in the event that there’s a surge? Or is there some way to better design the system so that there’s a buffer capacity within that?

MIC CAVAZZINI: So, according to a survey of over 580 physician-reported errors published in JAMA, about ten per cent of all diagnostic errors came from delay in ordering certain tests. But equal proportions—another ten per cent of errors each—came from delays in getting the tests back, and from failure of the ordering clinician to follow up abnormal results. Another review in the Annals of Internal Medicine reported that three quarters of physicians surveyed didn’t routinely notify patients of normal test results, and up to a third of them didn’t even always notify them of abnormal results. Is there a cognitive barrier here, that there’s always one more patient at the door, so that once the previous one is ‘out of sight’ they’re, in a sense, ‘out of mind’?

DAVID HESLOP: I find those statistics shocking. There is a large component of culture—culture that we acquire through medical school and through our junior years as clinicians—about the way things are done and due diligence. So, we get trained, for example, to take a clinical history, to perform a clinical examination, and then tests are used to either rule in or rule out a particular diagnostic hypothesis. And what is highly concerning about results that are not actually being viewed or not being followed up is that one has to question whether they were ordered and conducted in the context of a formal diagnostic strategy. So the thing which I think is most concerning, and which I guess leaves me with a sense of disquiet, is: are there unnecessary tests being conducted?

IAN SCOTT: We know that lack of follow-up of investigations that have been ordered is a problem, particularly if the investigation comes back after the patient has been discharged. These are system factors. You have a lot of results that come back, and these are not necessarily urgent. Clearly, you want to tell the patient if they’ve got breast cancer, but it’s not going to make a difference whether that happens in five minutes or in two or three days. So, the thing is that you have tests coming back that are not necessarily time-critical, many of which will also be normal, and you need to have a means of tracking those patients. A lot of practices now have gone to a registry-type process.

I think the advent of the electronic record should now overcome some of that problem. You can order investigations more quickly electronically, so it doesn’t involve having to wait for a paper form to reach the lab or the radiology department. The result is now digitised and readily accessible on any computer throughout the hospital, so you can see those images quickly. In terms of the reports, I think the system response is that a radiology report is always appended to that digital image, even if it’s a provisional report done by a registrar or, in some cases, a radiographer.

And then there’s a means of alerting the clinician that that digital image is available. Now I think on the latter part we haven’t quite solved that problem in terms of alerting the clinician to say, ‘Your image now has returned and you can now view it’—maybe a pager or some visual or auditory alert on the computer certainly may help people not have a delay in looking at those images when they’re available.

And then make sure that the patient is notified of the result, either by a simple text message or some other electronic messaging, or by phone contact. Obviously, where the result may be equivocal or positive, I think the relevant doctor should then be asked to talk to the patient and have a discussion about what needs to happen subsequently.

MIC CAVAZZINI: There’s another big review from the National Academies Press called Improving Diagnosis in Health Care, from a couple of years ago, and it describes this issue of lost tracking and lost follow-up between teams of collaborative care. So, say an incidental finding turns up on a back X-ray—some mass shows up on the image. There might be a diffusion of responsibility as to which department takes ownership of that, whose responsibility it is to follow it up. If we make more explicit handoff policies, then that is low-hanging fruit for eliminating these kinds of errors.

IAN SCOTT: Well, that’s another system issue, in the sense that if you’ve got multiple specialists or multiple providers involved in the care of a patient, then someone has to take overall responsibility and make sure that the whole picture is being looked at and reviewed. In many instances that is the general practitioner. If the patient is in hospital or is frequently seeing a lot of specialists in a hospital, then perhaps it’s someone with generalist training or a generalist outlook maintaining an overview of the patient. In our hospital here, the practice is that if a radiologist or a pathologist sees a critically abnormal value or image that clearly may bear on the urgency of treatment, then the relevant team is phoned.

MIC CAVAZZINI: Jack Adcock’s sepsis had been caused by a Group A streptococcal infection. Dr Bawa-Garba prescribed antibiotics as soon as she saw the X-rays, but there was a delay of another hour before these were administered by nursing staff, around 4pm.

At the same time, the IT system in the hospital had failed, meaning that test results weren’t automatically copied into the electronic health record. The Senior House Officer was taken off clinical duties to communicate test results over the phone.

This is how Jack’s blood results were communicated to Dr Bawa-Garba some five hours after she’d ordered them. She scribbled down all the figures and noted a raised C-reactive protein, but didn’t pick up on elevated creatinine and urea levels, which would indicate kidney dysfunction. Even when the IT came back online, the alerting system did not flag up abnormal results as it normally would have.

There’s no doubt IT systems could streamline the way critical patient information is communicated round the hospital and broader health service. But poor design of displays and control panels can also add to the cognitive demands on a clinician.

A 2015 assessment by the U.S. National Academies of Sciences, Engineering, and Medicine found that many electronic health record systems had cluttered displays, and that there was inconsistency between programs in the scales of measurement and in the chronology of presentation of test results. Another study of IT tools used by emergency clinicians showed that it took 15 clicks to provide a prescription, and 40 clicks to document the examination of an injury.

Ian Scott and David Heslop reflect on the ergonomics of health IT systems, and how these supports can fit in with day-to-day clinical practice.

IAN SCOTT: Well, it all comes down to design. In the U.S. there have been a lot of off-the-shelf commercial products simply imposed on hospitals without the local clinicians being in any way involved in their design or testing. I think we’ve learnt, in digitising hospitals here in Queensland, to prioritise things in ways that follow the intuitive line of thinking that most clinicians will use.

In terms of the ergonomics, the computers are all mounted on mobile platforms so we carry them around on ward rounds so we’re right there at the patient’s bedside with the computer, which has a number of advantages. One, it means that you are seeing that patient and you only have one record open and that’s the patient in front of you so there’s no problems about putting entries into the wrong chart. The second thing is that it’s done in real time, whereas in the old days the resident would wait until the four-hour ward round was finished and then start filling out the paper forms and doing the requests.

The third advantage is you then have access to imaging and other results on the screen, so I can turn the machine around and show the patient, which involves them as well. And it can certainly be of benefit in terms of increasing their understanding of what’s going on, sometimes how serious their illness is, and what we need to do about it—not only us as health care providers, but what they need to do in terms of lifestyle change.

DAVID HESLOP: So, I think there are a whole lot of risks that emerge from providing more information for the sake of providing information, and there are recommendations and standards associated with how much information can actually be put onto a two-dimensional surface with the expectation that somebody will be able to absorb it. That comes predominantly through the aviation industry, actually, and through heads-up displays for example. But equally, the old paper records had a huge amount of information within them. The risk is that you miss the relevant piece of information. So, the value of the automated system is its find-ability and search-ability functions.

MIC CAVAZZINI: That brings up the issue of alerting systems. As you say, if there are abnormal results returned to the health record, you get an alert—but there are actually now reports of ‘alert fatigue’, that there are too many alerts from too many different devices. Again, in that NASEM review, 70 per cent of clinicians surveyed thought they received more alerts than they could effectively manage, and 30 per cent said they had actually missed alerts in practice that had resulted in delays to patient care.

DAVID HESLOP: That’s an interesting thing. So, in emergency situations, in critical care, we’re all familiar with the oxygenation tones, but if there is something more salient occurring in your environment—an important conversation or a significant moment with another patient, for example—then those kinds of alerts can actually go into the background and be lost. How many interruptions can a clinician actually deal with before we really start to see some degradation in performance, and what’s reasonable as well? I have yet to see a clear evaluation of that particular question.

MIC CAVAZZINI: And what about clinical decision support tools? Those computer platforms where you type in the symptoms and they turn out a bunch of differentials and not-to-miss diagnoses. Are they something that can be slotted seamlessly into the diagnostic process?

DAVID HESLOP: I personally take the approach that they are sometimes useful where I may have doubt. And so again I’ll draw on a military context. In hazardous materials management, if you are dealing with clinicians who don’t have experience with a very unusual context, then they can be extraordinarily helpful in providing a probability diagnosis. And there are a number of tools that may be of interest to listeners, such as WISER from the National Library of Medicine in the United States, which has a syndromic or symptom-based diagnostic support tool that is incredibly useful for the non-toxicologist clinician.

MIC CAVAZZINI: There was a 2014 systematic review from Bond University that concluded that while these computerised clinical decision tools have been shown to be helpful in teaching and in controlled settings, there is still no evidence to show how they fit within a clinician’s workflow in chaotic real-world environments, and whether they actually influence day-to-day practice in a positive way.

DAVID HESLOP: We all practise in subtly different ways, and for some clinicians a clinical decision tool will fit very well with how they might naturally do their practice. They can become very intrusive where they become an acknowledgment requirement, almost like a gate-keeping mechanism—if, for example, you have not considered something but you’re clicking on it and saying, ‘Yes, in fact I have considered this,’ because you were in a rush. It can be quite intrusive to the art of medicine, I think. The problem is that garbage in is garbage out, and so if you were on the wrong track to begin with, then sometimes getting off that track is going to be even harder.

MIC CAVAZZINI: At 4:30pm, Dr Bawa-Garba met with the duty consultant Stephen O’Riordan, who had just returned from his teaching obligations. She told him that Jack had a high CRP reading and pneumonia, but seemed to be on the mend. Dr O’Riordan would say in court that he didn’t review the patient himself because Dr Bawa-Garba hadn’t ‘stressed’ any urgency or alarming findings to him.

Unknown to Dr Bawa-Garba, Jack’s diarrhoea had returned and his temperature had risen after he was moved from the CAU to a ward. Nurse Isabel Amaro was later prosecuted for failing to observe Jack adequately and for turning off his oxygen monitoring equipment without informing the doctor. Hadiza Bawa-Garba was also unaware that at around 7pm, Jack was given his usual dose of enalapril. It was administered by Jack’s mother after she’d checked with a ward nurse. Enalapril is contraindicated in shock because it lowers blood pressure further. It wasn’t on Jack’s drug charts, as Dr Bawa-Garba meant for it to be withheld. In court it was said she’d been careless by failing to highlight this more prominently to other staff.

Forty-five minutes after the drug was given, Jack suffered a cardiac arrest. Dr Bawa-Garba was called to the bedside and resuscitation was begun. At one point, she stopped these efforts for a minute, after mistaking Jack for another patient who had been marked not for resuscitation. This was 13 hours into a double shift, with no break even for food.

At 9:20pm, Jack passed away. A court would later hear that he was already too far gone for this hiatus in resuscitation to have affected the outcome.

Dr Bawa-Garba was, of course, traumatised by the event, and some days later Dr O’Riordan suggested she reflect in writing on the mistakes of the day. She wrote about her interpretation of the blood measures and about breakdowns in communication. These notes were used in the hospital’s investigation of the incident but, contrary to many media accounts, not directly in court proceedings. In November 2015, Hadiza Bawa-Garba and Isabel Amaro were convicted of gross negligence manslaughter in a jury trial. The Medical Practitioners Tribunal suspended Bawa-Garba’s licence to practice for 12 months.

The General Medical Council then appealed to the High Court, insisting that public confidence in the profession could not be maintained by such a lenient penalty. In January of this year, Justice Duncan Ouseley ruled that Dr Bawa-Garba should be permanently erased from the medical register.

In Australia and New Zealand, manslaughter convictions against medical professionals are very rare. Medicolegal commentators quoted by the ABC and the NZ Resident Doctors’ Association point to the greater recognition of systems pressures here, and of the need for transparency. But they also say this culture can’t be taken for granted. Jeffrey Braithwaite and Ian Scott discuss how the Bawa-Garba case could influence the disclosure and examination of medical error.

JEFFREY BRAITHWAITE: Even the Secretary of State for Health, Jeremy Hunt, tweeted about the Bawa-Garba case and said, ‘This is a big problem for learning, because they captured her notes.’ It’s a very evocative case to remind us all, and most doctors I’m talking to are saying, ‘There but for the grace of God go I. That could have happened in my career.’

MIC CAVAZZINI: Professor Mark Graber, who is a bit of an institution in this field, studied 100 cases of diagnostic error involving physicians, and almost half the errors were down to a combination of systems errors and individual cognitive error. And some of the discussion in the field of diagnostic reasoning suggests that it’s probably impossible to eliminate heuristics and bias from cognitive reasoning, even if you’re aware of them. So, if we accept that errors will occur, that they’re intrinsic, can hospitals or the health system as a whole absorb that error and turn it into learning opportunities?

JEFFREY BRAITHWAITE: Yeah, look, there’s a couple of ways to go there and I think we haven’t got the balance right here. I mean if there’s nursing staffing problems, the IT system is down, if you’ve just come back from maternity leave and you’re feeling stretched, if you get penalised and punished in that situation, it’s scapegoating the individual for the system’s errors. We ought to fix the system. And there’s great concern that the GMC appealed, and took that up to the High Court. No one can completely understand that.

MIC CAVAZZINI: I mean where is the balance? For example, the BP oil spill in the Gulf of Mexico, BP paid out five billion dollars in fines and was held responsible as a corporate entity. They didn’t go after the guy that laid the cement or didn’t push the button in time. Are we very far from that in terms of the legislation?

JEFFREY BRAITHWAITE: I would like to hope that we have more understanding and tolerance in Australia. I couldn’t guarantee that in every setting. The reaction from some clinicians could be, next time they’re allocated to such a shift they’ll say, ‘I’m sorry, I can’t practice here, it’s not safe’. That would have a pretty interesting effect on the managers and the policy-makers wouldn’t it?

IAN SCOTT: The question is how you make sure that these errors or missed diagnoses or near misses are addressed in a collegiate, non-judgmental fashion. In our own department at this hospital, we have a regular session called ‘clinical conundrums’ once a week where we present cases that have caused diagnostic uncertainty, and where people are made to feel free and not at any risk of being shamed, so we discuss these issues openly. Our registrars and our residents also attend these sessions, so they can see that we’re all fallible, that mistakes can be made despite the best of intentions. It’s very rare that these errors actually result in serious injury, patient harm or death, but nevertheless they may have caused a delayed diagnosis, or caused people to receive treatment for the first few days that they perhaps didn’t need.

We discuss it, but we don’t actually keep minutes. That certainly makes people a lot more confident in speaking up and saying, ‘Well, I think we perhaps may have done something wrong here.’

I think the other thing in this too is, of course, more emphasis on patient disclosure to indicate to patients where a mistake or an error has been made and to make sure that they understand what we’ve done to redress the situation and what we are going to do to prevent similar errors occurring in the future.

If we have any critical incident, that automatically gets recorded—minutes are kept, but they’re privileged; they are not accessible under Freedom of Information legislation. So, I think there has to be a balance between the public’s right to know and the ability of clinicians not to incriminate themselves in prosecutorial action by actually discussing where errors have been made.

MIC CAVAZZINI: I wanted to come back again to the technology. Will technology save us? I don’t know if you’re familiar with the podcast IM Reasoning—Art Nahill and Nic Szecket talked about how electronic records can be used to reflect a possible misdiagnosis back to a clinician. So, say a patient you’ve discharged is readmitted two weeks later; you get an alert and you can check whether the complaint is consistent with your previous diagnosis.

IAN SCOTT: We certainly encourage and write in our discharge summaries to all our general practitioners, ‘If you have any concerns about this patient or you feel that there have been missteps in management then please notify us, please contact us.’ I think also you need to have confidence that you’re not going to put that professional relationship between, say, a GP and a specialist into some sort of doubt.

And we’ve also made discharge summaries, investigations and any imaging done in the public hospitals accessible to GPs via our viewer program. Increasing numbers of hospitals in Australia are now becoming digitised—we have less duplication of tests because we can see what’s been done elsewhere and don’t do it again. I think that’s a definite saving for the health care system, as well as a patient convenience.

JEFFREY BRAITHWAITE: Most people have spent most of their time thinking about and doing patient safety, trying to keep patients safe, by doing what we call ‘Safety 1’—the root cause analysis sort of model. And the idea is, we should find out the ultimate determinants of what went wrong and then we should make some recommendations to fix that so it never happens again. Well problems don’t happen again in exactly the same way anyway, so what have we fixed? And we haven’t been as successful as we would like.

MIC CAVAZZINI: You make the point, I think it was in the International Journal for Quality in Health Care, that some activities are tractable by this approach—very procedural things such as central line infection bundles in the intensive care unit and checklists in theatres—but in general there is so much complexity that you can never distinguish the pattern of cause and effect that led up to an erroneous outcome from the pattern that leads up to a successful outcome. The variables are too small and immeasurable to predict.

JEFFREY BRAITHWAITE: Most of the difficult things we face are about people with expertise interacting with other people in a complex mix where the behaviours aren’t so predictable, and the patients themselves are very complex. We’ve conceptualised that safety paradigm in a different way in the last five years. The Resilient Health Care model suggests that, surprisingly in a system this complex, so much care goes right. What do we know about that? How can we understand that and how can we replicate it?

Let me give an analogy. Say we want to understand the behaviour of sharks, but we only look at shark attacks—we would only know a small sliver of shark behaviour, the times when they attack people, whereas the sharks actually swimming in the ocean are not attacking people most of the time. So, to understand shark behaviour, we have to understand the whole behaviour pattern of the shark, not just when they attack humans. So, we’ve said, ‘What do we know systematically through research about when things go right, when there’s no harm?’

The other one is that thorny old chestnut that people keep raising and people say, ‘It’s the culture.’ The culture is the way people behave and act without thinking necessarily, it’s just the way they do things around here and the way they think around here and it’s unique. And then if you go across town to another hospital, it might be the same kind of hospital, same structure, roughly the same kind of skilled people—they might both declare that they’re centres of excellence, by the way—but they’ll be markedly different and the difference will be the culture.

So, we’ve done a study where we looked at studies in which people have measured the culture and linked it to patient outcomes. This is a systematic review just published in BMJ Open. Almost all the studies uniformly spoke with the same voice: wherever you’ve got a better culture versus a poorer culture, you’re going to deliver better outcomes, and it doesn’t matter what the outcomes were—they were a mix of things in these studies—so we were heartened by that. So if you could promote that better culture—if it’s relatively inclusive, if it treats junior staff well and supervises them reasonably effectively, if it seeks the patient’s perspective, if it engages with other people outside of your own work team area—you’re likely to produce good results for patients. Isn’t that what we’re here for?

MIC CAVAZZINI: Many thanks to Jeffrey Braithwaite, Ian Scott and David Heslop for contributing to this episode of Pomegranate Health. The views expressed are their own, and may not represent those of the Royal Australasian College of Physicians.

Events reported about the Bawa-Garba case come from court transcripts and many other sources. You’ll find these at our website, racp.edu.au/pomcast, along with other literature mentioned in the podcast and a full transcript. There are also links to podcasts from IM Reasoning on the drivers of diagnostic error, and from KeyLIME discussing how to improve handover procedures. The paediatrics blog Don’t Forget the Bubbles also takes a look at coping after an error has occurred.

Editorial advice for this and all episodes was provided by RACP Fellows. You too can leave comments on our website and keep the conversation going.

I’m Mic Cavazzini. I hope to hear from you.
