[IMJ On-Air] Making sense of HACs

Date: 16 November 2022

Fellows of the College can claim CPD credits for listening and reading supporting resources. Log in to MyCPD at this link, review the prefilled activity details and click save.

Clinical complications suffered by patients during hospital stays are assumed to be preventable and to provide some metric of quality of care. To assist in their understanding and mitigation, the Australian Commission on Safety and Quality in Health Care established a national programme to track hospital-acquired complications (HACs) in a formalised way. Comparison data can be found through the Health Roundtable reports, and it’s been understood that hospitals with higher complication rates may have a lower standard of care.

While the national HAC program has support from providers across all jurisdictions and makes good use of electronic medical records, some questions remain as to its methodology. In a retrospective audit of medical records published in the Internal Medicine Journal, Dr Graeme Duke and colleagues at Eastern Health Intensive Care Research have sought to validate the clinical significance of HACs identified within their service. Their research suggests that HACs are underreported by coding data and that they are more strongly associated with patient-related factors than with deviation from clinical best practice. Dr Duke and IMJ editor Professor Ian Scott discuss the research article and its implications for the national hospital-acquired complications programme.  

Credits

Guests
Dr Graeme Duke FCICM, FANZCA (Eastern Health Intensive Care Services)
Prof Ian Scott FRACP (University of Queensland, Princess Alexandra Hospital)

Production
Written and produced by Mic Cavazzini DPhil. Music licensed from Epidemic Sound includes ‘Reaching for Infinity’ by Dawn Dawn Dawn and ‘Nabga Algooah’ by Ebo Krdum. Image by SolStock licensed from Getty Images. Editorial feedback kindly provided by Prof Jeff Szer FRACP.

References

Graeme J Duke et al. Clinical evaluation of the national hospital-acquired complication programme. Internal Medicine Journal 2022; 52(11).

Access to IMJ, JPCH and OMJ for RACP members
Hospital Acquired Complications Indicator [ACSQHC]
Benchmarking databases [NSW Health]
Health Roundtable
Australian Council on Healthcare Standards
REDCap shared library
Why Clinical Coding Matters [Med Ed with Andrew]
Coding Matters: Hospital Acquired Complications [Med Ed with Andrew]

Transcript

MIC CAVAZZINI:               Welcome to IMJ On-Air, a podcast from the Royal Australasian College of Physicians. I’m Mic Cavazzini in the Pomegranate Health studio, but today I’m going to hand over the reins to Professor Ian Scott, calling in from Princess Alexandra Hospital in Brisbane, where he is Director of Internal Medicine. He is an editor at the Internal Medicine Journal and, through the Uni of Queensland, has over 230 research publications across many areas. He sits on several committees at College, state and federal level. Ian, do you get much sleep at all?

IAN SCOTT:         I try not to sleep. It gets in the way of things.

MIC CAVAZZINI:               And you’ve been on the podcast a couple of times before, so you’ll have no trouble hosting it. Today you’re zooming in on a research article you reviewed for the November edition of the IMJ about hospital-acquired complications, or HACs. And just to make this clear from the get-go, we’re not talking about medical injuries so much as any complications in care, some of which are foreseeable based on presentation. Before introducing the author of that paper, can you tell us what the national HAC programme is, because I suspect many of our listeners might not be plugged into that level of quality and safety evaluation.

IAN SCOTT:         Thanks, Mic. Well, I think that there's been a big focus on trying to improve safety and quality of care now over the last decade or two. The Australian Commission on Safety and Quality in Health Care, I think, is wanting to make sure that clinicians have some idea of their complication rates in relation to things that may be potentially preventable in hospital care. So the Hospital-Acquired Complications program has been around for some time, in fact some years, and most public hospitals with their Quality and Safety units will analyse this data at a unit or departmental level. Directors of departments might also access it through the Health Roundtable as well; I certainly look at that from my perspective. I think it's a barometer, or an attempted barometer, of the quality and safety of the care that we're giving. And in particular, is there anything we can do better in preventing the hospital-acquired complications?

MIC CAVAZZINI:               So the HAC programme’s list of specified complications is now in its 12th Edition, and it’s been whittled down to 16 key outcomes. These include pressure injuries, falls resulting in a fracture, returns to theatre, healthcare-associated infections, renal failure and several others. But according to Dr Graeme Duke’s paper, “Clinical validation of the HAC methodology is … conspicuous by its absence.” That’s to say we don’t really know how good those data are at quantifying adverse clinical events or what to do with those data.

Graeme is Deputy Director of Eastern Health Intensive Care Services in Melbourne and clinical lead for intensive care research. He’s an educator with a pet interest in ventilation and data analytics. Graeme, can you tell us more about yourself and how you became interested in this question around the National Hospital-Acquired Complication Programme?

GRAEME DUKE: Thanks, Mic. And thanks, Ian, for inviting me onto this podcast. Really appreciate it, and I hope this will be valuable for the listeners. Can I just start by doing the important thing of just recognizing the traditional owners of the land on which I'm sitting. It's the Wurundjeri people of the Kulin nation, and I pay my respects to their elders past, present and emerging, and also thank them for the hospitality to be able to present this information to you.

So what am I? As you describe, primarily, I'm a clinician. I've got an interest in research, particularly data analytics. Because I'm an intensive care specialist, in fact, I see quite a lot of complications in patients, including my own. And so this topic is something that's of personal interest, of interest to the health service in which I work and, as in all jurisdictions across the nation, of particular interest to any healthcare stakeholder. And how did we get into it? We knew about the National Hospital-Acquired Complications Program, and we thought, “Well, let's hop on board.” And so we hopped into that to try and work out what was going on at the coalface in the hospitals that I work in. The network that I am engaged in has three major acute hospitals and several other sub-acute sites. So we were really curious to know how we were performing historically against our own benchmark but also against others. So that's how we got into it.

So it's important for everyone to acknowledge the fact that hospital-associated complications occur everywhere and in every service in every hospital and it's imperative that we reduce those complications. And everyone agrees that the best way to reduce those complications is to identify what they are, where they occur and why they occur. And what we've identified when we initiated our program is that the HAC methodology is agnostic to whether it's due to patient-related factors or hospital-associated factors, and it doesn't tell us the relative importance of those two.

So we thought, okay, let's go back and have a look at the original evidence that we use to underpin the Hospital-Acquired Complications Program. And as you mentioned a moment ago, we were a bit surprised to find that there was no clinical validation of the methodology, even though it has considerable clinical plausibility and has a lot of evidence and science behind it.

So we went back to square one and we said, Let's look at our health service, let's pull out some clinical records and do a very simple basic retrospective chart analysis. And we said, what we want to do is to identify what complications occurred, how often they occurred, and how many were being picked up by the coding data that's the basis for the Hospital-Acquired Complications Program and how many of those complications might have been missed. Because as you all know, coding has very fixed rules attached to it and whilst it captures most clinical events, it doesn't capture all clinical events. And, in particular, it's dependent on the quality of documentation by clinicians such as myself.

IAN SCOTT:         Thanks very much, Graeme. I guess it would be interesting just to look at this methodology; how you actually ascertained these HACs, both in the cases that did have coded HACs and those that didn't. Perhaps then also your assessment as to whether you thought there was a preventability issue here, whether the care perhaps could have been better and might have prevented the event. And what you see, I guess, as the key implications of your findings.

GRAEME DUKE: Yeah, sure. So, in summary, it was a retrospective chart review. And what we simply did was pull out a six-week period of time and, from the Health Roundtable report on hospital-acquired complications for our health service, we then identified those that were reported with a HAC and had a clinician who was not involved in the care of that person review the medical record for that index admission.

Prior to this, what we did is we developed an audit tool using the REDCap software which many of the listeners will be aware of. But basically, it's an online, very powerful tool to develop a survey questionnaire, so that we had very simple and largely binary or categorical questions about the nature of the admission, the patient's illness, their comorbidities, the treatment they received. And then with regard to the complication, when in the patient's journey that occurred and then what were the antecedent factors that might have led to that.

And we used our own internal clinical practice guidelines, as well as the national guidelines and the Australian Council on Healthcare Standards, which provides a kit that has a lot of useful information about each of these hospital-acquired complications and what the preventable factors might be. And they're the ones that we looked for.

Having extracted all that information, we then asked the clinician who was reviewing the case to make three subjective decisions. One is, did they think that the complication was likely to occur, given the nature of the patient and the treatment they received? Secondly, did they think that the management was suboptimal in any way, compared, as I said, to the guidelines? And thirdly, whether it was highly probable that it was preventable, potentially preventable, or likely to be not preventable. Now, those three judgments were obviously subjective, but they were made by clinicians who were familiar with the types of illness and risks involved.

So it's a fairly simple, straightforward audit process that most hospitals would be familiar with. And it enabled us to gain insight about how many of the complications were likely to have occurred as a result of patient-related factors, or more likely to be related to hospital factors that were potentially preventable. And I would say that we actually had a fairly low bar, so we were fairly tough on ourselves, and therefore, the findings were quite surprising and unexpected.

IAN SCOTT:         So Graeme, maybe then just recap perhaps in relation to the main findings. So you said that the prevalence of reported HACs, that is coded HACs per separation, was about 10%. And two thirds of the 260 cases that did suffer an event had more than one event, and delirium was the most common. And I think most of us would not be surprised by that; delirium is a common complication.

But you also report that one in four of the coded HAC events were deemed to be false positives. But at the same time, you reported there were no coding errors; in other words, there was supporting clinical documentation for the coders to say that was a HAC event. So I find that a bit surprising in the sense that, were the clinicians getting it wrong and writing this down? So is it really a problem with the coding algorithm, or is it really more of a problem with clinical documentation?

GRAEME DUKE: I think that's a really good question. And this is one of the important things that we learned from it, and that we're trying to address in our institution. And that is that it's a problem with the medical documentation primarily. We could improve the coding of complications by better documentation and by helping the coders understand clinical documentation a little bit better. And the reason for that is, let me give you an example with the case of delirium. A patient may come in with severe sepsis, and multi-organ dysfunction of which delirium is a component. But the documentation for all the other problems the patient has supersedes or is paramount in the clinician's mind. And that's what goes into the documentation.

Three or four days later, the nurse or doctor then documents that this patient, who is now recovering, has delirium. So probably, I think from memory, 15% or so of the false positives were actually clinical issues that were present on arrival to hospital and were only documented for the first time several days later. And so, according to the coding rules, even if the coder suspects that it was there on admission, if it's not documented on admission, they cannot code it as being present on admission, but they can code it as being a later complication. That's one example.

And the second example would be an assumed or suspected complication. An example would be, say, a urinary tract infection in a post-operative patient. Let's say the nurse or the treating medical team are concerned this patient might have a urinary tract infection and, in fact, they initiate treatment for the urinary tract infection; for example, start antibiotics, maybe remove the urinary catheter. But then the following day, there's sufficient information in the notes for a clinician to say, “There was another cause for the fever identified, there was no bacteria grown on the urine culture, and it's unlikely the patient ever had a urinary tract infection, even though it was suspected and treated.” Yet the coding rules code it as a complication. And that was probably only about 10%. So that accounts for the one in four false positive rate.

And then at the other extreme, there were a number of false negatives. And an example of that would be, again, where there was poor documentation that was insufficient for the coder to include that code, even though there was information available that was sufficient for the clinician reviewing the case to say that a problem occurred. Another example of that would be somebody who came in, let's say, with pneumonia but in fact had a pulmonary embolus. The pulmonary embolus wasn't diagnosed, say, till 48 hours later. And the patient is then coded with an admission for pneumonia when, in fact, they had a pulmonary embolus, which wasn't coded on admission.

MIC CAVAZZINI:               And is it worth articulating one line? One aspect I pulled out from the paper was that once you'd combined the true positives and the false negatives, the true prevalence of HAC events wasn't 10% but 16%. Is that a significant difference that you would want to spell out, in your own words?

GRAEME DUKE: I think it's really interesting that the true rate of complications is actually higher than that reported in the hospital-acquired complications algorithm. But it's not surprising when you understand that it's based on coding data, which is based on the quality of clinical documentation, which needs to be improved.

Then the other problem with coding is the rules that are applied, and we're in the process of submitting, and have been submitting for two or three years, to IHACPA, as it's now called, who govern the coding rules, about ways to improve the coding. As you may know, sepsis is one particular area I'm interested in, and the coding of that is woeful. And so it would be good to be able to change the national coding rules, to make them produce clinically meaningful information.
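As a minimal illustration of the arithmetic behind the adjustment discussed above, the sketch below removes the audit-identified false positives from the coded HAC count, adds back the audit-detected false negatives, and divides by the number of separations. The counts are illustrative round numbers chosen only to reproduce the approximate 10% coded and 16% audit-adjusted prevalence quoted in the conversation; they are not the study's actual data.

    # Illustrative sketch only: adjusting a coded HAC prevalence with audit findings.
    # All counts below are assumed round numbers, not the study's actual figures.
    separations = 2600                    # assumed audit denominator
    coded_hac_cases = 260                 # ~10% of separations carried a coded HAC
    false_positive_fraction = 0.25        # ~1 in 4 coded HACs lacked a true clinical event
    audit_detected_false_negatives = 221  # events the audit found that coding missed (assumed)

    true_positives = coded_hac_cases * (1 - false_positive_fraction)  # 195
    coded_prevalence = coded_hac_cases / separations
    true_prevalence = (true_positives + audit_detected_false_negatives) / separations

    print(f"Coded prevalence: {coded_prevalence:.1%}")           # ~10%
    print(f"Audit-adjusted prevalence: {true_prevalence:.1%}")   # ~16%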

MIC CAVAZZINI:               Just to clarify how this auditing is done: so there was a first pass that was automated using that REDCap software, and then on review you picked up a few more cases, the false negatives?

GRAEME DUKE: No, just to correct you, Mic. The REDCap software was simply a way of storing and categorizing the answers to the questions. The review was undertaken by a clinician who had access to the entire record, and then we asked them to make those judgments at the end. And if there was a disagreement, or there was uncertainty about the judgment, then we got others to look at it. So it has limitations, I accept that, but it was certainly something that we were able to repeat and get a reasonable degree of uniformity. The other nice thing about it is that it's simple and can be repeated by any health service.

MIC CAVAZZINI:               Well, that's the thing. If you want to implement this routinely, how labour-intensive is it, or would you settle for the automated data scraping and tolerate a degree of error?

GRAEME DUKE: So the difficulty with the hospital-acquired complications algorithm is that it's agnostic to whether the complication occurred due to patient-related or hospital-related factors. So there's, at the moment in my mind, no way to get around this except for every health service to review their hospital-acquired complication reports and pull out some records. Now, you do not need to pull out 700 records like we did; I would have thought that probably even if you pulled out 30, 40 or 50 records in a year and had a look at those for patients reported with a hospital-acquired complication, you would extract very useful information. And you can do that in a number of ways. The algorithm is publicly available, so you can extract it based on your own data internally, you can subscribe to the Health Roundtable, and in many jurisdictions now there's a reporting mechanism that provides hospitals with that information.

IAN SCOTT:         So Graeme, can I just ask about the case reviews? Was each case just reviewed by one person, or was it two reviewers that looked at each case? You mentioned that there was a concordance in terms of the kappa agreement, but I wasn't clear just how many reviewers looked at each case.

GRAEME DUKE: So the majority were reviewed only by one clinician. If that clinician was uncertain, then it went to a second reviewer. I can't remember off the top of my head, but I think it was probably about a third of them. Something in the order of 150 to 200 records were reviewed by somebody else. I should point out that's very onerous and I’m not suggesting that every health service should go to that degree of detail. But it was necessary for us to do that in order to make sure that we were getting a reasonable assessment of the actual clinical events.

IAN SCOTT:         Yes, I understand, Graeme, and I think it comes back to Mic’s question of just how labour-intensive this is, which we would need to consider if we're going to go to individual case reviews as opposed to picking cases on the basis of the coded data. Can I just say that, you know, from your study, you're basically saying that only just under one in five HACs, hospital-acquired complications, are potentially preventable, and that only just over one in 20 HACs actually reflect medical error.

When I look at your Table 1, after you removed the cases where there was no documentary evidence of a clinical event, or where all management preceding the event was appropriate, only 4% of reported HACs would reflect potentially suboptimal care based on current guidelines. But you say the assessors could still identify an alternative management strategy that might have prevented the event. So can you just elaborate on that? What were these alternative management strategies in some of these cases?

GRAEME DUKE: Yeah, I think that's a good question. And I think when you look at any complication, you can review the antecedent events and say, “Yes, look, there are two or three ways of approaching this particular patient or this particular situation. And in this patient's case, a certain management plan was put in place, which is quite reasonable and may have evidence behind it.” And yet, it's not the only possible treatment. Let me give a specific and simple example, and hypoglycaemia is a good one. For example, a patient, let's say, whose blood sugar levels are high, and the treating team decide that the insulin dose needs to be increased. But the increase in that dosing results in a hypoglycaemic episode, let's say, the following day. When we reviewed that case, we said, “Okay, if the insulin dose had been increased by a smaller amount, this patient may not have suffered a hypoglycaemic event the next day.” Now, clearly in the example I gave, if the insulin dose was doubled from 20 units to 40 units, that would have been a definitely preventable error. But when the insulin dose goes up by a small amount, all within clinically acceptable guidelines, often there's no clear evidence as to which is the better approach or not.

So there is clinical variation in the way patients are managed. And that's an example of how we tried to be pretty tough on ourselves as to what was potentially preventable, and why the potentially preventable errors we thought were probably in the order of one in five. And the other thing I'd like to emphasize on this topic is that it's easy to look at an adverse event and say, “Oh, we could have prevented that by, in this example, not increasing the insulin so much, and maybe we should change our insulin guidelines.” But we need to be careful that in changing it to prevent one incident, we don't then introduce a whole lot of other errors or complications. And let me give you a trite example. We could stop all surgical wound infections entirely by simply not performing any operations.

IAN SCOTT:         Yes, I think that's a very pertinent point, Graeme. I mean, I guess that with hospital-acquired complications, there is a certain sense that these are regarded as “never events”. But in fact, I think you've spoken to the question of preventability, and not wanting to overshoot the mark, in a sense. So do you think, therefore, given that something like four out of five of these HACs are really attributable to patient factors, in other words really not related to any quality of care, could it be that the one in five also involves a certain degree of just unfortunate outcome despite people's best efforts? And that, if we try to give the impression that all these HACs could be preventable, we might actually cause more problems than it's worth? And perhaps also make us quite inefficient by subjecting people to interventions that in the end may not actually benefit them?

GRAEME DUKE: I agree with you entirely, Ian. And we all want to decrease the frequency of complications. And I think the advantage of the Hospital-Acquired Complications Programme is that it helps identify the cohort or subgroup of patients that is likely to include those for whom a less-than-optimal management plan was a contributing factor. But there is no way of identifying those simply by looking at the individual frequency or hospital rates of those events. The only way I can see that we, at this stage, can identify which ones are preventable, and that definitely need to be addressed, is by clinical review.

MIC CAVAZZINI:               In terms of the broad brush strokes, the take-home message: if over 90% of the reported HACs lacked evidence of suboptimal care, then it was primarily patient factors driving them, as you say. And that tells us that a tertiary institution that attracts sicker patients can expect more HACs in its records.

GRAEME DUKE: That’s exactly what you find.

MIC CAVAZZINI:               So we can't use these as a way of benchmarking performance between institutions. How well can they be used within an organization over time to direct quality improvement?

GRAEME DUKE: Yeah, that's a really good question. And I guess this really just quickly gets to the paradigm associated with HACs for, what is it, 15 years or more. We've had this paradigm or view that most HACs were related to hospital events that were preventable, and that higher rates possibly indicated poor quality care. What this evidence, and other research into hospital-acquired complications that we've published this year, suggests is that the hospital-acquired complication rate is actually a measure of hospital complexity. And that high rates probably reflect good quality care in hospitals that have efficient systems for identifying clinical deterioration, as Standard 8 would require us to do. And that, in fact, health services with higher rates may be much better health services that are able to look after complex patients, identify when they have an unavoidable deterioration, pick them up, treat them and prevent further deterioration. And that's the big advantage I see of the hospital-acquired complication project; it helps you identify that cohort that warrants further investigation.

MIC CAVAZZINI:               I think I read that you identified an unequivocal healthcare error in only eight cases out of the 260 HACs, and this was less than half a percent of all separations. That seems strikingly lower than the figure of 10 to 12% of all admissions that we hear are associated with medical injury. Did I miss something there in the way these are being….?

GRAEME DUKE: No, you're entirely right, and I am still surprised at that number. And I'm not suggesting for a moment that my health service is any better than any other health service. What we're saying is that despite the best of our intentions, and optimal care as far as it can be delivered in this day and age, we still have patients who suffer complications. Let me put it another way: when a patient comes into the emergency department or a doctor's rooms, and we decide that they are going to be admitted to hospital and treated in hospital rather than sent home, one of the major factors that goes on in our minds is, “What's the risk of this patient having a complication with the treatment that they require, or given the illnesses and the comorbidities that this particular patient has?” If the risk of that is deemed high by the clinician, we suggest that they get admitted to hospital and not be treated at home.

And that's another way of looking at this: to say that, in fact, most of the hospital-acquired complications are identifying problems that were expected by the treating team, and were the very reason that patient was admitted to hospital. Now that doesn't justify the complication and does not in any way prevent us from trying to diminish the complication. And although it may seem as if I'm denigrating the Hospital-Acquired Complications Program, that is not intended at all. We think it's a very useful program because it identifies that cohort that's worthy of further investigation, to identify whether there are preventable factors that need to be addressed, and also to find explanatory reasons for why those complications occurred, even if they are patient-related. So that we can then go to the patients and their families and say, “We know that this treatment that you require has got a high risk of these particular complications,” and we can give better informed consent. So I think the Hospital-Acquired Complications Program is extremely useful.

IAN SCOTT:         So, Graeme, what's your analysis? You've mentioned the wards and the specialties in which these HAC events occurred, which gives us some idea that they were mostly in acute settings and spread across specialist medicine, general medicine and specialist surgery. Were you able to delve down a little deeper to try to identify particular patient cohorts who might be more prone to HACs?

GRAEME DUKE: That's a really good question, Ian, and that's what we're currently doing. So you mentioned the acute care sites; the HAC algorithm, or program, was set up primarily focused on acute care institutions. So what we've done now is we're doing some review of hospital-acquired complications in subacute sites to see if it's useful in that area. In particular, with regard to the sub-specialties, again, that's what we're currently looking at.

We're also trying to identify whether there are particular risk factors, particularly those that might be identifiable on admission, that could categorize patients as being at higher risk of complications than the average in that group and who, for example, might be better off in a higher-acuity, monitored environment, or where further discussions about treatment options are necessary. So that's the next phase, and we haven't got to any sort of particular answers I can give anyone today.

IAN SCOTT:             That's very helpful, Graeme. In your audit tool, you did look at goals of care and acute resuscitation plans, and I was just wondering, in those patients, did you find that there was a higher incidence of hospital-acquired complications, as a result, perhaps, of care that may not have been indicated in such a population?

GRAEME DUKE:     Yeah, that's a good question. And what we found was that the rate of complications was higher, and not surprisingly higher, in patients who were dying. You know, such patients often develop complications as an accepted part of that process. So yes, there was a higher proportion in that particular group. We didn't find any evidence of deterioration that then led to a redirection of care based on the complication alone, if that makes sense.

IAN SCOTT:             Graeme, there is interest at the moment, obviously, in big data sets and artificial intelligence and predictive analytics, trying to predict who's more at risk of hospital-acquired complications. In fact, I'm working with a group here at Princess Alexandra on that very issue. Where do you see the potential of that?

GRAEME DUKE:     Yeah, look, I think there’s enormous potential, particularly because of the size, and now the quality, of the data that's coming out. But I guess the thing that we've learned, particularly from this project, is that it's really important before you get into that to go back and spend time with the data, the metadata and the methodology, and make sure that you understand what it's telling you; or rather, what you're asking of the data and whether it can answer that question. And I'm hoping that in the future, work like you describe will, in fact, enable us to not only improve patient care but also inform research into better therapeutic modalities and interventions with lower complication rates and improve the quality of care delivered.

IAN SCOTT:             Thanks, Graeme. I mean, I think that in reviewing your manuscript, there were a number of reviewers that looked at this in quite some detail. And let's face it, it was a somewhat controversial article from the reviewers' point of view. So in terms of, I guess, the study limitations, and let's be fair to them: the retrospective analysis, the small sample size, the single-site unblinded case review by multiple assessors and the short study period. Do you think you've adequately accounted for these potential limitations in terms of how we should interpret the findings?

GRAEME DUKE:     Yeah, it's a really good question. And there are two answers to that, completely opposite ones. So I think, yes, we're pretty confident, because we've had so many different senior expert clinicians involved who we think are trustworthy. So we're happy for our institution. But my other answer, to everyone else, is to say, “No, don't believe what I've just told you, or what you've read in this article. Go and look for it in your own health service yourself.” Because the fact that the error rates are low in my particular hospital may not be the case in your hospital, and certainly may not be the case in my hospital next year.

So that, to me, suggests that every health service ought to be at least reviewing some of their clinical records, as probably the simplest and best way to do it at the moment, in order to understand their own HAC data and their rates and what they actually mean. And we're happy to help; in fact, we've just finished developing a REDCap audit tool that we hope to make publicly available to anyone who's got a subscription to REDCap, so they can just borrow and use our audit tool if they wish rather than develop something of their own.

IAN SCOTT:             I guess there has been an emphasis in recent times, particularly in the work of Adam Elshaug, on looking at HACs as a result of patients receiving low-value care; in other words, receiving interventions that perhaps weren't indicated. Again, did you get any sense of that in your review?

GRAEME DUKE:     Yeah, that's a good question. And yes, we did identify patients who underwent, let's say, procedures or treatment that was not agreed upon by every clinician. The one that springs to mind immediately is acute pulmonary oedema secondary to blood transfusion in a stable, anaemic patient, where the national guidelines are a little bit grey. And I think that, you know, you could make a point for saying that, in those patients, had they avoided the blood transfusion, we would have avoided the episode of acute pulmonary oedema and transfusion reaction.

IAN SCOTT:             And finally, in relation to, I guess, the morbid events leading to mortality as a result of these complications, you did ask whether these events led to MET calls, code blues or transfers to a higher level of care. Can you give us some sense of just how serious some of these complications turned out to be?

GRAEME DUKE:     Most of the complications didn't lead to an escalation in care. Most of them were identified in patients who were known to be at risk. So that's the one thing. However, if you look at it from the other point of view, and this is my particular interest being an intensive care specialist, the critically ill cohort of patients has the highest prevalence of these hospital-associated complications. It's not one in six or so, it's one in two or even more frequent. So these patients are at particular risk, and we're currently engaged in a project to actually try and identify how many of them led to an admission to intensive care, how many occurred during the intensive care phase, and how many occurred afterwards, to try and answer that very question, as one example.

IAN SCOTT:            Thanks very much, Graeme. I think this has been a very important paper, and I'm glad that it was published. It went through very extensive peer review, but I think you've made some very important points. And I think for frontline clinicians, the emphasis is that we need to perhaps look more closely at these complication rates, and that perhaps hospital administrators and quality and safety analysts need to be just a little more temporizing in some of their interpretations and conclusions from some of the data. And clearly, a lot of these complications really are not necessarily preventable. They're related to patient factors that really are beyond our control. That's not to say we should ignore them, as you say, but I think we need to put them in context, and not be emphasizing that every hospital-acquired complication is preventable or a “never event”.

GRAEME DUKE:     I couldn't agree more, and we're willing to help any health service, you know, whether it's a quality management person, a clinician or a manager, to help them identify and work through this complex but really important area.

MIC CAVAZZINI:   Many thanks to Graeme Duke for taking the time to talk us through his research paper, available now in the Internal Medicine Journal. Remember that all members of the College can log in to the IMJ, the Journal of Paediatrics and Child Health, and the Occupational Medicine Journal from the page
racp.edu.au/fellows/resources/journals.

Thanks also to Ian Scott, and all the other editors and reviewers who put their time into quality control at our journals. Editor-in-Chief Jeff Szer deserves a special mention for putting so much in over the years, and even lending an ear to this podcast in an early draft form.

We’d love to hear what you think, so please post a comment at the webpage, or you could even use the College’s new networking platform for a sort of journal club. Check out RACP Online Community if you haven’t already, or search for RACP- The ROC in your phone’s app browser. Remember, the podcasts are just one of many resources you can find by searching for RACP Online Learning. It looks like many paediatric trainees haven’t yet discovered the College Learning Series. There are over 100 lectures already published that work through the exam curriculum. Recent additions focus on diagnostic modalities for paediatric cardiology.

I’m Mic Cavazzini, and I’m always happy to get your feedback via the email address podcast@racp.edu.au. This podcast was produced on the land of the Gadigal people of the Eora nation. I pay respect to the storytellers who came before me, and as Graeme Duke said, for their grace in sharing this beautiful country with us. Until next time, thanks for listening. 
