[IMJ On-Air] Understanding readmissions better
Date:
22 August 2024

Fellows of the College can record CPD hours for time spent listening to the podcast and reading supporting resources. Log in to MyCPD, review the prefilled activity details and click ‘save’.

The LACE index is a prognostic algorithm for predicting the likelihood that a newly discharged patient will come back into hospital within 30 days because of complications. Today’s IMJ paper describes a validation of the LACE index in a regional Victorian setting. Identifying patients who are at risk could allow for better targeted care at the first admission, reducing harm to patients and inefficient use of healthcare resources.

The researchers also tested a novel classification tool for scoring which readmissions are avoidable and which are just an unfortunate outcome of the patient’s illness. This could help more accurately track quality of care within and between healthcare service providers.

Credits

Guests
Prof Christian Gericke
PhD FRACP FAFPHM AFRACMA FRCP Edin FEAN FAAN (Calvary Mater, Newcastle; University of Newcastle; University of Queensland)
Dr Reinhardt Dreyer (South West Medicine; University of Stellenbosch)
Dr James Gome FRACP
(South West Medicine, Clinical Director General Medicine)

Production
Produced by Mic Cavazzini. Music licensed from Epidemic Sound includes ‘Treetops’ by Autohacker and ‘The Cold Shoulder’ by Kylie Dailey. Image created and copyrighted by RACP.

Editorial feedback kindly provided by RACP physicians Aidan Tan, Joseph Lee, David Arroyo and Stephen Bacchi. 

Key Reference and Further Reading

Causes for 30-day readmissions and accuracy of the LACE index in regional Victoria, Australia [IMJ. 2024]

Surgery's Rosetta Stone: Natural language processing to predict discharge and readmission after general surgery [Surgery. 2023]
Why do we evaluate 30-day readmissions in general medicine? A historical perspective and contemporary data [IMJ. 2023]

Transcript

MIC CAVAZZINI:            Welcome to IMJ On-Air, from the Pomegranate studios. The Internal Medicine Journal is the academic flagship of the Royal Australasian College of Physicians. I’m Mic Cavazzini, but in a minute, I’ll hand you over to the journal’s section editor for public health medicine.

The article discussed in today’s podcast was published online in the June edition of the IMJ, and is titled “Causes for 30-day readmissions and accuracy of the LACE index in regional Victoria, Australia”. The LACE index is a prognostic algorithm for predicting the likelihood that a newly discharged patient will come back into hospital because of complications. Identifying patients who are at risk could allow for better targeted care at the first admission, reducing harm to patients and inefficient use of healthcare resources.

Other tools like this do exist, but the LACE is the one which has undergone the broadest validation. Retrospective cohort studies have been conducted in the USA, Canada, Singapore and the United Kingdom, and there have even been three studies of the tool in Australia. However, these were limited to metropolitan hospitals, and the case mix was also restricted.

Today’s IMJ paper describes a validation of the LACE index in a regional setting, namely Warrnambool, Victoria. This town of 35,000 marks the end of the Great Ocean Road, about three hours west of Melbourne. Only by comparing predictive validity in different settings can you say that an algorithm is not compromised by differences in local practice or patient population.

The acronym LACE is a nod to the four variables that feed into the algorithm: Length of stay, Acuity of admission, Charlson Comorbidity Index, and Emergency department presentations in the previous six months. Once these inputs are crunched, a score out of a possible 19 is generated, and patients are stratified as being at high risk of readmission for any value above nine. A patient with a score of 10 is supposed to have a 10 percent chance of readmission, and for every one-point increase above this the risk increases exponentially.
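For listeners who want to see the arithmetic, here is a minimal sketch of the scoring in Python. The cut-points and weightings follow the commonly cited derivation of the index (van Walraven et al., 2010) rather than anything specified in today’s paper, so treat them as illustrative:

```python
def lace_score(los_days: int, acute_admission: bool,
               charlson: int, ed_visits_6mo: int) -> int:
    """LACE score (0-19); weightings per van Walraven et al. 2010,
    shown here for illustration only."""
    # L: length of stay (capped at 7 points for stays of 14+ days)
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days                      # 1, 2 or 3 points
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if acute_admission else 0       # A: acuity (admitted via ED)
    c = charlson if charlson <= 3 else 5  # C: Charlson index, capped at 5
    e = min(ed_visits_6mo, 4)             # E: ED visits, capped at 4
    return l + a + c + e

# Example: a 5-day acute stay, Charlson index of 2, one prior ED visit
score = lace_score(5, True, 2, 1)   # 4 + 3 + 2 + 1 = 10
print(score, "high risk" if score > 9 else "lower risk")
```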

In order to track quality of care and performance of the healthcare service as a whole, it would be useful to score which readmissions are avoidable, and which are just an unfortunate outcome of the patient’s illness. The second part of the study discussed today sought to do just that. Conducting the interview with the authors is public health section editor Professor Christian Gericke.

CHRISTIAN GERICKE: Yes, thanks, Mic. Thanks for inviting me to host this podcast. So, my name is Christian Gericke and I currently work at Calvary Mater Newcastle as a neurologist. And I'm also Professor of Medicine at the University of Newcastle, and Professor of Public Health at the University of Queensland.

MIC CAVAZZINI:            And it's your public health hat you're wearing today, rather than your neurology hat, given the field in which today's paper is set.

CHRISTIAN GERICKE: Yes, well, most of my research is actually in health services research and health policy. And so, I had a particular interest in this paper because it's exactly where this paper sits, at the interface between the clinical world and the public health world.  So I would ask Reinhardt to introduce himself.

REINHARDT DREYER: G’day. Thanks, Christian. Yes. My name is Reinhardt, and I'm a general physician, and just recently qualified as a public health specialist and clinical epidemiologist working in the southwest of Victoria with James.

JAMES GOME: Thank you, and thanks for the opportunity to discuss our paper. My name is James Gome. I'm a clinician first and foremost, I'm an FRACP with recognition in endocrinology, and I work in general and acute care medicine. And I'm coming to you from Warrnambool, which is the land of the Dhauwurd Wurrung people.

CHRISTIAN GERICKE: Thanks, James. Maybe we should start talking about your paper. So my first question is what gave you the idea for this paper in the first place?

REINHARDT DREYER: So, patients who re-present to hospital, when they get readmitted, stay much longer and have a lot more complications. And readmissions are also very expensive to the healthcare system. So, I came across this risk score probably about five years back. We were actually busy doing a remake of our electronic health system with upgrades, and one of the things we were thinking about is whether we could incorporate it as part of our discharge summary. So, the junior doctor who uses it would just tick a couple of boxes, and then, potentially at the bottom of the summary or as a popup box, it says this patient has been assessed as a high-risk patient for 30-day readmission, given their score.

And then, as a follow-up to that, we could think of doing some sort of implementation trial, saying, well, “We've got outpatient clinics, and there's a lot of evidence for early follow-up versus what they call standard follow-up.” So, patients will often, as you know, depending on what their problems are, get told on discharge, “Go see your GP, maybe in a month or a couple of weeks, or if there's a problem”. And in regional areas, there's often the challenge of a limited number of GPs, so they wait six to eight weeks to see a GP. And that's often the time period when they come back in, because it's often small, unresolved problems or medication side effects. Hopefully that's answered your question?

CHRISTIAN GERICKE: Yes, that’s very good. I think it's also useful for clinicians to hear that what you developed is something that's useful in the clinical setting, and not only in the managerial world of health data.

REINHARDT DREYER: You know, we've actually been able to set up basically a new outpatient service that now has the capacity to follow up patients within two or three weeks. And the idea would be, if we identify these high-risk patients, should we see them within two weeks, four weeks, or even longer, so we can actually compare two groups? Does it actually make a difference whether they're followed up early or not?

CHRISTIAN GERICKE: Yeah, thanks, Reinhardt. And I wondered—obviously, Victoria is a bit different from most states in that you have small regional health services, and everyone's using different systems. So it's easier to scale in other states where bigger areas use the same system. Or in Queensland, the whole public sector. So, I wondered what system you're using?

REINHARDT DREYER: So, we use an electronic health system called TrakCare, which basically incorporates everything. Everything's online, barring maybe a couple of paper-based scripts and things that we do. But again, the access to information is probably one of the things that made this project a lot easier. We could essentially just pull the admissions and readmissions that we needed, and all the ICD-10 codes are already there. The main challenge was just setting up the infrastructure to be able to translate that into the risk stratification tool.

The limitation, obviously, is that if all the information isn't entered, that reduces the sensitivity. But overall, because we've got clinical coders who do that already, the data are pretty robust. Everything was automated; a couple of buttons pressed on the program turned a whole list of ICD-10 codes into a risk score.
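To make that concrete, here is a hypothetical sketch of the kind of automation being described: routinely coded ICD-10 diagnoses are matched against a lookup table of Charlson weights and summed into a comorbidity index, which then feeds the LACE score. The prefixes and weights shown are a small illustrative subset of the published Quan et al. (2005) ICD-10 mapping, and the function name is invented for this example; none of this is the authors' actual implementation:

```python
# Illustrative subset of ICD-10 prefixes mapped to Charlson weights
# (after Quan et al. 2005); a real table covers all 17 conditions.
CHARLSON_WEIGHTS = {
    "I21": 1,   # acute myocardial infarction
    "I50": 1,   # congestive heart failure
    "J44": 1,   # chronic obstructive pulmonary disease
    "E11": 1,   # type 2 diabetes, uncomplicated
    "I63": 1,   # cerebrovascular disease
    "N18": 2,   # moderate or severe renal disease
    "C34": 2,   # primary lung malignancy
    "C78": 6,   # metastatic solid tumour
}

def charlson_from_icd10(codes):
    """Sum Charlson weights over the distinct condition groups present.

    Simplified sketch: real implementations also apply hierarchies
    (e.g. a metastatic tumour supersedes a primary malignancy).
    """
    matched = {}
    for code in codes:
        for prefix, weight in CHARLSON_WEIGHTS.items():
            if code.startswith(prefix):
                matched[prefix] = weight   # count each condition once
    return sum(matched.values())

# Example: diagnoses coded for one admission in the electronic record
print(charlson_from_icd10(["I50.0", "E11.9", "N18.3"]))  # -> 4
```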

CHRISTIAN GERICKE: Sounds very, very elegant. Thank you. So maybe we should get to the results section of your paper.

REINHARDT DREYER: So, testing the accuracy of the LACE index itself, in this particular study it had a moderate accuracy. What we found was an area under the curve of 0.59, where 0.5 is no better than chance and 1 is a perfect test. And depending on which resource you use, moderate often runs from about 0.58 to 0.62 or a little bit higher. But it didn't fare as well in this particular study, and I think the main reason, when we analysed the data, is that we had slightly higher numbers of readmissions for the period that we studied, which we suspect may have been influenced by the COVID-19 pandemic. But we didn't explore that too much.

MIC CAVAZZINI:            Sorry, Reinhardt, sorry to stop you. I think our listeners will be missing some context. Maybe we need to be a bit more explicit. So you've retrospectively looked at the records and separated the high and low risk patients. And then the score, the area under the curve, reflects whether those that were identified as high risk actually did come back in as readmitted patients. Correct?

REINHARDT DREYER: Yep, that's correct. So, we used what they call a nested case-control, so it's actually looking forward. And the area under the curve is just a statistical test that helps us to say how accurate this tool is at actually identifying those patients. So if they got a score of high risk for readmission, did they actually get readmitted? And again, in this particular study, we found just a moderate predictive ability, which is similar to studies in North America and even other middle-income countries, which identify an area under the curve of between 0.55 and 0.69.
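For context, an AUC is equivalent to the probability that a randomly chosen readmitted patient scored higher than a randomly chosen patient who stayed out of hospital, with ties counted as half. A toy calculation on made-up data (not figures from the study) shows the idea:

```python
from itertools import product

def auc(scores, readmitted):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos = [s for s, y in zip(scores, readmitted) if y]      # readmitted
    neg = [s for s, y in zip(scores, readmitted) if not y]  # not readmitted
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Made-up LACE scores and 30-day readmission outcomes
scores     = [12, 4, 10, 7, 11, 3, 9, 8]
readmitted = [True, False, True, False, False, False, True, False]
print(round(auc(scores, readmitted), 2))  # 0.87 on this toy data
```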

MIC CAVAZZINI:            Obviously, we're not expecting a score of 0.9 and above, like we might for a diagnostic biomarker like troponin. But what would a useful number be? What is a good score? I mean, when I was recording an interview with Professor Ian Scott a little while ago about machine learning as applied to prognostic algorithms, the comment came out along the way that a lot of the classic algorithms are pretty crude anyway, and they're more useful to a junior clinician who doesn't have that built-up experience; a senior clinician might well ignore the algorithm and just use their gestalt, their spider senses. So how well validated would you like the LACE to be?

REINHARDT DREYER: It's not a perfect tool. It predicts most patients, but obviously it does get it wrong a lot of the time as well. As a follow-up, they actually developed the LACE+ index, which has about seven or eight additional factors, like age above 65 as well as biological sex, to make it more accurate. And even with that, the LACE+ index, the best that I could find was 0.75.

MIC CAVAZZINI:            And is that better than clinical intuition?

REINHARDT DREYER: So yes and no. Obviously, the number is higher, but it's much more complex, to the extent that it's not being used. You can obviously develop the most complex tools that identify everything 100 percent accurately, but they are typically extremely impractical for clinicians to actually use. So, just from what I've read on area under the curve, and I'd have to go back into the research on it, I think if you have a simple tool with an area under the curve of about 0.65 that would make it practical, implementable, and obviously way better than just your hunch.

MIC CAVAZZINI:            So go ahead, James.

JAMES GOME: Yeah, so with full understanding of Prof Scott's point of view, he's got years of expertise that many people who have to make these decisions or look after these patients don't have. And not all of us are as strong clinicians, obviously, as he is. So, complementary clinical gestalt is essential, but having something that is reproducible and accurate and available for junior staff, often junior staff who are still learning their way around clinical medicine, so that they can work safely and appropriately, is an adjunct, and I think a really important one.

CHRISTIAN GERICKE: Yes, and maybe something we have discussed: obviously, even clinical experience and intuition varies, and not only with the time you have spent in practice. I just thought of examples of very experienced doctors who always do the same thing without self-reflection, and they might have actually perpetuated their bad practice over 20 years. And how are we going to measure this, if not with some kind of objective measurement tool, such as the LACE index or others?

JAMES GOME: Yes, Christian. That's an excellent point, isn't it? As we get more senior or more experienced, it's very comforting to say, “Well, I'm right, because I say that I am.” But in fact, we should be able to do better than that. And I would really encourage any clinicians who are listening to the podcast to be brave enough to measure something; it doesn't matter what that something is. But there's real value in measuring, reflecting and then refining from that.

A worthwhile assessment would be to see whether this tool, the LACE tool, could be adopted in other regional centres, with a view to seeing how generalizable it is. More useful than that would be whether, having identified patients who are at high risk of re-presentation, we or other health services are able to not just identify them but implement processes and programs that target appropriate interventions to improve health care, reduce readmission, and be cost effective. That's for the LACE index itself.

MIC CAVAZZINI:            Could I ask one more nerdy question about the numbers? Again, the first part of the study, the LACE index, is based on length of stay, the Charlson comorbidity index, ED presentations. Now, I think you've mentioned in the results that each of those was statistically different between the low-risk group and the high-risk group. Were any of them particularly strong drivers?

REINHARDT DREYER: So just in the LACE index itself, the Charlson comorbidity index is probably the biggest driver of validity and accuracy. You know, the higher your Charlson score, the higher your risk, and there's a clear correlation in that, and not just in this particular score. The original Charlson score was actually updated, I think about 15 years back, to include age, and that also increases the validity and accuracy of the score. In our particular study, what we found was that adding things like age over 65—so once you reach 65 and above, there was a clear exponential increase in your risk of readmission. Also, males were more likely to get readmitted, and that's not only in this particular study; it's been mentioned in quite a lot of other studies, for a variety of reasons.

And then there's also length of stay that's too short. Shorter than three days increases the risk of readmission. So there's that fine balance where you don't want to keep a patient for too long, because then they get hospital-acquired complications. But if you discharge them too soon, they may just come back again.

MIC CAVAZZINI:            That length of stay variable sort of tells you what a fuzzy proxy it is. A long stay could mean a sicker patient, but a short stay could mean poorer care. Who knows? All these things are filtering in without you knowing how noisy the signal is.

REINHARDT DREYER: The literature often reports—though the definitions are variable—that a readmission within 72 hours, or three days, is actually classified as what they call a “premature discharge”. Which means even if you initially did nothing wrong, you probably shouldn't have discharged them. So, that speaks to the clinical part of it, because you're going to get it right sometimes and get it wrong sometimes. You can't always find that balance, but using the LACE index, or just something that gives you a little bit more reassurance—is this a high-risk patient or not? Is this the right decision?—that's often where I think tools like this add benefit. They should be practical and easily implementable, and that's often where I've found these tools useful.

CHRISTIAN GERICKE: Yes, maybe we should go back to the second part of your findings. I mean, this is the reason why I chose your paper; I found the key finding really interesting also for practicing physicians: that you found that over 40 percent of readmissions were, in fact, avoidable. And this is also the factor that's most important, I think, for policy decision makers. So, we can get back to that.

REINHARDT DREYER: Looking at the classification tool, what we did find is that up to 41 percent of patients who came back potentially had a readmission that may have been avoidable. We are looking at doing a validation study on the specific classification tool. But when we looked at just why they actually come back, what we found is that the commonest reason was healthcare-associated infections—about 13.4 percent of the patients who came back, which is slightly higher than the state average of about 8 percent. So, following the study, we also looked at what kind of infections: typically catheter-associated infections and lung infections, which can't always necessarily be prevented. But we could then look at what we can do better.

Part of that was also cardiac readmissions, where patients often come back on the second admission with fluid overload. On the first admission that may not have been the primary reason for admission, so it could be that we are giving a little bit too much fluid in that first admission, even though they seem to be doing better. With any fluid overload problem that can eventually catch up with a patient, and that's the reason why they come back. So that could translate into more care with the amount of fluid that gets prescribed in the hospital. What could we do differently? If we're seeing a lot of patients coming back for adverse drug reactions, because there are challenges with a medication, should we actually have a dedicated pharmacy clinic to try and avoid that? And maybe I can ask James to jump in there, if you wanted to address more of the wider implications?

JAMES GOME: For the novel classification tool, as you've pointed out, Reinhardt, this is a new tool, and we really need to have it validated in other settings beyond our own. But I think it's a really exciting opportunity to do that. And if we were able to do that, to assess the tool’s reliability and effectiveness, then we really could apply that or use that in broader clinical applications beyond general medicine, potentially, and certainly beyond our local experience.

CHRISTIAN GERICKE: Yes, I wondered how much of the tool is generalizable to other specialties. I guess the more you go into subspecialty work, the less generalizable it is. I just thought of my own work in neurology; I think it would be very hard to use there, and it probably wouldn't perform as well. But I can imagine that, for example, for a big general surgery service, it's probably very similar to what you are seeing. What do you think about that?

REINHARDT DREYER: I think you're absolutely right, Christian. When I developed this classification, it specifically looked at general medical patients and surgical patients. Obviously, it does not include children, because they've got different disease processes, but otherwise it's something that we could all use. And you're right, if you go into subspecialty medicine they see different diseases, and readmissions would often be for different reasons, whether that's the disease itself or the treatment that's given for it.

MIC CAVAZZINI:            Again, just to be explicit: this novel tool was to separate the preventable—the avoidable from the non-avoidable readmissions. Which data were going into that algorithm? And how do you confirm whether it's right or not?

REINHARDT DREYER: Correct, yeah. So, obviously, it wasn't in this specific paper because we were limited a little bit by word count. But this classification is a combination of international frameworks together with the AIHW, the Australian Institute of Health and Welfare, which has basically developed a list of reasons for readmissions that they use as a classification. It's a really comprehensive list—I think there are about 20 or 30 different classifications—based on ICD-10 coding.

But again, this is often reported to the government after discharge, and is used more for statistical analysis and planning of long-term policy. So, the idea behind this classification was more: can it be used as a point-of-care test for clinicians to say, well, I can't wait six months for this data to be released to actually do something about it? I have a group of patients who seem to be coming back; let's classify them. What can we do at our health service level to actually identify the patients who are coming back? Why are they coming back? And what can we do about it? Whilst the government focuses, obviously, on the long-term policy changes that they want to implement.

JAMES GOME: Mic, as an analogy, we often think about infections: we have broad advice on what to do for particular infections, and that guides our antibiotic choice and our decision-making. But, in fact, local data tells us resistance patterns, tells us which organisms are more likely to be present. That's what this allows as well. So, another way of explaining what Reinhardt is saying is that we've got this larger data set, which is reported back at a state level, or at a governmental level. What we want is to have that available to clinicians at a local level, to identify what needs to be targeted for the local cohort rather than across a broader group.

CHRISTIAN GERICKE: No, that's very good. I have two more research questions. So, one of them is: did you face any barriers in doing your work?

REINHARDT DREYER: For me, personally, it was just time. Well, two barriers. Probably HREC, the Human Research Ethics Committee. The College does a good job of teaching us to do a bit of research but doesn't necessarily provide all the tools, and it would need to be supplemented by either further training or getting a mentor who can help you, because the HREC was particularly challenging. Not unreasonable; they had really good comments and reasons why they pushed back. But trying to navigate the HREC takes a lot of skill. So, I think that is one of the barriers that we faced.

And then, we don't necessarily get dedicated time to do research as clinicians. You're employed, you do clinical work, and unless you've got employment as a research clinician, time is often a barrier to doing research. So, I think there should be more emphasis at a health service level on actually getting clinicians to do more research as part of their role, not just saying you do it because you like it. I think there should actually be dedicated FTE specifically for research.

CHRISTIAN GERICKE: Yes, no, I fully agree with you. And that's one of the main differences. I've worked in major European and UK academic health centres, and that's one of the key differences: research, clinical practice and teaching were all integrated overseas. In Australia, they run in parallel, and if you want to do research, you have to actually reorganize your own workload around it. It's not, in general, supported, even in large tertiary hospitals. Well, it should be. James, you had a comment.

JAMES GOME: Christian, for me, my comment would be about acknowledging your strengths, and acknowledging where you need to augment them with other people. I think this paper in particular speaks to a clinician-led desire for information: to identify between ourselves what strengths we had, but also what gaps we had. And having enough networks and relationships, and developing those strengths, to be able to enlist other people with other areas of expertise, to make something that we're very proud of, but that neither of us individually would have been able to achieve.

And so, I would encourage people listening who think, “Oh, this will be too hard”, or that this isn't possible: acknowledging and gaining from others' skills is what physicians are very good at, isn't it? We're very good at working in teams. So my encouragement would be for clinicians who feel they can't do research: you actually can. And it does get easier as you learn the pathways, the skills, and effectively a new language. I would encourage people to do that; it's been very rewarding.

MIC CAVAZZINI: I wasn't expecting the conversation to go in this direction, but it's helpful in the context of CPD. The Medical Council expects clinicians to be auditing their own practice, and, you know, how do you carve out the time and the permissions to do that efficiently, without it being a drag? That's really useful.

CHRISTIAN GERICKE: I think it's also useful for the advanced trainee research projects, that you have to start your approval processes early. Otherwise, you run into time difficulties, because you have to do that during your training. So, my advice is to start early and get help early.

MIC CAVAZZINI: I think we've covered it pretty well. That was a very thorough discussion.

CHRISTIAN GERICKE: You have enough material?

MIC CAVAZZINI: Yeah, that makes the second part harder, the thinning it down.

REINHARDT DREYER: Yeah, thank you to Mic, Christian and James. I think this was a good discussion.

CHRISTIAN GERICKE: Yeah, thanks, Reinhardt. That's excellent. And I really like your approach, how you link your research—how it translates to the clinic, how it changes your prioritization for outpatient clinics, your approach to service planning, starting a new outpatient clinic. And linking the inpatient world and the outpatient world, which in many hospitals run very much in parallel. So, I'm very impressed that you're doing this in a small regional hospital. Well, it's just one of the reasons why we have you on the podcast; more people should know about it.

MIC CAVAZZINI:            Thanks again to the research authors, Reinhardt Dreyer and James Gome, and to IMJ editor Christian Gericke for giving yet more of his time to this College project. I’m also grateful to all of the reviewers who chime in every month to have a listen to these audio drafts and provide feedback. You’ll find their names in the show credits at racp.edu.au/podcast.

There you’ll also find the musical composers you’ve heard, and a link to today’s article from the Internal Medicine Journal. Remember that all RACP members have complimentary access to this, along with the Journal of Paediatrics and Child Health and the Occupational Medicine Journal. Just go to racp.edu.au/fellows/resources/journals and follow the links to log in.

Of course, time spent reading journals and listening to these podcasts can count towards CPD for Fellows of the College. For each episode there’s a link that takes you to a prefilled MyCPD page logging this as a category 1 Educational activity.

This podcast was produced on the lands of the Gadigal clan of the Eora nation. I pay respect to the generous custodians of this country. I’m Mic Cavazzini, thanks for listening.
