Ep92: Data-driven practice improvement

Date:
26 February 2023

Fellows of the College can claim CPD hours for listening to the podcast and reading supporting resources. Login to MyCPD, review the prefilled activity details and click ‘save’.

Medical and administrative records are normally collected to help the management of patients or institutions, but it can be time-consuming to extract metrics useful for practice improvement. The field known as Practice Analytics seeks to transform these data and provide clinicians with a bird’s eye view of their case load and performance. Practice Analytics can draw attention to cases that stand out from the trend, not for any regulatory purpose, but simply to help clinicians reflect and improve. This could even act as a shortcut to meeting the new requirements for CPD imposed by the medical boards.

Credits

Guests

Professor Tim Shaw (University of Sydney; Research Director, Digital Health Cooperative Research Centre)
Dr David Rankin (Director Clinical Governance and Informatics, Cabrini Health)

Production
Produced by Mic Cavazzini DPhil. Recording assistance from Jon Tjhia in Melbourne. Music courtesy of FreeMusicArchive includes ‘Transference’ by Ben Carey. Music licensed from Epidemic Sound includes ‘Emerlyn’ by Valante. Image by Courtney Hale licensed from Getty Images.

Feedback on this episode was kindly provided by Dr David Arroyo, Dr Joseph Lee, Dr Victoria Langton, Dr Aidan Tana and Dr Jia Wen Chong.

References

Exploring the Intersection Between Health Professionals’ Learning and eHealth Data: Protocol for a Comprehensive Research Program in Practice Analytics in Health Care [JMIR Research Protocols, 2021]
Attitudes of health professionals to using routinely collected clinical data for performance feedback and personalised professional development [MJA, 2019]
An exploration into physician and surgeon data sensemaking: a qualitative systematic review using thematic synthesis [BMC, 2022]
Hospital-Acquired Complications (HACs) List (12th edn) [ACSQHC]

2023 MyCPD Framework explained
MyCPD interactive handbook
Podcast Ep39: Rebooting CPD Part 1—Origins
Podcast Ep40: Rebooting CPD Part 2—Feedback and Audit

Transcript

MIC CAVAZZINI:               Welcome to Pomegranate Health, a podcast about the culture of medicine. I’m Mic Cavazzini for the Royal Australasian College of Physicians.

Today’s podcast is about making the most of routinely collected data to help tune up your practice. There are gigabytes of information added to electronic health records every day, but they’re usually designed to guide management of the patient or the hospital. In a field known as Practice Analytics, these data are transformed to provide clinicians with a bird’s eye view of their case load and performance.

The basic idea isn’t new. There already exist registries that compare complication rates between organisations, but the resolution is too coarse for the individual practitioner to learn much. Maybe you’ve gone to the effort of conducting a retrospective audit of records to get better metrics on your team’s performance. But who’s got the time, right?

As we’ll hear, Practice Analytics seeks to provide personal feedback to clinicians with a faster turnaround. It can draw attention to cases that stand out from the trend, not for any regulatory purpose, simply to help you reflect and improve. This could even act as a shortcut to meeting the dreaded new requirements for CPD imposed by the medical boards.

Practice Analytics is one of forty projects being tackled by the Digital Health Cooperative Research Centre. The Digital Health CRC has partnered with the RACP as well as many government health departments and providers of healthcare and tech. The collaboration is mid-way through a seven-year Commonwealth grant that was secured by Professor Tim Shaw and collaborators.

TIM SHAW:        So thank you. So I'm Professor Tim Shaw. I'm Professor of Digital Health at the University of Sydney. I'm also the lead researcher on the Practice Analytics project we're going to talk about today.

MIC CAVAZZINI:               And joining us from a Melbourne studio, that's to say his office, is David Rankin, Director of Clinical Governance and Informatics at Cabrini.

DAVID RANKIN: Thanks, Mic.

MIC CAVAZZINI:               Practice Analytics is about using clinical indicators from medical records to help a clinician reflect on their practice. David, what kind of indicators are we talking about? I’ve heard you describe how when you started your role you had a list 131 items long. Are these all coming from the EMR or are there different sources as well?

DAVID RANKIN: It's been a fascinating journey. And as you reflected, yes, we had something like 131 indicators around the hospital. And when I tried to trim those down and say, “Well, which ones are actually meaningful for clinicians?”, there were very few. With hospital operations, the data has always been used to inform management. And when I look at the KPIs that our chief executive focuses on each month for her board reports, they’re completely irrelevant to clinicians.

MIC CAVAZZINI:               Specifically, what are the handful of indicators that you've homed in on and identified as being valid?

DAVID RANKIN: First of all, we look at the core procedural processes: how long did the patient spend in theatre? How long did the patient spend in hospital? And then we have a few indicators that try to predict outcomes. These are things like, did the patient have a MET call and end up in ICU? That's unexpected. Did the patient end up going back to theatre? Did they have a bleed? Or did they have some complication that took them back to theatre? Again, that's unexpected. Did the patient come back to hospital within 28 days of discharge? That's not always expected. Sometimes it is, but it's usually not expected. Did the patient have a transfusion for a procedure that doesn't usually require a transfusion? We have some other metrics, like did the patient have robotic surgery, or laparoscopic surgery, when those sorts of interventions are not normal for that type of patient.

MIC CAVAZZINI:               Maybe just taking a step back, tell us about your—you've got a cohort of 300-odd proceduralists. So when you pick out any one of these indicators, are you looking at a plot of each clinician’s score on that metric?

DAVID RANKIN: So we have a two-stage process. As soon as I get notified that a patient has gone back to theatre, I’ll write to the doctor and say, “Were there procedure-related issues, were there hospital-related issues? Have you done open disclosure? Are there issues that we should be aware of, that we could put in place to try and avoid this happening again in the future?” And that provides, usually within 24 hours, an immediate response from the clinician to tell us why this unexpected outcome occurred.

Then on a quarterly basis, we collate all of the procedural outcomes for all of our specialists and provide them with a personal report. We break the procedures down into clusters. So all of the MBS codes for colonoscopy are put together for colonoscopy patients. And then we look at the process of colonoscopy; did they come in the night before? How long did they spend in theatre? Were they day cases or overnight cases? Did they require a transfusion? Not usual for colonoscopy, but every now and again, one does. Did they come back to hospital within 28 days?

We then provide that data to the individual clinician saying, “this is your outcome across the seven or eight indicators. These are your peers. These are your volumes compared with your peer volume.” And then we highlight the cases of interest and say, “Here's two or three cases out of the 250 colonoscopies you did over this quarter that you might like to have a look at, because they don't seem to have followed a normal pattern.”
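To make that workflow concrete, here is a minimal sketch of how a quarterly report like the one David describes might be assembled from routinely collected episode data. The file, the column names, the MBS codes and the flagging thresholds are all illustrative assumptions, not Cabrini's actual pipeline.

```python
# Minimal sketch: quarterly per-clinician report with peer comparison and
# flagged "cases of interest". All names, codes and thresholds are assumed
# for illustration only.
import pandas as pd

episodes = pd.read_csv("episodes.csv")  # one row per procedural episode

# Cluster related MBS item numbers together, e.g. all colonoscopy codes.
COLONOSCOPY_CODES = {"32090", "32093"}  # hypothetical subset
colo = episodes[episodes["mbs_code"].isin(COLONOSCOPY_CODES)]

for clinician, own in colo.groupby("clinician_id"):
    peers = colo[colo["clinician_id"] != clinician]

    summary = {
        "volume": len(own),
        "median_theatre_min": own["theatre_minutes"].median(),
        "peer_median_theatre_min": peers["theatre_minutes"].median(),
        "readmit_28d_rate": own["readmitted_28d"].mean(),
        "peer_readmit_28d_rate": peers["readmitted_28d"].mean(),
    }

    # Flag episodes well outside the peer distribution: theatre time beyond
    # the peers' 95th percentile, any transfusion, or a return to theatre.
    cutoff = peers["theatre_minutes"].quantile(0.95)
    flagged = own[
        (own["theatre_minutes"] > cutoff)
        | (own["transfusion"] == 1)
        | (own["return_to_theatre"] == 1)
    ]

    print(clinician, summary)
    print("  cases of interest:", list(flagged["episode_id"]))
```

The point is the shape of the output rather than the code itself: a handful of summary numbers set against peers, plus two or three specific episodes for the clinician to reflect on.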

MIC CAVAZZINI:               Procedural medicine, surgery, it's quite a constrained process. And maybe those indicators are easier to pin down. Is it a more complicated process for other types of Internal Medicine?

DAVID RANKIN:                Procedural interventions are much, much easier to track for two reasons. One is hospital care is episodic. And a surgical episode tends to be a constrained single episode; you have your gallbladder out, you don't have your gallbladder out again, you don't normally expect to come back to hospital after having your gallbladder out. But if you've got something like heart failure, you may well come back to hospital at regular intervals, even if you're getting excellent care.

And there's two problems. We might be on bypass next time you need to come back to hospital, so you'll end up at another hospital, which we have no idea about. And we don't track deaths after hospital. You're discharged from Cabrini, and we really don't know what happened to you afterwards, unless the coroner writes to us and says, “This patient died six weeks later. Please explain what you did.”

MIC CAVAZZINI:               The gallbladder removal should fix the problem. Whereas with other types of practice, it's a bit more trial and error; “Will this medication work? What range do we want to get the outcomes within?” It's not quite as black and white, is it?

TIM SHAW:        And then I think in many of the non-procedural disciplines, that's where we perhaps use registries more as well. So I've worked in cancer, where there's a lot of work done around ‘time to treatment’. There's lots of metrics you can capture around somebody's chemotherapy protocols. ‘Chemotherapy close to death’ is one of the metrics that people use in terms of, perhaps, looking at overuse of chemotherapy at end of life.

So I think there are other metrics we can use. So I think over time, we will start to use an interface between the type of data that David is talking about, which is largely patient administration data, PAS data, and start to merge that in with the kind of clinical data that we have. And I think as the system matures, that's what we're going to have. The trouble at the moment, I think, is that maybe the registries we’re creating are not actually used by clinicians very much. You know, how often those registries really turn around and start to feed that data back in is limited. So at the moment we've got a more limited data set, particularly in the private health system.

There's some key ones as well that we're looking at, certainly around patient-reported outcome measures and patient-reported experience measures, which give you a different set of data. I mean, the oncologists—and as I said, I've done quite a lot of work in oncology—they're really keen to know what their patients think. It really is those PROMs and PREMs layers. I mean, they might be technically very competent, but they really don't know how their patients are responding, what the patient-reported outcome measures are and so on. So yeah, there's a definite hunger for that. And you know, as a high-performing, potentially high-risk industry, it's unusual at this point that we're still just feeling our way.

MIC CAVAZZINI:               David, when we chatted briefly the other day, you said that performance data are often over-reliant on averages.

DAVID RANKIN: It comes back to the issue that managers rely on trends and graphs. The clinician relies on the individual patient. As soon as you give a clinician a graph or a trend, they immediately start to self-justify: “Yes, but all my patients are older. All my patients are sicker. GPs only send me the nasty ones. This is the area that I specialize in, I've got a reputation.”

On the other hand, the cardiothoracic surgeon may have an average theatre time that's within normal limits. But that's because he's very fast and slick on most of his procedures. And he's probably had two or three that have been absolute disasters, and his average looks really good. You can look at median, but that really doesn't make a big difference. But when you go to that same clinician and say, “Look, you admitted Mrs Smith last week, and this is what happened, let's talk about it,” you immediately get much more nuanced discussion and feedback that helps that clinician reflect.

And we're quite happy if they say it was unexpected, but unavoidable. You know, “I did the procedure, because there really wasn't anything else and the patient desperately wanted me to, and there was a chance that it was life-saving.” That will completely throw out your averages, but there's a good ethical, clinical judgment reason why the clinician did that procedure. If the clinician says, “Look, yeah, I wasn't paying attention, and I probably should have done that”—great piece of reflection. We say, “Excellent, thank you. Let's just keep an eye on it and make sure it doesn't happen again.”
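David's point about averages is easy to demonstrate with a toy calculation. The theatre times below are invented purely for illustration, as is the flagging rule.

```python
# A surgeon who is fast on most cases can absorb a couple of disasters and
# still post a "normal" average theatre time. All numbers are invented.
import statistics

fast_cases = [40] * 28           # slick routine cases, minutes in theatre
disasters = [200, 220]           # two cases that went badly wrong
times = fast_cases + disasters

print(statistics.mean(times))    # ~51 min: looks within normal limits
print(statistics.median(times))  # 40 min: looks even better

# Only case-level screening surfaces the two cases worth discussing.
threshold = statistics.median(times) * 3    # assumed flagging rule
print([t for t in times if t > threshold])  # [200, 220]
```

Neither the mean nor the median gives any hint that two patients had a very bad day; only looking at individual cases does.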

MIC CAVAZZINI:               In one of your group seminars, which I'll link to on the website, Julian Archer, the former GM of education at the College of Surgeons, said, I think he was quoting a government minister, “we want all doctors to be above average.” The absurdity of that statement highlights how tricky it is to present statistics in a meaningful way for a clinician.

TIM SHAW:        I mean, this comes to the heart of what we're trying to achieve here. This is about performance improvement, rather than performance management, although obviously there's a tension between those two. So I think everything that David, and Jeanette Conley at the SAN, and the other groups we work with are doing is really about working with clinicians to continuously improve where they're at. So this is not about league tables. This is not about direct comparison of…

MIC CAVAZZINI:               Benchmarks? How useful are benchmarks?

TIM SHAW:        I don't think it's so much about those benchmarks. It's about giving teams and individuals access to information to support their continuous proactive monitoring of where they're going, and how they're performing as individuals and teams. So it's not about trying to pick the bad apple. It's a different mindset; we've had this challenge in health for many years, that every clinician has to be perfect. I mean, “To err is human” has been in the media for many years now, in terms of we have to acknowledge that clinicians make mistakes, they need to be helped, they need to continuously improve. Not everybody can be at the top of their game all the time. And to me that underpins the whole idea of practice improvement and Practice Analytics.

DAVID RANKIN: There are certainly cases that come up every now and again where we see a clinician that has a consistent pattern that is worrying. Say, a surgeon will take a patient or group of patients back to theatre more frequently than we think his peers do. It’s not diagnostic; I can't tell if this is a good doctor or a bad doctor. But what I can say is that there are some things happening around this doctor's practice, or this doctor's patients, that don't seem to be aligned with expectations. And so it gives me the opportunity to engage the doctor and say, “You might like to look at these patients. I don't know what's happening, but I think this needs to be explored.” And we'll talk to the surgeon, we’ll talk to the craft group. And sometimes we'll bring in an independent expert to have a look at a series of case studies.

That's rare, but it's an integral and important part of performance management, performance appraisal, clinical analytics. It shouldn't be the sole use of data to find those doctors that need an independent review. Most doctors have great self-evaluation and awareness, and are prepared to look at individual cases and see if they can improve, and put those improvement processes into place quite quickly.

MIC CAVAZZINI:               Now, you've suggested how all these dashboards don't necessarily mean anything. What's the right approach to presenting this information?

DAVID RANKIN: As a clinical informatician, I love dashboards, and I play with them most of the day. So I've developed this massive data dashboard for anaesthetists and I can look through and see all the different patterns and things like that. I've shared it with three or four of our anaesthetic leaders, and they've gone stunningly quiet. When I've taken that dashboard, and teased out some of the issues and said, “Look, I think there's a problem with nausea and vomiting after anaesthetic with this small group of anaesthetists”, they've said, “Oh, jeez David, this is brilliant.” You've really got to take the data and tease out the essence, and then provide that essence in a meaningful way to clinicians. There are some clinicians that are great at playing with dashboards and spreadsheets and pivot tables and whatever, but we shouldn't assume that just because we've given clinicians access to their data or their dashboard that we've moved the quality improvement process along.

TIM SHAW:        When you look at reviews of the literature from a number of years ago, where people were just given information, the literature shows that doesn't have impact. Well, that's no great surprise, because there's nobody there to help you through with that. I think a key piece I've increasingly come to realize is what I'd call the David or the Jeanette factor, which is really how we start to introduce the kind of learning analysts into this mix. So we have lots of other types of analysts within the system, but I'm increasingly seeing that we actually need people that are much closer to the coalface, really understand the clinical data, often have a clinical background themselves. But they have to be close to the teams because they've got to really understand the context of that hospital and of those care teams at some level.

MIC CAVAZZINI:               I imagine it's not a case of these doctors getting called into your office, David, you know, like getting called to the Principal's office. Is it more routine than that; do they know that four times a year or so there will be these kinds of meetings? Are they expecting this kind of depth?

DAVID RANKIN: I think that's really critical. Cabrini some years ago got itself into quite significant difficulty with the clinicians, where a well-meaning senior doctor used data that was probably not as clean as it could have been to try and hold doctors to account. And so the organization started making changes to theatre availability based on things like on-time stats and theatre utilization. And the data was not recorded well or accurately. And it created quite a ruckus, which was most unfortunate. Over the last couple of years, I think we've built a much more trusting arrangement. But there's a couple of things that we need to be careful of. Who has access to the data?

We put out our quarterly data last week to most proceduralists. And I've been delighted with the emails that have come back. Most of them have said “Great data, David. Thank you very much. Really appreciate it, that’s really helpful.” A couple have come back and said, “Your data is wrong”. One of our urologists said, “You said, I only did two robotic procedures. I've looked at my data I did three. You’ve missed one, therefore your data is crap.” That's opened up a great piece of discussion, saying, “Look, I rely on these codes. Either you've put the wrong code in or our coders have done it wrong.” And we've had several interactions now to improve the way we extract the data to make sure that yes, we will next month pick up all of his cases.

Others have come back and said, “Look, I don't think you should include this procedure in that group. It's skewing the data. I do a lot of this procedure, you're not comparing apples with apples.” That's the type of dialogue that is incredibly useful and helpful, and helps us continually refine the reports that we put out. It also gives me assurance that the doctors that are receiving the reports are reading it. And we've got active dialogue. That's exciting.

MIC CAVAZZINI:               You've both supervised research staff who have surveyed practitioner responses to this Practice Analytics concept. And there is a lot of variation in sentiment on how granular they wanted the findings, on whether graphical or descriptive forms were more useful. Clinicians earlier in their careers tended to be more enthusiastic than those already well-established. And there's this paranoia, of course, that analytics will be used as a stick. Tim, talk us through the most revealing responses from that 2019 survey in the MJA.

TIM SHAW:        So we surveyed—well, we ran a number of focus groups, actually, with clinicians at all different levels of training. And actually not just doctors, we talked to nurses and allied health professionals as well about what they felt about the use of performance data in this context. And, look, I don't think paranoia is the right word, I think there's genuine concern. I think they're rightly concerned about this data and information, because I mean, the truth is that it is often used against them. I mean, my experience in most hospitals is that usually it's built around a problem. Which is at the heart of quality improvement, which has enormous value, right? But quality improvement is about looking at where you have a problem.

So, I mean, I think the first thing that we really experienced was there is a genuine enthusiasm for people to access this information. There's a genuine want to do that. Obviously, with the caveats, though, that it has to be done very carefully. So the quote I had from a young doctor was, “If there's a policeman in the room, then I'll do only the minimum that's required.” So I think that really says that if you don't do this well, then doctors will see it as a fishing exercise to trip them up, or the data will be used inappropriately downstream and so on. So we have to build those structures in.

EMR data, I always say, is looking at the world through a letterbox, in a way. It's a snippet of information, so it's hard to draw a complex picture, which is why you need to then go into that kind of narrative discussion about what was the case? What were the things around that? What don't we understand from the data that's in there?

MIC CAVAZZINI:               And there was a more positive quote in there as well, from an oncology Fellow who said, “Honestly, I think this is something I would drop everything to do. Otherwise, how on earth would I know what I'm doing is right?” And that speaks to the issue that once you've finished your training, there is no regular feedback; there is no one, nothing built in. David, you've mentioned how this understanding has matured in your time at Cabrini, from what they thought it was going to be used for to what it's actually been useful for.

DAVID RANKIN: It raises the whole issue of benchmarking as well. Here in Victoria, VAHI produces excellent reports on hospital-acquired complications by hospital, which at the management level creates quite some excitement. But that level of data is way too high to get clinical change. So we looked at our infections; predominantly urinary tract infections and pneumonia. At an organizational level, saying we've got a problem with urinary tract infections doesn't make change. Everybody gets focused on it, but you don't have owners.

It was only when we drilled down on urinary tract infection and said, “It looks like it's orthopaedics”. And then said, “It looks like it's elderly women in orthopaedics.” And then said, “Well, it's actually elderly women with fractured necks of femur,” that it got specific and we could start getting ownership. So we then approached the emergency department and said, “Look, you're putting catheters in, you need to tidy up your insertion technique.” And we went to the orthopaedic surgeons in the wards, and said, “You need to make sure catheters come out on a timely basis.” By getting specific and getting ownership, we've been able to reduce urinary tract infection rates materially.

So again, it's different by craft group. We looked at cardiothoracic surgery, their HAC rate, and we were alarmed when we saw that something like 25 per cent of cardiothoracic surgical patients had a hospital-acquired complication. Until we got data back from three of our other, very comparable hospitals, and found we were spot on the same average rate. We're now much more concerned about hospital-acquired complications in areas like ENT and ophthalmology, where the rate is tiny, but when they occur, they're eminently preventable. So, benchmarking again goes back to this average. You can look really good on average, but when you drill down to an individual specialty, you start to see variations that require action.
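The drill-down David describes, from a hospital-wide rate to an "ownable" subgroup, follows the same pattern each time and can be sketched in a few lines. Again, the file, fields and groupings are illustrative assumptions rather than any hospital's actual schema.

```python
# Sketch: drilling an organisation-level complication rate down to a
# specific, actionable subgroup, as in the UTI example above.
# File and column names are assumptions for illustration.
import pandas as pd

admissions = pd.read_csv("admissions.csv")  # one row per admission

# Hospital-wide rate: creates excitement at board level, but has no owner.
print("overall UTI rate:", admissions["uti"].mean())

# Drill down by specialty to see where the signal is concentrated.
by_specialty = admissions.groupby("specialty")["uti"].mean()
print(by_specialty.sort_values(ascending=False).head())

# Then drill into the worst specialty by diagnosis, sex and age band until
# a group emerges that a particular team can actually act on.
ortho = admissions[admissions["specialty"] == "orthopaedics"]
detail = ortho.groupby(["diagnosis", "sex", "age_band"])["uti"].agg(["mean", "size"])
print(detail.sort_values("mean", ascending=False).head())
```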

MIC CAVAZZINI:               In a recent IMJ On-Air podcast, I talked with Graham Duke at Eastern Health intensive care about his research trying to validate the hospital-acquired complications program. After auditing medical records by hand, he and his colleagues found that the presence of complications was rarely a marker of poor quality of care. It didn't indicate that best practice guidelines had been overlooked, but rather that the risks could have been predicted based on the patient's condition on admission and so on. And, David, you've got a couple of students who have drawn similar conclusions about ‘length of stay’ as a clinical indicator. So how do you go about validating and making sense of these indicators when they are a few degrees of separation from practice, and there are so many confounds that can affect patient outcomes?

DAVID RANKIN: The challenge is, what is perfection? Where are we aiming? I think urinary tract infections are a good example, where so many clinicians accept urinary tract infections as, you know, an expected consequence, particularly in the frail elderly. And yet I'm on a hospital board that has a senior clinician from the US. And she's appalled at Australia's acceptance of urinary tract infection rates, even in the elderly, where her hospital in the US has effectively been able to completely eliminate urinary tract infection. So the challenge is, what level of complications do we accept as unexpected but unavoidable? And what do we owe to our patients as a reasonable standard of care? That's a question I don't think we've really entered into in Australia at this stage. I think that's probably the next phase of performance analytics: what is best practice?

MIC CAVAZZINI:               Getting back to those focus groups, those surveys, the “policeman in the room”, which, as Tim said, is not paranoia but potentially a real fear. I mean, you've also mentioned the Hadiza Bawa-Garba case, a paediatrician who lost her licence to practise after the death of a 7-year-old child. Her reflection notes, which her consultant had asked her to write, were then used in the hospital’s internal investigation of the incident, though not to prove liability in court as some sympathetic media reported. Have you already got some firm rules about who does get access to this and how it is used?

DAVID RANKIN: We have informal rules. I think they need to be structured. And one of the components that I'm hoping the DHCRC project will come out with is some rules of engagement. At what point should I inform my chief executive, who's a non-clinician, about individual clinicians’ performance? I think that should only occur where we've, first of all, engaged with the clinician and said we have a problem. We've engaged with the craft group lead within that craft group to say, yes, there is probably an issue that needs to be answered. And probably undertaken a chart review or a pattern analysis, and then informed the chief executive that we have identified a potential issue and how we're going to manage it.

I don't think the non-clinicians should be party to those day-to-day conversations about patients of interest. So I think there's a lot we as clinical managers and informaticians owe to our clinicians in providing them with comfort about how we're going to use the data, who's going to have access to the data, what the consequences of that access would be and the process that we would use to enter into a dialogue and investigation.

MIC CAVAZZINI:               So this might be formalised in terms of an agreement form that both the practitioners and the hospital sign? Tim?

TIM SHAW:        Look, I think this area is one of our most challenging, I really do think it's a challenging area. Perhaps bigger than the technical challenge, in many respects, is what's going to come out of this in the medico-legal sense. And we do have a dedicated PhD student who's actually working on this, just started; she has a legal background. So I think we're looking to that project for some really practical recommendations for organizations about how we can best approach this. Because it is, I think it's fair to say, a little grey at the moment, about where the data is and where it sits and how you manage that. So my great fear is that we'll overreact in one direction, and then clinicians will walk away, so I think we have to get this right. Because if you have an organization where clinicians don't trust the leadership, then they're not going to touch the data, or they're not going to engage. So regardless of how many frameworks or legal documents you have, or however you approach it, I think if you don't have that trust built, then it's going to fail.

MIC CAVAZZINI:               And that relationship between clinical leaders and administrators is something I'm going to talk about in another episode, because it all starts from there. In the webinar I mentioned before, you also had the past President of the RACP, John Wilson, on board, who was pretty clear that he had no interest in accessing Practice Analytics data from an auditing point of view. And the chair of the Medical Board of Australia, Dr Anne Tonkin, said the MBA would only ever get involved in the way they normally do; if the practitioner and the hospital and the College hadn’t done their due diligence to follow up.

TIM SHAW:        Absolutely. And I think there's a genuine commitment from the MBA and others that are very aware of this project; obviously, you know, very supportive of the project, actually, because they want to see this come through. And this isn't another layer of policing for them, this is not what they’re wanting. They're absolutely wanting this to be the supportive program that actually helps support the Professional Performance Framework, which I'm sure we'll talk about. And I've never heard an Australian college want to have this data to use it in any way, from a college perspective, to influence progression or scope of practice, or whatever it might be.

DAVID RANKIN: It raises a really interesting point. Data is so often seen in the negative. It's there to catch the outliers. I think that probably a more important aspect of data is to give assurance, to high-risk proceduralists in particular, that they're doing a really good job. I had a surgeon call me about six months ago after he received his individual report, and say, “What does this all mean, David? I don't understand it.” And I said, “Hang on, hang on, settle down. First of all, it says you're a really good surgeon. You're performing slightly better than your peers on both length of stay and theatre time. And your outcomes on readmission rates are really low.” “Oh,” he said, “does it? I really like this data. Tell me more.” And then we were able to say, “But hang on. There's two patients that I think are worth looking at, because the outcomes weren't what I expected.” “Oh,” he said, “Yeah. All right. You got me.” And we then had a really positive dialogue, and he's now one of the champions of the data reports amongst his peers.

So giving this surgeon some of the, I don't know, perhaps the first feedback he'd had that he was a good, solid, high-performing surgeon is really positive. We often don't see the data in that positive light.

MIC CAVAZZINI:               Tim's already mentioned the Professional Performance Framework; the dreaded “Strengthened CPD” that the MBA has brought in this year. Category 2 is “Reviewing one's performance”, which typically has included feedback from colleagues and patients. Category 3 is called “Measuring outcomes” and involves time-consuming audits of patient records or incident reports. So it's early days yet but, Tim, do you think that Practice Analytics could automate some of the grunt work that's required for this?

TIM SHAW:        Absolutely, this is some of the process we're looking at. And look, as you said, we’re what, only six weeks into this program, because it kicked off on the first of January this year. So the policy and process and so on is still being worked out. The way I see it is, if you're a physician or a surgeon at Cabrini and you take part in David's program, then you've been taking part in a meaningful reflective practice. And ideally, that should be an audited trail that goes straight up into your CPD platform. And we are experimenting at Sydney Adventist at the moment with whether you can automate that process. So if you've been in this process, and you've done it, then you get your points. Then I think everybody's happy. You haven't had to step out of your practice, and go, “Oh, my God, where's all my data?”, do that audit and try and make that happen. Ideally, I think we just have this continuous cycle of review and reflection; you can actually have your embedded experience within your facility, and then that feeds up.

MIC CAVAZZINI:               You've both mentioned other registries and the Healthcare Roundtable, which captures HAC rates at different hospitals. There's the QIDS in New South Wales, the Quality Improvement Data System. In Victoria there’s VAHI, and even a patient experience platform called HOPE. This isn't a whole paradigm shift away from that. What it’s doing is using the same sorts of data to prompt that conversation about reflection, right?

DAVID RANKIN: We see performance data and registries working incredibly closely together. As I said earlier, hospital data is episodic. We really only know the patient from the point of admission to the point of discharge, and other aspects of that data we really can't pick up on. Registries complement that enormously. And you know, if I look at the joint registry, they follow patients for years afterwards, which we don't do. They also make sure that clinicians get access to data outside the one organization. So we only have access to Cabrini data. We don't know how clinicians perform or operate at other hospitals, whereas the registry brings all of that together. So I think there's enormous opportunities for hospitals and registries to collaborate, and ensure that we get much more complete, automated and streamlined data collection.

MIC CAVAZZINI:               Yeah, closing the loop.

TIM SHAW:        I think the challenge we fundamentally have is that none of these systems that capture the data are designed to give it back in this way. I think this is the whole problem. I mean, all electronic medical records at the moment really just capture that data. And again, it's episodic, it's around the patient. There's no attempt to really longitudinally aggregate and collect that data over time.

In New South Wales, we have a single digital patient record going into place at the moment, at least in all the public hospital systems. Those types of systems have huge potential to support programs like Practice Analytics. But it's not what the system is designed to do at the moment. It's probably the next generation where somebody will work through to say, actually, these electronic medical records should be designed to wrap themselves longitudinally around clinicians, as well as looking at the patient perspective.

MIC CAVAZZINI:               And in terms of the behavioural aspects; I mean, listeners might rightly be sceptical of CPD generally. It's a bit of a chore and it's very hard to prove the effectiveness of CPD on changing practice. You know, there are so many degrees of separation to practice and outcomes, whereas this is a lot more immediate. Do you take heart from the work of the Department of Health’s Behavioural Economics Research Team? A few years ago they ran a campaign directed at GPs regarding antibiotic prescribing, and they found that it was associated with a 12 per cent drop in antibiotic scripts. Is that the kind of behaviour change that you'd like to be able to measure?

DAVID RANKIN: Absolutely. It's that sort of behaviour that we would like to quantify. It's extraordinarily difficult. Organizationally, we've made enormous differences; we've more than halved our hospital-acquired complication rate. We've almost eliminated night-before admission for colonoscopy, except for the very frail elderly. We've reduced our average theatre time across a number of different procedures. Can that be attributed to sharing data with clinicians? That causation is extraordinarily hard to demonstrate. But we believe it's absolutely made a contribution, and will continue to make a contribution, because we've now got a point of discussion.

TIM SHAW:        I think clinicians don't necessarily look on that evidence around CME and CPD enormously positively when they think they've got to do their CPD; I think that's well recognised. But to change behaviour, I think you need to get much closer to the coalface of where that care delivery is. And that's why we're shifting towards this reflection and audit, where there is evidence, such as the study you just cited. It is really challenging, though, to actually demonstrate causality in outcomes for patients, but it's not impossible. So I think there are ways you can actually create experiments to allow us to do this.

And I think that’s another interesting area for future research; as we get a more mature understanding of somebody's practice, can we use that to almost leapfrog over the whole data display, and actually start to send them case scenarios that relate to their practice? So we might do that particularly with young doctors, say; looking at what they'd been doing in ED overnight, extracting the data, and understanding the cases that they've looked at. And then maybe send them three case scenarios that relate to challenging areas that we know young doctors might struggle with, in particular case areas. So I think that's another very interesting area of this. It’s like a link between Learning Analytics and Practice Analytics, again, to provide that education and feedback back.

MIC CAVAZZINI:               Forgive me for picking up the pessimistic outcomes, but there are perverse examples of the “what gets measured gets managed” aphorism. For example, in the UK, the infamous four-hour rule brought into NHS emergency rooms almost two decades ago is said to have caused a rush of low-triage cases out the door just before the four-hour mark, while more seriously ill patients were left waiting longer than necessary. Are you worried that there might be such perverse behavioural responses, or do you think it will be easy to detect those and accommodate them?

DAVID RANKIN: There's two sides to that. Do we worry doctors that they will be caught up in our performance process, so they only bring the easy cases to Cabrini? A number of our orthopaedic surgeons work at two or three other hospitals. I don't know why they take some of their patients to one hospital and bring other patients to Cabrini. At this stage, we're positive that our process of providing comparative data back to clinicians is seen as a comfort thing. And most clinicians see that level of accountability as positive, and therefore hold Cabrini in a positive light.

Certainly, identifying cases of interest has allowed Cabrini to realize that there are some patients that we don't manage very well. Complex mental health patients, even if they're just coming in, you know, for gallbladder surgery or a colonoscopy, are not managed well at Cabrini. And so we've had to ask ourselves, should we be admitting these patients when we know we don't have the resources to manage them if things go wrong? And we've decided no, as a private hospital, we can't offer the comprehensive services that this type of patient really needs. And so we encourage the clinicians to take those patients, often to a public hospital, dare I say it, that has the resources and skills to manage that type of case. I think that's an appropriate improvement in the quality of care that we can provide. Is it cherry picking? I don't think so.

MIC CAVAZZINI:               No clinician chose their path because it's easy. They want rigorous feedback. And even if they're not working at Cabrini, or Sydney Adventist where you're also partnering, and they don't have this architecture that you're building for them, they could easily use existing platforms and EMR data to start a culture of this themselves. Right?

TIM SHAW:        David and I have talked about this a lot. I mean, we think probably one of the most useful outputs of this project, although there may well be technical outputs from it, is actually a guide, a “How to”. So if you're a hospital, how do you go about this?

A final comment I'd make as well. I, like David, have been working in this space for a long time, over ten years or more, in terms of looking at this area. And I think this project has come at the right time. Many of these things are falling into place. So we've got access to better data, more data. We've got a different breed, I guess, of people like David, that are interpreting this data. We've got the Professional Performance Framework coming into place. The stars are aligning around this a little bit.

But I still see that we have a very delicate flame that David's holding in his hand, which could be stamped out, right? If we don't get this right, if it becomes about the policeman, you know, it could just go the other way: “Okay, it's another thing people have to do.” But I think we've got a number of the stars aligning to allow this to actually be really productive, as long as we build the trust. And we continue to build that information, and we continue to work with people like David to really understand how you do this effectively.

DAVID RANKIN: I guess the other way to look at it is the outcomes at an organizational level. I mean, we’ve got 22 operating theatres at our main hospital here in Malvern. We're opening two more in April. Those two new theatres are already fully subscribed, and we still haven't met the demand. So are we scaring surgeons off? The evidence says no.

MIC CAVAZZINI:               Many thanks to Tim Shaw and David Rankin for contributing to this episode of Pomegranate Health. The views expressed are their own, and do not constitute the opinion or advice of the RACP.

To follow up on any research cited in this interview, please go to racp.edu.au/podcast and click on episode 92. There’s a full transcript there embedded with links and a shortcut to resources that explain the MyCPD framework and help you meet requirements.

You’ll also find music credits, and a thank-you list of the reviewers who provided feedback on early drafts of this podcast. Please share it around with colleagues, and if you have any comments to make, you can post them to the website or send them directly to me via the address
podcast@racp.edu.au

Thanks for listening. I’m Mic Cavazzini and this episode was produced on the unceded lands of the Gadigal people of the Eora nation. I pay respect to their elders past and present, and their ongoing connection to the country I’m fortunate to share.