Ep40: Rebooting CPD Part 2—Feedback and Audit

Date: 26 September 2018

From 2019, there will be only three categories of activities in the RACP's CPD framework, encouraging Fellows to participate in performance review and outcome measurement alongside more traditional educational activities. Performance review can include collegiate exercises like peer review of case outcomes, or surveys of patient experiences. Multi-source feedback is one sophisticated example that has been trialled by the RACP. Outcome measurement typically refers to clinical audits of case notes and there are many forms that can easily be implemented by Fellows.

In this episode, two New Zealand Fellows discuss what they've learned about this 'strengthened CPD' approach since it was nationally implemented there four years ago.

Fellows of the RACP can claim CPD credits via MyCPD for listening to this episode and using the related resources.

Credits

Guests
Professor Tony Scott FRACP (Director of Cardiology, Waitemata Cardiology, Auckland)
Dr Peter Roberts FRACP (CPD Director, RACP New Zealand; Wellington Hospital)

Production
Written and produced by Mic Cavazzini. Recording assistance in New Zealand from Charlotte Graham-McLay. Music courtesy of Gunnar Johnsén at Epidemic Sound ('Task at Hand 2', 'Task at Hand 5', 'The Sky Changes 2') and Blue Dot Sessions ('Vittoro'). Image licensed from iStock. Executive producer Anne Fredrickson.
Editorial feedback for this episode was provided by RACP members Phillipa Wormald, Michael Herd, Rhiannon Mellor, Joseph Lee, Rachel Williams, Paul Jauncey, Rebecca Grainger, Philip Gaughwin and Alan Ngo. Thanks also to RACP staff Lianne Beckett, Michael Pooley, Elyce Pyzhov, Amy Nhieu, Shona Black, Abigail Marshall, Kerri Brown, Sandra Dias and Carol Pizzuti.

References

Key Documentation
Professional Performance Framework [MBA]
Recertification and continuing professional development [MCNZ]
Expert Advisory Group on Revalidation: Final Report [MBA]
Recertification, the Evidence for Change [MCNZ]
RACP 2019 MyCPD Framework [RACP]
FAQ: MyCPD Changes [RACP]
Creating a professional development plan [RACP]
Multisource Feedback [RACP]
Regular Practice Review [RACP]

Other Resources
Pomegranate Health Audio Appendix, Episode 37 [RACP]

Academic Papers
Accuracy of physician self-assessment compared with observed measures of competence [JAMA]
Feedback data sources that inform physician self-assessment [Medical Teacher]
Promoting physicians' self-assessment and quality improvement: The ABIM diabetes practice improvement module [JCEHP]
Toward authentic clinical evaluation: Pitfalls in the pursuit of competency [Academic Medicine]
Audit: How to do it in practice [BMJ]
Clinical audit: Still an important tool for improving healthcare [ADC Education and Practice]

Audit Guides
RACP Curated Collection: Clinical Audit
Clinical audit and peer review ideas
Non-clinical audit and peer review ideas
Best Practice Advocacy Centre Audit Templates [BPAC]
Clinical Audit Guides Videos [Medical Council of Ireland]
A Practical Guide to Clinical Audit [Pre-Hospital Emergency Care Council Ireland]

Transcript

MIC CAVAZZINI: Welcome to Pomegranate Health, a podcast for physicians of the RACP. I’m Mic Cavazzini, and this episode is the second of two on the reboot of CPD frameworks for Australian doctors taking place over the next two years. Today we’ll talk about the design and the value of certain CPD activities that will be required. Go back to the previous episode for the backstory as to why the Medical Board of Australia is following the trend set by regulators in New Zealand and around the world.

Just to recap: the MBA is raising the bar for the kinds of activities that must go into continuing professional development. The RACP has responded with a modified MyCPD framework that has only three categories of CPD activities instead of five. Category 1 includes all the educational activities that used to be found across the framework. Category 2 is known as ‘Reviewing Performance,’ and includes, for example, practice feedback from peers and patients. Category 3 is called ‘Measuring Outcomes,’ the most familiar example being clinical audits of medical charts.

Physicians will need to distribute their time on CPD activities across all three categories. This is déjà vu for New Zealand Fellows, and today I’ll talk to a couple of them about what can be learned from this process.

You might be asking, ’Why are such tools even necessary?’ Well, the academic literature shows that doctors are not great at assessing their own performance. A 2006 systematic review in JAMA reported that of the 20 studies comparing self-assessment to external practice review, only 7 of them reported a positive correlation. Doctors appear to pick and choose the data they use to self-evaluate and typically overestimate their own adherence to standards of care. Worryingly, it’s often the least competent clinicians who are most off base in their self-assessment.

TONY SCOTT: I think that’s human nature, really, and the longer a clinician’s been in practice the longer that disconnect becomes. And I think that, when combined with the natural changes that occur with aging in all of us, it does create a situation where a degree of insight is really not there.

MIC CAVAZZINI: That’s Tony Scott, clinical director of cardiology at Waitemata Cardiology north of Auckland. He has been leading the College’s investigations into the value and feasibility of tools like regular practice review and multi-source feedback.

TONY SCOTT: Many of us, as senior physicians, work in isolation, and so we tend to take the feedback that is available to us, which is that our patients do tend to get better. And we may not be aware how their responses to the treatments and the care we give them is different from other people's, so I think the issue of external feedback and independent feedback is quite important.

MIC CAVAZZINI: Yep. There’s a whole list of different activities under this Category 2—Reviewing Performance. You can just ask a colleague to shadow you on a ward round, or watch a consultation that has been taped with the approval of the patient. That’s especially useful, let’s say, when you’ve come back from a long period of leave and might be a bit rusty. I imagine a lot of listeners undergo these kinds of case reviews in an informal way in their team or across the hospital, and probably aren’t aware that it ticks the boxes that are expected by the Medical Council or the Medical Board.

TONY SCOTT: Yeah, I think that is something that we need to evolve further; I think that’s a very valuable process. I’ve been involved in some of those myself where I’ve, for example, sat in on a clinic in an afternoon and just been a fly on the wall, and then at the end of that fed back to them my observations of what I was seeing. People find this a little bit intimidating to think about if they haven’t done it before, but universally they appreciate it afterwards. It’s valuable for both parties, I mean, it’s interesting for me to see how another clinician actually conducts their business, so to speak, how they interact with patients, and I’ve certainly picked up things from it. And they also appreciate the feedback, particularly physicians who are working in more isolated settings where they may not have the benefit of interacting with colleagues on a regular basis.

And I know other departments that I’ve been involved with or reviewed have different systems with regard to their handover process where the outgoing clinician, if you like, rounds with the incoming clinician for the following week, and they round on all of the patients on that day. And that serves the purpose of providing a handover, a very good handover, but it also provides a feedback opportunity because they will actually see what the other clinician is doing.

MIC CAVAZZINI: In the academic literature, one of the concerns about being observed is the ‘Hawthorne effect’—the chance that people will put on their best show because they know they’re being watched. But another example of this kind of practice is to just have peers review your medical charts or your discharge letters. Is this something you see often, and how reliable is it in picking up behaviour?

TONY SCOTT: I think it has some value, or significant value, but it has to be couched within an appropriate reference system, and I think that the studies you’re referring to [Goulet 2007, EHP; Runciman 2012, MJA] were quite detailed in terms of how they structured the reviews and how they trained the reviewers, and when you do that then you get a very good correlation. I have used a tool previously, the Sheffield letter evaluation tool, but that had some limitations and it was difficult to apply to a highly-refined specialist setting. There are specialities that don’t involve direct patient contact to the same extent, and so it’s important for them to have the scope to do this as well, because they do participate in quality improvement and quality assurance activity in terms of their practices and systems, and those things are definitely recognised.

MIC CAVAZZINI: And then there are another couple of examples that listeners will be very familiar with, that they’re participating in every day—for example, discussions about critical incidents in quality and safety. Listeners might’ve been involved in peer review of journal articles or teaching activities. There’s no real hard and fast threshold of what the MCNZ or the MBA will accept; it doesn’t need to reflect on your practice every single time?

TONY SCOTT: Yeah. I think the basic principle is that it should be activity that is aimed at improving quality, whether that’s in your own practice or in the environment of the service you’re in. For example, our coronary cath-lab has a system of ongoing monthly review in which they look at all cases in which there has been a complication and go back and review why those complications have occurred, what corrective action is needed, et cetera. And it’s just done routinely, it’s part of business as usual. So it will involve an individual in a sense, in that because I participate in the cath-lab all of my procedures and complications are noted and fed back to me if there’s a particular issue. But it’s not specific to me, it’s not my practice. Now, the MCNZ were quite happy with that; they see it as a quality improvement activity and quite a valid peer review activity.

A number of our cardiac interventionists in our institution have set up a regular process where a log is kept during the case, so they have the CINE pictures that are taken and recorded digitally, and our chief radiographer will select a case at random; the peers will then go through the case, review what’s been done and feed that back directly to the operator. This is done in a formative way, it’s definitely a supportive process, but I think it’s a very good way of actually introducing peer review into your regular practice.

MIC CAVAZZINI: Now let’s look at multi-source feedback, or 360° appraisal. Multisource feedback has been road-tested in a number of different industries and it has been fed into the Canadian and U.K. recertification systems, so last year the College conducted a trial of this among 43 of its Fellows, along with six AFOEM Fellows and three overseas-trained physicians. The candidates each nominated 15 colleagues to fill in feedback forms anonymously. What kind of specific domains did they provide feedback on?

TONY SCOTT: Professional aspects in terms of knowledge; clinical skills; patient interactions; timeliness; self-management; individual perceptions of their own ability and so forth—so quite a wide range of domains. It is very much a professional assessment rather than an assessment of an individual's actual knowledge base, and I think that also makes it much more translatable, you know, translatable across the subspecialties, across the various fields of medical practice.

MIC CAVAZZINI: How would you choose the right people to invite to participate?

TONY SCOTT: This is something that people often talk about and are concerned about, you know—'What’s to stop me just ensuring that people that I know like me and they’re going to feed back to me in ways that I don’t find too challenging?’ I think it’s actually a matter of looking at the numbers; if you get enough people to provide feedback then those sorts of biases tend to drop out. And I’ve been through the exercise myself and I think it would be pretty hard for me to pick 15 people that I was absolutely certain would provide only positive feedback. Someone who I’ve not necessarily seen eye-to-eye with over the years, it’s worthwhile including those people, because if you don’t then you don’t really gain that much out of the exercise.

MIC CAVAZZINI: And then the candidates also needed to get feedback from 35 patients, and the patient questionnaires used the classic Likert scale about their impressions of the doctor during the consultation. So in response to the statement ‘My confidence in the doctor's ability is…’—they could answer poor, fair, good, very good, excellent. And even on this sort of simple Likert scale, some of the questions seem quite probing and profound, so, for example, ‘The opportunity the doctor gave me to express my concerns or fears was...’

TONY SCOTT: Yes, and they are very important to the patient in terms of what they remember about the interaction.

MIC CAVAZZINI: Do you think it’s possible to get this kind of insight in other ways, or is the multisource feedback a particularly useful tool?

TONY SCOTT: I think it’s hard to get it in other ways, in a consistent way, because you tend to get the information that you’re looking for. There was an occasion where someone who I spoke to was finding some significant challenges and difficulties coping with their work, not necessarily from a professional standpoint but due to other issues, and I don’t recall what they were, health or personal issues. The individual felt that they were still doing their job well, that this was not something that was impacting on them. But by looking at the feedback both from patients and from peers it became apparent that this had been noticed, that the individual's stress was recognised by their peers, and that was a revelation to the individual, quite a striking revelation that they had not appreciated. And so that would not have been highlighted any other way, I don’t think, or not until things had become very much worse.

MIC CAVAZZINI: It’s interesting that the majority of the candidates in the multi-source feedback trial did find that both the peer feedback and the patient feedback were relevant and accurate, and they had the perception of it being objective and constructive.

TONY SCOTT: Yes, sure.

MIC CAVAZZINI: But some of the participants in the trial were concerned that negative feedback could be a tipping point for some Fellows who were already struggling. Part of the multisource feedback process requires a debrief to occur, so how does the debrief work, exactly?

TONY SCOTT: The debrief allows the feedback that people take when they look at it themselves to be mollified and put into perspective. You know, there is a lot of literature around the output from multisource feedback in terms of what it means; how to interpret it; different patterns that are normal; the feedback provided by female patients as opposed to male patients, for example; the feedback provided by younger individuals as opposed to older individuals; and the differences across first visits versus follow-up visits. So a feedback provider can actually provide evidence-based information back to the individual in terms of how they may modify what they are doing with their patients, or how they’re interacting with their patients, to actually improve the quality of that interaction.

MIC CAVAZZINI: And then another concern was the time commitment involved in participating in the process itself. So it took on average nine hours of health service time to complete the multisource feedback for each individual. This was the time spent by candidates and colleague raters and advisors. How would you envisage making this a more efficient process?

TONY SCOTT: I think it can be made more efficient. I think that the thing that held us up initially was the actual selection of people to provide feedback. Once the feedback providers are selected in terms of peers then the rest of the process runs pretty smoothly, I think. Because it’s an online thing and people can sit at a desk and actually run through that quite quickly, I’ve done quite a few of them as someone providing feedback.

I think having a standardised way of getting patient feedback is going to be a bit more challenging, but I think that is doable also. It is very important that the individual doesn’t select the patients. My personal experience was that the best thing to do was to provide a stack of questionnaires and envelopes to the front desk when I start a clinic and just have them handed out to sequential patients. The patients don’t actually know about it until they come out afterwards; we don’t talk to them beforehand, so it doesn’t really colour their interaction with you. In the end it took, in my case, probably six weeks or so to collect all of the feedback from patients. So you may be aware of it on the first clinic, but you soon forget; they were just handed out over a two-month period and I completely forgot about it, I have to say.
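
To make the mechanics of this concrete, here is a minimal sketch of how the anonymised Likert responses from a multisource feedback round might be tallied into per-domain averages for the debrief. The domain names, rating labels and data below are invented for illustration; they are not the RACP trial's actual instrument.

```python
from collections import defaultdict
from statistics import mean

# Map the Likert labels from the forms to numeric scores (illustrative scale).
LIKERT = {"poor": 1, "fair": 2, "good": 3, "very good": 4, "excellent": 5}

# Each anonymous response: (respondent type, domain, rating). These domains
# are hypothetical stand-ins, not the trial's actual questionnaire items.
responses = [
    ("peer", "communication", "very good"),
    ("peer", "timeliness", "good"),
    ("patient", "communication", "excellent"),
    ("patient", "opportunity to express concerns", "fair"),
    # ...in the trial: 15 colleagues and 35 patients per candidate
]

def summarise(responses):
    """Average the numeric scores per (respondent type, domain) pair."""
    buckets = defaultdict(list)
    for source, domain, label in responses:
        buckets[(source, domain)].append(LIKERT[label])
    return {key: mean(scores) for key, scores in buckets.items()}

for (source, domain), avg in sorted(summarise(responses).items()):
    print(f"{source:8} {domain:35} {avg:.2f}")
```

Keeping peer and patient responses in separate buckets mirrors the point made above: the literature interprets patterns differently depending on who is providing the feedback.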

***

MIC CAVAZZINI: Narrative feedback is of course qualitative—there’s no other way to gauge concepts like cultural competence or teamwork. And in a study titled ‘Pitfalls in the Pursuit of Competency’ the authors from the University of Toronto show that it’s hard to isolate one domain from another. They’re all wound up together in our judgement of someone’s personality and their behaviour in a given environment.

But the authors write that ‘in the setting of clinical teaching units, a more subjective approach to evaluation may actually be desirable. In an effort to objectify in this setting, we risk the loss of authenticity. We measure what we think is important, simple, and feasible, but we may have stripped away too much and may not be capturing the essence of what it means to be a good doctor.’

As we heard in the previous episode, using a variety of tools will help focus the physician on the whole range of professional skills, and it is possible to quantify some of the structural and process-related aspects of daily practice using clinical audit. Any number of questions can be addressed with this technique. For example, ‘What are my prescribing patterns for a given presentation and how well do they match the gold standard of care?’ ‘How many of my patients attend follow-up appointments and what might explain the dropouts?’ A huge amount of information can be gleaned just from case notes and administrative records.

Here to tell us about how New Zealand Fellows have been making the most of clinical audit is Peter Roberts.

PETER ROBERTS: I’m Peter Roberts, I’m a general physician at Wellington Hospital, my previous experience has been in intensive care. I’m also the CPD Director for the RACP in New Zealand. We in New Zealand have had audit and peer review as a requirement by the Medical Council over the last five years.

MIC CAVAZZINI: So the MBA, very broadly, is calling this ‘Measuring Outcomes.’ I’m going to put you on the spot here, can you recall how the MCNZ defines clinical audit in its parlance?

PETER ROBERTS: Thank you. Well, their description is that an audit is ‘a systematic critical analysis of the quality of a doctor's practice’ and they go on to say that ‘it’s used to improve clinical care and outcomes and to confirm the current management processes to make sure that they’re up to date with current evidence and accepted consensus guidelines.’

MIC CAVAZZINI: And the CPD Unit here is collating all sorts of examples to try and cover all of the specialities represented by the College. So there are lots of different angles on clinical practice, but also behaviour, and that can be audited as well.

PETER ROBERTS: Well, exactly, and one of the audits of which I’m most proud was several years ago now, looking at our readmission rate. What I did was I went through our entire department, de-identified everybody, and just found out how many patients were coming back within 30 days. And then, looking at that, I took up a tool from the IHI Group in Boston, which is to ask questions about why we weren’t successful in getting this person back into their home, back into what I call ‘the nest’, and whether they were able to stay home because we’d got the right care package in place. I’d like to say that we’ve taken it up again; it’s time for me to turn the crank one more time. And that’s the thing about audit: it’s not just the one time that you do it, it’s setting it up in such a way that you get that regular feedback.
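
As a rough illustration of the arithmetic behind an audit like this, here is a minimal sketch that counts 30-day readmissions from a de-identified admission list. The record format and the figures are invented for the example.

```python
from datetime import date

# De-identified admissions: (patient id, admitted, discharged). Invented data.
admissions = [
    ("p1", date(2018, 3, 1), date(2018, 3, 5)),
    ("p1", date(2018, 3, 20), date(2018, 3, 24)),  # back within 30 days
    ("p2", date(2018, 3, 2), date(2018, 3, 6)),
    ("p3", date(2018, 4, 1), date(2018, 4, 3)),
]

def readmission_rate(admissions, window_days=30):
    """Fraction of discharges followed by a readmission within the window."""
    stays_by_patient = {}
    for pid, admitted, discharged in sorted(admissions, key=lambda a: a[1]):
        stays_by_patient.setdefault(pid, []).append((admitted, discharged))
    discharges = readmissions = 0
    for stays in stays_by_patient.values():
        discharges += len(stays)  # every stay ends in a discharge
        # Compare each discharge with the patient's next admission, if any.
        for (_, prev_discharge), (next_admission, _) in zip(stays, stays[1:]):
            if (next_admission - prev_discharge).days <= window_days:
                readmissions += 1
    return readmissions / discharges

print(f"30-day readmission rate: {readmission_rate(admissions):.0%}")  # 25%
```

In a real audit the rows would come from the hospital's patient administration system rather than being typed in by hand, as Peter describes below.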

MIC CAVAZZINI: I came across some good resources for developing clinical audits from some instructors for the Irish Medical Council, Ian Callanan and Niamh Macey. And Ian Callanan says that a clinical audit is not the same as a research study. A research question is often designed to reveal something new from scratch, you know, ‘What happens to a patient with this condition if we give this drug?’ Whereas an audit is done when we know what best practice is and we’re just wondering whether it’s routinely adhered to. So, what are some of the simple questions you’ve seen in the audits that are submitted to you?

PETER ROBERTS: I often get calls from colleagues saying, ‘But I don’t see patients, I don’t have anything that I can audit’—and so my role in that case has been to help people do a creative job of coming up with processes that meet the requirements of the Medical Council's expectations and at the same time actually are quality improvement processes.

For instance, one of my colleagues doesn’t practise his speciality in New Zealand, although he’s a top-flight interventionist in other countries, and he had nothing to go on as a specialist until I said, “Well, do you ever work in your role in your speciality?”, and he said, “Well, about five times during these months that I’m here in New Zealand I’ll see a patient in consultation.” And I said, “Well, what’s the outcome of that, what happens when you do that?” “Oh, I don’t give them any medications, I don’t write any prescriptions, but I write a letter to the GP.” And I said, “Well, what’s the outcome of that?” and he said, “I don’t know”—and I said, “There is your audit.”

So he talked to 10 people, called me back the next day and said, “You’ve changed my life.” Now that he’s talked to his patients, he wants to know how what he’s done is affecting them further down the line. So he’s getting a broader scope of what’s going on, and I think that fits perfectly with what the Medical Council is asking us to provide.

MIC CAVAZZINI: Let’s now go through how one collects this kind of data. Going back to Ian Callanan, he said that an audit doesn’t need to be as technical or as rigorous as a research analysis; it’s supposed to provide quick and dirty feedback. So you don’t need to think about whether you’re doing a case-control study or a longitudinal study; often you’re just scanning back over case records and tallying particular events. Is that the standard example that you'd see among New Zealand Fellows?

PETER ROBERTS: Yes, it is, and that’s a pretty straightforward process. Every organisation now, I think, has somebody whose speciality it is to help people do audit. So as long as somebody is associated with a hospital or a group practice or a clinic of some sort, it’s often quite a straightforward thing to say, can you give me all of these parameters, and you tell them what you’re looking for. And they can give you a list of patients and a couple of boxes, sometimes a lot of boxes, filled with the notes of patients where you can go and find the answer yourself.

MIC CAVAZZINI: Let’s maybe go through a worked example I’ve adapted from an article in the BMJ. They describe an imaginatively named Dr Black, who notices that some patients in his clinic with chronic obstructive airway disease had not been started on non-invasive ventilation even though it looked like they were candidates. So first he formulates his audit question: ‘What proportion of patients are being treated according to best practice?’ And the second step would be to develop a criterion statement that guides how to score the response. How would you break this question down in a systematic way?

PETER ROBERTS: That is where the guidelines come in, and you decide ‘These are the standards we'd like to work towards, are we actually doing it?’ And the real question is, ‘Are we doing it?’

MIC CAVAZZINI: So in this particular case study there might be a line in the sand, which is respiratory acidosis defined as pH below 7.35. You know, looking back over case records, did the patient meet this criterion despite maximal treatment? Then Dr Black reviews notes from the last 50 patients admitted with an acute exacerbation of COPD and finds that 40 of the 50 patients met the criterion for non-invasive ventilation but only 31 actually received it, so 77.5 per cent.
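
In code, Dr Black's tally could look something like the sketch below. The case records and field names are invented, but the arithmetic matches the worked example: 31 of 40 eligible patients treated, or 77.5 per cent.

```python
# Each audited case note reduced to the two audit criteria. Invented records:
# 'eligible' = pH below 7.35 despite maximal medical treatment;
# 'treated'  = non-invasive ventilation was actually started.
cases = (
    31 * [{"eligible": True, "treated": True}]     # met criterion, got NIV
    + 9 * [{"eligible": True, "treated": False}]   # met criterion, missed out
    + 10 * [{"eligible": False, "treated": False}] # did not meet criterion
)

eligible = [c for c in cases if c["eligible"]]
treated = sum(c["treated"] for c in eligible)
compliance = treated / len(eligible)

print(f"{len(eligible)} of {len(cases)} admissions met the NIV criterion")
print(f"{treated} received NIV: compliance {compliance:.1%}")  # 77.5%
```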

Now, the next step in the audit process would be to compare performance, and this is where it gets a little bit more interesting; where do you find realistic benchmarks to aim for? So in Alberta, Canada, for example, physicians can sign up to a service where they receive information on their prescribing rates of opiates and benzodiazepines as compared to the average for the province. Doctors in Australia can dig up similar data for their region and the whole country from the Atlas of Healthcare Variation. How have fellows in New Zealand approached this question of benchmarking?

PETER ROBERTS: Well, every country has that. In New Zealand we have the Health Quality Group, and they’re putting out broad-range overviews of what we’re doing. The issue is finding out where you as an individual fit into that. ‘What is my performance according to these guidelines?’ ‘Why do I diverge from what’s been recommended?’ And if we have good reasons to stay the way we are, we do. And that’s really where clinical nous comes in.
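
Comparing your own figure against a published benchmark can be as simple as the sketch below, which flags any indicator that diverges from the benchmark by more than some tolerance. All of the figures, and the tolerance itself, are invented for illustration.

```python
# (indicator, my audited rate, published benchmark rate). Figures invented.
indicators = [
    ("NIV given when criterion met", 0.775, 0.90),
    ("30-day readmission", 0.12, 0.10),
]

TOLERANCE = 0.05  # arbitrary threshold for 'worth a closer look'

for name, mine, benchmark in indicators:
    flag = "review" if abs(mine - benchmark) > TOLERANCE else "ok"
    print(f"{name:30} mine {mine:6.1%}  benchmark {benchmark:6.1%}  {flag}")
```

As Peter says, a flagged gap is a prompt for clinical judgement, not an automatic verdict; there may be good reasons to diverge from a guideline.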

MIC CAVAZZINI: Yeah, so there might be a number of explanations. In this example they say: in two of the nine patients who didn’t receive non-invasive ventilation there was no machine available and they were intubated instead; in two more patients the respiratory acidosis was thought to be metabolic; in four of the patients the acidosis was noted but the documented decision was to continue medical treatment; and in one patient, by the time the decision was made to start non-invasive ventilation, the patient was in need of intubation. So when you break these down, each of those would suggest a different improvement in practice, wouldn’t they?

PETER ROBERTS: There are resource limitations to what we can do, processes in which the demands are not appropriately met, and sometimes that means you’re going to have to apply up through the system to get more of that particular device. When you say, well, there were the following number of patients in whom we didn’t use non-invasive ventilation, we can say, “Well look, guys, we, the team, are going to do this and this and this in this situation. This particular person is concerned that having the mask on their face is just too much, and we’re going to have to learn how to better put the mask on the patient.” And what you do is you let the patient hold the mask and put it on their face and give them control.

MIC CAVAZZINI: So, yeah, you talked about resources, and then another thing to address might be educating all the staff on the team about interpretation of blood gases and the use of non-invasive ventilation. And finally you'd want to know if these changes have made an impact, so when would you go back and re-audit to see the impact of your intervention?

PETER ROBERTS: You’ve got to give things time to settle in, and also make sure that it’s bedding in not just with one generation of registrars, and our registrars usually change over on about a six-monthly basis. But you go through several generations, and if that process is bedded in and it becomes ‘what we do around here,’ it actually becomes part of the culture of the organisation, and we’ve changed the culture of the organisation.

MIC CAVAZZINI: And there might be listeners who are still scratching their heads, not sure what a good topic of interest might be. It’s been suggested that the Evolve or Choosing Wisely lists might be a good place to start. Most specialties have an Evolve list of low-value practices that could be reduced in frequency; so another spot test for you, Peter, can you run off any examples from the Evolve list for internal medicine?

PETER ROBERTS: Well, yeah, I can rattle a few off here—I mean, one issue is whether or not we need to do a full review of somebody who has an uncomplicated faint. Do we actually need to do extensive cardiovascular and EEG measurements and so forth? One of my colleagues is particularly focused on whether or not we’re measuring brain natriuretic peptide too much in terms of saying, ‘Yes, this person has congestive heart failure.’ Once we’ve established that, we don’t have to go back and measure it again and again and again, because whether or not the patient’s responding should be something that we can convince ourselves of clinically.

MIC CAVAZZINI: So, again, these are pretty simple things to tally from the patient records. ‘Was the patient admitted with this condition: yes or no? Were they given this workup: yes or no? Can we reduce that, in the case of the Evolve list?’

Now, some sort of housekeeping-type questions. There might be several members of the team working on the same audit, and these are often written up as trainee research projects, for example. How does the ownership of this paperwork turn out when different people might be submitting the same findings for different projects or clinical audits or CPD?

PETER ROBERTS: Well, I think if you’re going to line up authors of papers there has to be a statement about how much of the work is your work. But on the other hand, helping with the design, and being hands-on in the delivery of the services that you’re actually measuring, those things are all part of what it is to be a mentor and a Fellow. I think that we’re satisfying the requests of the Council and the MBA, satisfying what it is that they expect from us, by participating: in some cases we’re one of those who are studied, and sometimes we’re doing the study ourselves. So, you know, you’re either a subject or you’re an observer, and I think both of those things deserve credit. But more to the point, it’s taking away that satisfaction that I know better today than I did yesterday how to deal with this particular problem.

MIC CAVAZZINI: That was Peter Roberts, ending this episode of Pomegranate Health. Thanks also to Tony Scott for sharing his experience. There are some further outtakes from his interview in an audio appendix at our website, racp.edu.au/podcast, where you’ll also hear about regular practice review and the future role of professional development plans in the CPD of both New Zealand and Australian Fellows. For listeners new to these CPD tools, there are plenty of supporting resources there, as well as all the citations and resources mentioned in the podcast. The Best Practice Advocacy Centre in New Zealand has guides for over twenty different audits you can apply to your practice.

As for reviewing performance, tools like multi-source feedback will remain a voluntary option until the logistics of wide-scale implementation can be further investigated. Keep an eye on the College newsletters for more updates on the rollout of the new MyCPD framework and in-house support. As well as Pomegranate Health, you should explore our educational tools such as the eLearning platform and CLS lecture series.

I’m Mic Cavazzini. I hope you’ve found this episode informative. Please send any thoughts about the show to podcast@racp.edu.au. Bye for now.

Comments


Marcus Gunaratnam

Thank you for explaining what is required for the new CPD fulfilment. Following your advice I came upon SAIL, the Sheffield Assessment Instrument for Letters, which I hope to use for Category 3.

07 Mar 2020

En Ye Ong

Thank you for the useful elaboration of what is needed for the new CPD structure. More links in the transcript to various resources for the different peer review/outcome measures mentioned would be useful.

11 Aug 2019

David Lewington

Greetings, Has there been any discussion regarding how the new changes might be implemented in a medico-legal practice?

05 Nov 2018
