I have a confession to make. I don’t use that many outcome measures with my patients! I know admitting this may be shocking, almost sacrilegious, to some evidence-based clinicians, and it may leave me open to some criticism. But before you criticise, hear me out as to why I think outcome measures don’t help most patients or therapists who use them.
An outcome measure is simply something that quantifies and records a dependent variable, such as pain or function, that is being affected by an independent variable, such as treatment and interventions. There are literally hundreds of outcome measures available in musculoskeletal medicine, and they come in all different variations and varieties.
Some outcome measures are performance-based, such as the 30-second Sit to Stand Test. Some are observer-reported, such as the Functional Movement Screen. But many are patient self-reported outcome measures, often abbreviated to PROMs, which are either area-specific, such as the Shoulder Pain And Disability Index, or generic, such as the EQ-5D.
There is no doubt that outcome measures have helped healthcare move away from eminence-based practice, which relied upon the biased and unreliable observations and personal opinions of what clinicians thought did or did not work, towards more reliable and unbiased ways of recording what does or doesn’t work. However, I think many outcome measures in clinical practice today are not used well and offer very little for patients and clinicians.
The Early Days
From the start of my career as a physio, I was taught to use all varieties and variations of outcome measures with my patients to ‘prove’ the effectiveness of my interventions and justify what I was doing. And that’s exactly what I did for many many years, but these days I hardly use any outcome measures.
You may be thinking: how do I know if what I am doing is effective or working? You may also be thinking that I’m a huge hypocrite for always going on about the need for more evidence in physiotherapy, yet here I am saying I don’t collect evidence of my interventions with my patients.
Well, you’re right, this is contradictory behaviour, and if I am being honest this is not the first time I have been a big old hot mess of contradiction. I do think the physio profession desperately needs more evidence to demonstrate whether what it does is effective; I just don’t think my day-to-day collection of outcome measures with my patients helps achieve this in any way, shape, or form.
I don’t think most outcome measures are useful for most clinicians in normal everyday clinical encounters, for a number of reasons. First, a lot of the data collected with many outcome measures is rarely collated, analysed, or used for anything useful. Second, most outcome measures create frustration and unnecessary barriers between patients and clinicians. And last, outcome measures don’t actually measure what many think they are measuring.
Wasted Data
Unless you are involved in research or a clinical trial, most data obtained from outcome measures is not used for anything meaningful. Most of it is collected by clinicians on the orders of their managers, insurance companies, or other professional organisations, often for paperwork purposes, legal documentation, or just to be seen to be doing the right thing.
Outcome measure scores mean very little to patients and most outcome scores are not used to check if patients are actually getting better. Instead, a lot of outcome measure data is used to rate and assess if clinicians are effective rather than if patients are improving.
This is not what outcome measures are designed for. I have been appraised many times throughout my career by managers checking my patients’ outcome scores. I have even been given specific outcome measure targets to achieve in some clinics I’ve worked at.
There’s an old saying that goes “as soon as a measure becomes a target, it ceases to be a good measure”. This is because when a measure becomes a target, our biases and errors increase during the recording of the measurements as we attempt to hit the target.
This affects both the reliability and accuracy of the outcome measure and gives us a skewed picture of how effective treatments and clinicians are. There is another saying that goes “what is mismeasured is mismanaged”, meaning that misusing outcome measures to rate clinicians rather than patients often leads to patients being mismanaged.
They Don’t Do What You Think They Do
Many clinicians collect outcome measures in the mistaken belief that they prove the ‘effectiveness’ of their treatments and interventions. They don’t. Outcome measures measure patients’ outcomes, not the effects of the treatments or interventions given to them. That sounds bloody confusing, so let me clarify it a little more.
Basically, there are many things that can affect a patient’s outcome other than just the treatment they receive. Things like time, beliefs, expectations, reassurance, placebo, and many other factors which I have discussed before. Outcome measures therefore also ‘measure’ the effects of these things as well, not just the effects of treatments received.
A common misunderstanding made by many clinicians is that when an outcome measure score improves or worsens, it must mean their treatment was the cause of this. It doesn’t. Just because a patient improves or deteriorates after treatment, it doesn’t mean it was because of the treatment, as patients can and do improve and deteriorate despite the treatment they receive. (ref)
Many also misinterpret patients’ responses recorded in outcome measures. For example, the Pain Self-Efficacy Questionnaire (PSEQ) is often used to assess patients’ fear of movement when doing tasks in pain. However, the PSEQ only measures a patient’s confidence to do the tasks described in the questionnaire, and this doesn’t transfer well to other tasks.
Just because a patient scores high in self-efficacy to do household chores on the PSEQ when they have back pain, this doesn’t mean they will have high levels of self-efficacy to do kettlebell swings or bird-dog exercises during their rehab. It’s important to remember that outcome measures only measure what they are measuring; they do not inform us about other things.
Another factor to consider is that patients may fill out outcome measures based on information they have to recall from the past, which may be poorly remembered and inaccurate. They may also answer questions in a specific way due to concerns about the effect their answers may have on future treatments or interactions with clinicians.
Unnecessary Barriers
Outcome measures also create a lot of unnecessary barriers to patient-clinician communication and to building therapeutic relationships. I often see a lot of confusion, frustration, and apathy in patients as they struggle to fill out questionnaires and answer a long list of boring, meaningless questions, or try to give a simple single word or numerical score for something that is complex and complicated.
For example, does the number 7 really explain and describe the severity, intensity or frequency of an individual’s experience of back pain? Does ‘sometimes’ or ‘occasionally’ on a tick box really convey the frustration and anxiety someone has with their sleep being disturbed?
Some outcome measures can also take 15-20 minutes to complete, and often the questions mean very little to those filling them out. A classic example I come across is the Oxford Shoulder Score (OSS), which I have to use a lot in my current role. There is a question in it about the level of difficulty combing or washing hair, which always gets bald men rather annoyed and perplexed!
Outcome measures cannot and do not capture the complexity or uniqueness of individuals’ pain and disability well, yet many clinicians use these measures to identify patients’ problems and disabilities instead of actually asking them what their problems and disabilities are.
Just because someone indicates on a questionnaire that they have a lot of difficulty washing their hair, or a lot of back pain when doing household chores, this does not mean these are meaningful goals for them to achieve. They may be far more concerned about not being able to play with their kids, sleep through the night, or do their job than about their hair or household chores.
Not Completely Useless
However, although I’ve been very critical of outcome measures and think they are not used well and offer little for most, I don’t want you to think I am completely dismissing them, or that I don’t collect any information from my patients at all.
As I have said, I think most outcome measures are best suited for research purposes and clinical trials. However, there is one outcome measure that I use regularly with a lot of my patients which is validated, quick, and simple, but more importantly doesn’t create any unnecessary barriers or frustrations with my patients, and this is the Patient-Specific Functional Scale (PSFS).
The PSFS involves asking the patient to identify one to three things that are important and meaningful to them and that they are currently having difficulty with because of their pain or issue. This could be anything from brushing their hair to playing tiddlywinks; it doesn’t matter, it’s up to the patient to decide, not a bloody questionnaire.
They then rate how difficult it is for them to do these things on a simple scale from 0 to 10, with 0 being impossible and 10 being no problem at all. These tasks are then reassessed at various time points throughout their treatment to see how their progress is or isn’t going. I find this to be one of the simplest, most meaningful, and least obtrusive outcome measures out there, and one that I think should be used more often in clinical practice.
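To make the scoring concrete, here is a minimal illustrative sketch in Python of how a patient’s chosen activities and their 0-10 ratings could be recorded and rechecked at review. This is just one way of organising the numbers; the class and function names are my own invention, not part of any official PSFS tooling.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PSFSRecord:
    """One patient's PSFS: 1-3 self-chosen activities, each rated 0 (unable) to 10 (no problem)."""
    activities: list[str]
    scores: dict[str, list[int]] = field(default_factory=dict)  # activity -> ratings over time

    def rate(self, activity: str, score: int) -> None:
        """Record a 0-10 rating for one of the patient's chosen activities."""
        if activity not in self.activities:
            raise ValueError(f"'{activity}' is not one of this patient's chosen activities")
        if not 0 <= score <= 10:
            raise ValueError("PSFS ratings run from 0 (unable) to 10 (no problem at all)")
        self.scores.setdefault(activity, []).append(score)

    def latest_average(self) -> float:
        """Average of the most recent rating for each activity (a common way to summarise the PSFS)."""
        return mean(ratings[-1] for ratings in self.scores.values())

# Initial assessment, then a review a few weeks later
psfs = PSFSRecord(activities=["playing with the kids", "sleeping through the night"])
psfs.rate("playing with the kids", 3)
psfs.rate("sleeping through the night", 4)
psfs.rate("playing with the kids", 6)
psfs.rate("sleeping through the night", 7)
print(psfs.latest_average())  # 6.5
```

Averaging across activities gives a single number to compare between reviews, while the per-activity ratings keep the measure tied to the things the patient actually said mattered to them.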
Summary
So there you go, that’s my quick look at why I think many outcome measures suck in physio. To summarise…
* Don’t use outcome measures to measure a clinician’s effectiveness
* Don’t think outcome measures only measure a treatment’s effectiveness
* Don’t ask bald men to fill out the Oxford Shoulder Score
* Do use the Patient-Specific Functional Scale more
As always thanks for reading
Adam
Patient-reported outcome measures/surveys are just driven by the payers to try to justify (not) paying you (at least that’s my perception in the US). Every few years Medicare comes out with a new measurement program that they say will tie to more/less payment for better outcomes, yet these measures are reported by the clinicians/businesses they would be paying, so… What do you think is gonna happen lol?
Measures like the 6MWT, 5-minute run/aerobic max, 30-second sit to stand, TUG, DGI/FGA, and other “functional” measures have some validity, I think, to help patients buy into what they may need to work on, starting dosage, periodic assessment, etc. Checking “strength” via dynamometer, or even just weight machines, dumbbells, and barbells, is good too; I think it ties easily into subsequent treatment prescription and makes it easier to see if some sort of progression is happening.
Subjectively, RPE/RIR for exercises (definitely use; personally I record the last set of the lift but ask after each set to help determine weight selection), maybe total session RPE (personally rarely use), and maybe even a Life RPE (“How hard is life overall for you right now?” Lol, taken at the beginning of the session; I think I might start doing that one).
Those are my thoughts. Peace out brother! Go Niners!
Nick
Love that, Nick… I talked about a life RPE with Greg on the last NAF Physio Podcast! I think it could be a good PROM.
Thank you Adam. Very clear. Must admit, in clinic, for a good while, I’ve tended to ask pts what it is they want to see helped and still get surprised by some of their thoughts. Also, even if we don’t hit their preferred target, they may often find other changes occur that they are pleased with instead.
Lol. I had just done my license renewal for Canada. We have to list what we learnt in the last year. With great trepidation, I listed that I learnt I need to focus more on what matters to patients and less on outcome measures and numbers. A heretic! I’d interviewed a patient with an 85% total body surface area burn about her experience and what she wished HCPs knew. My take home… I’d gone too far, caring about the sensitivity and specificity of the measure, and needed to get back to basics and to what I should have seen all along: what matters to the patient in front of me. Patients do not care if their elbow range improved 20 degrees, they care if they can eat! Function! So thank you for the blog. If I get audited by my association for not using more outcome measures, I will send them your blog… As always, thank you for making us think.
Ha, thanks Alex. I think, however, you may get into more trouble rather than less if you send them my blog! All the best, Adam
Misuse and misrepresentation of outcome measures suck!
1. If data from outcome measures is not used, that is the fault of the person collecting the data, not of the outcome measure
2/3. If the wrong outcome measure is selected because it is too long or confusing (creates a barrier), or because it measures the wrong construct (doesn’t measure what you think), that is not a failure of ‘outcome measures’, it is a failure of the person selecting it
– Outcome measures can be used for the wrong purpose, this is not a failure of outcome measures
– Misinterpretation of outcome measurement data is also not a failure of outcome measures
– No one suggests that outcome measures should replace a conversation with a patient, nor that they assess everything that might matter
– PSFS is a PROM – what you are saying is that people should just use your favourite PROM
Systematically collecting outcome data might be useful for:
a. overcoming a clinician’s recall bias in the event that they use recall of their past patients to inform management decisions
b. demonstrating progression for motivation and feedback for patients
c. checking that patients are progressing in the way expected given the treatment and prognosis
The onus is on the user to select the right outcome measure, and use the information appropriately.
Data aren’t perfect, but the question is whether they do the job (a, b, c above) better than, or provide a useful adjunct to, a clinician’s recall of how their patients are going.
Hi Steve, thanks for the comments, I’m flattered you even bother to read my ramblings. I understand what you are saying, and I think and/or hope that is the point I was trying to convey. Outcome measures themselves are not the issue; it’s the poor, incorrect, or meaningless use of them, and also, in my opinion, a lot of them are overcomplicated and poorly designed. As a day-to-day jobbing clinician I have developed PROMs apathy and frustration when I see no benefit to myself or my patients from collecting the data. This is my main issue. I have no problem with collecting relevant, meaningful data IF I can see it having a positive and meaningful effect on my practice or my patients… often I don’t! And lastly, I have NO control over many of the PROMs I am asked to collect; this is usually dictated by management or insurance companies, and often, I believe, incorrectly!
Cheers Adam, appreciate you interacting.
My problem is that the message from the post (outcome measures suck) is not the same as what you are saying here (outcome measures themselves are not the issue).
I get that frank, bold statements are your style, but I’d ask for some more thought and perhaps some responsibility for the probable interpretation by people who follow you. You have a platform, and I suspect that what you say will be used as a license to bypass using validated outcome measures at all.
Impeccable timing, Adam. In a few hours, I will be sitting in a meeting discussing this very topic. For the last 4-5 years, our hospital has strongly encouraged our clinicians to collect the ODI, LEFS, QuickDASH, NDI, and NPRS for the purpose of getting rehab practitioners accustomed to using them. At least 50% of our clinicians have voiced their opinion on how much of a meaningless waste of time they are to collect. I, on the other hand, have embraced them and use the ODI, STarT Back, OREBRO, and PSFS in a Spine Care Center setting. For me, the value of an ODI is that it can be compared across the variable treatment styles of clinicians in our outpatient rehab facility. I have been able to better visualize the outcomes of passive vs active based treatments. Patients exposed to more passive-based treatments appear to deadlift (or farmers carry) less than clients treated with an active treatment. The STarT Back tool provides me with information on what topic to educate my clients on while reducing the chance of overutilization of my services. The OREBRO gives me a clue as to which component(s) in the biopsychosocial model I need to focus on first to try to reduce my clients’ future disability. The 10 m gait speed gives me an idea of their risk of falling and overall health. The readiness ruler gives me an idea of how to guide my motivational interview to set appropriate goals founded on the PSFS. I make it a point to thank my patients for providing all this information, since I believe it will help guide me to provide the best plan of care for them.
This has been my mantra for several years now:
“Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” – H. James Harrington
So now my question for tomorrow’s meeting is: “do we keep having clinicians push self-reported outcome tools to patients until they find purpose, or do we not bother with using them?” I personally feel that the PSFS and the GROC are the two most important outcome tools for the client, while the others I mentioned at the very top of my response are best for comparing treatment techniques, listening skills, and high- vs low-value treatment selection between clinicians. Being able to identify which clinicians shine at patient-reported outcome measure changes may help others replicate the behaviors of these champions – but that promotes a clinician-centric environment rather than a patient-centered one.
Thanks to your blog, I have more insight into how little value many clinicians see in these outcome tools, and into the idea that quantifying goals is more important than region-specific outcome measures.
Ideas to share with my team and supervisor:
Having the clinician or client graph progress towards a goal of achieving 150 minutes of moderate, or 75 minutes of strenuous, exercise/activity each week (a rough sketch of this kind of tracking follows after this list).
Maybe having cardiac patients track pushups until they can achieve 40 consecutively.
Having patients track gait speed or, like Nick mentioned above, grip dynamometry, and even a 5-rep max.
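As a rough illustration of the first idea above (not something from Ken’s comment), here is a small Python sketch of tracking weekly minutes against that 150/75-minute target, assuming the common convention of counting one strenuous minute as two moderate minutes; the function names are made up for the example.

```python
# Weekly activity minutes vs the 150-minute moderate / 75-minute strenuous target,
# assuming the usual equivalence of 1 strenuous minute to 2 moderate minutes.
WEEKLY_TARGET_MODERATE_EQUIV = 150

def moderate_equivalent_minutes(moderate_min: int, strenuous_min: int) -> int:
    """Convert a week's activity into moderate-equivalent minutes."""
    return moderate_min + 2 * strenuous_min

def weekly_summary(moderate_min: int, strenuous_min: int) -> str:
    """Summarise one week's activity against the target."""
    total = moderate_equivalent_minutes(moderate_min, strenuous_min)
    pct = 100 * total / WEEKLY_TARGET_MODERATE_EQUIV
    status = "target met" if total >= WEEKLY_TARGET_MODERATE_EQUIV else "below target"
    return f"{total} moderate-equivalent minutes ({pct:.0f}% of target, {status})"

# Example: 90 minutes of brisk walking plus 20 minutes of running in a week
print(weekly_summary(moderate_min=90, strenuous_min=20))
# -> 130 moderate-equivalent minutes (87% of target, below target)
```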
Well thanks to you, I may have too much to say tomorrow afternoon!
Wishing you the best.
Thanks for the comments Ken and all the best with the meeting!
Hi Adam, your recent blogs suggest a softening towards an ‘evidence informed’ approach to patient management. It’s where I’m happiest ! BW Gary.
I completely agree with these assessments. I have found my views shift dramatically over the past three years. I used to be of the mindset that these measures were THE marker of clinical effectiveness. For the reasons you have mentioned, that belief is gone. I do think they carry value, primarily in the research space as you say, but also in the overall perceived improvement of the patient when paired with other measures, although the value is limited and dependent on the type of measure used (like the PSFS). Another problem with these measures is that they rely on recall (which is poor), are a snapshot in time, and are heavily biased on the patient side too (if I purposely score low, maybe I can score a massage or hot pack).
The other challenge is the current reimbursement model in the US. Insurance companies and case managers track outcomes and determine visit approval based on the outcome data. As you say, this can lead to biased collection and reporting. Basically, it is a giant mess. In the end, patient care is never black and white. We cannot rely on single tests or measures to determine success. Thanks for sharing this. I always appreciate your candor and refusal to dance around issues.
I appreciate you bringing this issue up that in my mind ties in well with the recent blog post you offered on contrived complexity in the PT profession. The obsession with quantitative measures may be more about the chronic attempt to quell anxiety in PT as it tries to justify its own existence by doing something, anything, that supports the image of “fixer”. Throwing spaghetti on the wall to see what sticks works the same way.
Thank you for the thought-provoking content, Adam. In my place of work, we have to use a pain scale as an outcome measure for every MSK patient (except chronic patients), meaning that for every patient visit there must be an improvement in the patient’s VAS, which makes me wonder: am I a pain therapist or a physical therapist?