There is a saying in many professions that the three most dangerous words are ‘in my experience’, because relying on it can cause big issues through personal bias and general ignorance. However, I also think there are another three words being used a lot recently that can be just as dangerous: ‘research has shown’!
Over the past few decades healthcare has been trying to move away from individual experience-based practice towards a more scientific evidence-based practice, in an effort to stop clinicians’ biases adversely affecting patients’ care and treatment. This is true for physiotherapy, where clinical experience has in the past been, and still is today, very much respected and revered despite patient outcomes being quite poor.
Now, before you all go crazy in the comments section and start accusing me of ignoring the role of experience or saying it’s completely useless, I am not. Clinical experience is useful for many things; in fact, I think for some things it is probably the most reliable tool we have (ref). However, clinical experience has issues and should not be solely or fully relied upon. History is littered with examples of experts being wrong and huge mistakes being made because they relied on their experience (ref).
Fortunately, evidence-based practice within physiotherapy is slowly being adopted, and there is a shift of more and more physios reading, engaging with, and participating in research, which is a good thing. But there are issues here as well, and it needs to be recognised that using the evidence to help guide our practice can have just as many pitfalls as using experience if it’s not implemented carefully and sensibly.
Unfortunately, I see and hear many physios using research and evidence not so carefully and not so sensibly. In fact, I see many physios using research much like a drunk uses a lamppost: they use it more for support than illumination. Many physios (and other healthcare professionals) like to find research to support what they already think and justify what they already do, or want to do.
This is NOT evidence-based practice! Many healthcare professionals think that if they find a published research paper showing a positive effect of a treatment ‘working’ (usually with a p-value of <0.05) then they are evidence-based clinicians. They are NOT, and this is NOT how evidence-based practice works.
Evidence-based practice is the practice of using the BASE of the evidence to support our methods, treatments and interventions. It’s not using just one, two, or even a few papers. If you base your practice on one, or two, or a few research papers it’s usually because you have only read one, two, or a few research papers.
Many healthcare professionals tend to cherry-pick the evidence base, using what they want and ignoring what they don’t want. Most use PubMed like they use Google: they do a quick keyword search, often don’t go past the first page of results, and click on the first link that catches their eye. This means they often find what they want to know rather than what they need to know. It also means they get skewed views and beliefs about what they think works, and what they think doesn’t.
One of the issues with the evidence base is that there are literally papers published to support anything you want, especially in the field of physiotherapy, which is notorious for publishing low-quality research (ref). For example, you can find papers showing how woolly pants cure low back pain, ultrasound applied clockwise is more effective, and even spinal manipulation reverses death. There is literally citable research out there to support whatever you want, or don’t want, such as K-tape helps, or it doesn’t; manual therapy helps, or it doesn’t; even exercise helps, or it doesn’t.
Also, there is literally a shit tonne of shit research out there. This quagmire of turd being produced daily means you do have to wade waist-deep through the crap to find the good stuff, which is time-consuming, frustrating, hard work, and not many can, or want to, do it. This often means good-quality research can be hard to find and does go unnoticed, whereas bad research is very easy to find and often gets promoted.
Many say that research is broken because of this, but that’s nonsense. Research isn’t broken, it’s just very hard to do well, and often it’s abused and misused by those who don’t understand it. Research is simply a tool, and like any tool, it’s only as good as the person using it.
Unfortunately, levels of scientific literacy and understanding are terrible within the general public, with most people unable to distinguish good-quality, rigorous, ethical research from poor-quality, flawed, biased research. And many healthcare professionals are not that much better either.
It still amazes me how many healthcare professionals hold a Bachelor of Science degree yet couldn’t tell you the difference between specificity and sensitivity, reliability and validity, efficacy and effectiveness, or statistical significance and clinical meaningfulness. And don’t get me started on many clinicians not understanding the role of p-values, effect sizes, blinding, control groups, randomisation, power, publication bias, data mining, p-hacking, and the replication crisis.
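That last distinction, statistically significant versus clinically meaningful, is worth seeing with numbers. Here’s a toy simulation (entirely hypothetical figures, pure Python standard library) of two large treatment groups whose pain scores differ by just one point on a 0–100 scale: a difference no patient would ever notice, yet one the sample size alone makes ‘statistically significant’.

```python
import math
import random

random.seed(42)

# Hypothetical pain scores (0-100) for two large groups.
# True mean difference is only 1 point -- clinically trivial --
# but with n = 20,000 per group it will test as "significant".
n = 20_000
group_a = [random.gauss(50, 15) for _ in range(n)]
group_b = [random.gauss(51, 15) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

ma, mb = mean(group_a), mean(group_b)
sa, sb = sd(group_a), sd(group_b)

# Welch's t-statistic: the mean difference scaled by its standard error.
# It grows with sample size even when the difference itself is tiny.
t = (mb - ma) / math.sqrt(sa**2 / n + sb**2 / n)

# Cohen's d: the mean difference in units of pooled standard deviation.
# It does NOT grow with sample size, so it exposes how small the effect is.
pooled_sd = math.sqrt((sa**2 + sb**2) / 2)
d = (mb - ma) / pooled_sd

print(f"t-statistic: {t:.2f}")  # large, so p is tiny: "significant"
print(f"Cohen's d:   {d:.2f}")  # ~0.07, a trivial effect
```

The point of the sketch: a p-value answers “is there probably some difference?”, while an effect size answers “is the difference big enough to matter?”, and only the second question tells you whether a treatment is worth a patient’s time.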
Some really useful resources for better understanding all the issues I’ve just mentioned can be found here and here; also check out the Science Daily website and the Everything Hertz podcast, as I often find these good resources for improving your understanding of research, statistics, and the basic scientific method.
The other issue I find with evidence-based practice is that many think it will give them the answers to the messy and confusing questions they have about how to help and manage people with pain and pathology. It won’t; if anything, the research and evidence can make things harder and more complicated.
A common misunderstanding and frustration of evidence-based practice is that it will give clear and definitive yes and no answers, or simple do’s and don’ts. It can sometimes, but often it doesn’t. Research never really proves anything true, right, or correct. Research only tells us what’s more probable, more likely, and less wrong!
Research should actually give a clinician an appreciation of uncertainty and an ability to recognise the probability of what’s less wrong and what’s more right. But only if they’re able to think critically, have scientific literacy, and be tolerant of uncertainty, which unfortunately many are not.
Many healthcare professionals lack tolerance of uncertainty because they don’t want to appear ignorant or stupid, which uncertainty can make you appear. No patient wants to hear or see a dithering, dallying clinician stuttering and stammering, scratching their head, wondering what to do next.
A lack of tolerance of uncertainty is also due to societal pressures and deeply rooted constructs that healthcare professionals should always know what to do when patients come to see them. It is still assumed by many that the clinician alone decides what to do and how to proceed, rather than it being a shared process between the patient and the clinician.
What often happens due to these issues is that many clinicians hide their uncertainty by abusing and misusing the research as mentioned above. They go and find some research, no matter the quality, that justifies what they do, or want to do. This saves time and avoids them having to have those difficult and awkward discussions with patients about ALL the treatment options and ALL of the pros and cons of them.
As I said at the beginning, there is no doubt that clinical experience can be useful and important in some situations, but using it alone is fraught with problems. Equally, there is no doubt that relying on poorly conducted, biased, and methodologically flawed research has just as many problems.
Healthcare needs to be careful that phrases like “We know this works” or “This is what we have always done” are not mindlessly replaced with “the evidence says” or “research has shown”. Healthcare needs to try to improve levels of scientific literacy in all its professionals, as well as trying to reduce the amount of poorly conducted research published or referred to.
In fact, I would say that any research trial that doesn’t show adequate blinding and sufficient power, and more importantly doesn’t have a control, sham, or placebo comparator, should now simply be ignored.
As always, thanks for reading.