Like many well-intentioned movements, “evidence-based practice” has been hijacked, politicized, and corrupted. While in many instances the spirit behind the movement inspires providers to think more critically and question personal biases, in others it creates division and suppresses dialogue. “Science” and “evidence” are often reduced to rhetorical forms of grandstanding, especially on social media platforms that reward adversarial behavior and binary thinking. To be sure, I am not suggesting that medical providers “do things the way they’ve always done them” or that research and objective data should not inform clinical practice. Research, whose quality varies immensely, is not synonymous with evidence, however. Moreover, it remains unclear what individual or collective bodies should preside over “evidence”. Writing “evidence-based” in one’s Twitter bio doesn’t make one an arbiter of evidence, blue check mark or not.
Additionally, a trial’s design dictates the degree to which its findings are clinically generalizable. Physical training and rehabilitation research isn’t as easy to appraise as medical research for a well-established pathology or disease state. In the latter scenario, one can simply administer a medication and a placebo to a particular patient population and generally control for confounding variables. Something like “low back pain”, on the other hand, is much more ambiguous, as it is not caused by a malfunction at a single receptor but by a host of biopsychosocial factors. Low back pain is thus much more difficult to standardize as an experimental condition than a pathology with a more obvious cause and effect. Hence producing “evidence-based” treatment guidelines for low back pain is necessarily unsatisfying, since the construct itself is dubious. As an intervention, a medication is generally easy to standardize. What about something like “manual therapy”? I am not taking a position for or against manual therapy here, but in clinical practice, no two manual therapy experiences are ever really the same, unlike identical doses of a particular medication.
My recent experience on jury duty taught me more about how evidence is evaluated than any PowerPoint presentation or schematic pyramid. The burden of potentially sending somebody to prison renders the concept of evidence much less abstract. The accused in the case in which I participated was charged with several counts of stalking and of violating a court-issued restraining order barring any contact or communication with his ex-girlfriend. Since the defendant was alleged to have contacted or attempted to contact the accuser multiple times, he was on trial for multiple counts of stalking and violation of said order. Violating the order was a precondition for stalking; necessary but not sufficient. Stalking requires a specific set of behaviors as outlined by the court.
Unfortunately, the court did not provide us with an algorithm that revealed how to appraise and weigh various forms of evidence to produce a clean, quantifiable verdict. Formulas are comforting because delegating judgment to the math diminishes our sense of agency and responsibility. Instead, we were presented with various pieces of “evidence” whose relevance we had to weigh subjectively, individually and collectively. There was no way to evade responsibility for the defendant’s fate. In this case, the strongest evidence came in the form of eyewitness testimony. One such witness was a coworker of the accused. He testified that the defendant asked him for his cell phone at work so the defendant could text the alleged victim. Another witness was the mother of the family for whom the alleged victim worked as a nanny. This witness testified that she spotted the defendant in the lobby of her apartment building questioning the doorman about the alleged victim’s whereabouts.
In both instances, the testimony didn’t constitute evidence unless the jury deemed the witnesses credible. Did either of these witnesses have a reason to lie? The answer is a subjective determination, one that is easier to arrive at when multiple people testify to the same thing than when a single witness makes an isolated statement. Here, the collective testimony made each individual witness more credible. Additionally, the prosecution alleged that the defendant created alias Facebook accounts from which to contact the accuser. The prosecution established that the defendant writes in a characteristic way and sought to demonstrate that the manner of speaking in the alias accounts matched that of the defendant’s actual Facebook profile, one identified in court by the defendant himself. The content from the alias and actual accounts looked similar enough to me. While it was possible that five different people could attempt to contact the alleged victim using similar catchphrases and referencing similar shared events, it didn’t seem likely.
Moreover, a representative from the defendant’s phone and internet provider testified that these alias accounts originated from the same IP address as the account the defendant testified was actually his. Ultimately, the defendant was found guilty of all but one of the counts with which he was charged. This was the only count supported by a single data point with no additional corroborating “evidence”. While each of us (jurors) thought it was likely the defendant was guilty of this count as well, given the plethora of evidence attesting to his guilt on the other charges, we did not think that a single piece of evidence was sufficient to supersede the presumption of innocence.
Similarly, medical providers typically don’t dramatically alter clinical practice based on a single trial, but only after a series of trials supports the same conclusion. There is no way to know definitively whether we made the right decisions, as guilt or innocence in this instance was determined by a collective, subjective deliberation. The only means of evaluating our decision would be to rerun the same experiment with multiple juries and see if they reached the same conclusions. Even this means of evaluation is not objective in a way that is very satisfying.
Nevertheless, the jury system has stood the test of time and, while far from perfect, seems satisfying enough. Deliberating over scientific evidence is not much different. Perhaps we should be asking how we can make evidence-based practice more like jury duty. Jury duty requires collective skin in the game. Each individual juror is accountable to a defendant with a presumption of innocence and must justify his/her position to other jurors and ultimately to somebody who, if found guilty, might lose his/her freedom. While a patient’s well-being should provide sufficient incentive to practice in an “evidence-based” manner, the healthcare system is so perversely overcomplicated and dysfunctional that it can confound clinical objectives.
Medical providers who clearly harm patients with unsafe treatments are typically exposed and removed from the system. The more insidious cases of non-evidence-based care involve interventions that are merely ineffective rather than overtly unsafe. This is where lack of price transparency and third-party payers hinder systemic progress. Patients and providers are incentivized to just try stuff because, even though ineffective treatments aren’t actually free, the cost is hidden. In physical therapy and rehabilitation, most interventions have a very low potential for harm relative to those from other medical disciplines, which is why many practices that drive self-proclaimed “evidence-based” providers crazy remain pervasive.
Patients don’t read PubMed articles, but they do vote with their wallets. If a third-party payer covered the cost of the televisions we purchased, we’d be less likely to shop around for quality and value and would instead settle for the “in network” television. Many people put more thought into their televisions than their healthcare because in healthcare, intermediaries effectively make the important choices for us. To be clear, economic status should not be an obstacle to obtaining adequate medical care (what constitutes “adequate” is a whole other discussion). That said, we’re much more likely to see better “evidence-based” practices when the cost of the services we receive is visible.
Like a jury deliberation, the market is a collective conversation that reflects attitudes and values about various phenomena. Like jury duty, the market gives people skin in the game by forcing them to be educated consumers. Like jury duty, the market is imperfect. While freeing up the market won’t solve all of healthcare’s problems, it is preferable to a situation in which large, self-interested organizations co-opt policy makers to stifle competition, innovation, and consumer choice. As Nassim Taleb says in Skin in the Game, it’s easier to macro bullshit than to micro bullshit. Price transparency would presumably eliminate a great deal of macro bullshitting in medicine and create a localized, patient-centric alternative to help audit the “evidence”.