Lay of the Land
There have been some recent blog posts yet again debating the concept of evidence-based medicine (EBM).
This blog is titled "Evidence in Medicine" not because "Evidence Based Medicine" was already taken as a URL and blog title, but rather because I didn't want some of the baggage and implications of the EBM label.
EBM is frequently defined as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients", and you would think it would be hard to object to that. And yet....
One blogger, Laika Spoetnik (Jacqueline), recently wrote a piece complaining about the attacks on EBM, and mentioned the humorous parachute example that I had referred to in an earlier post. She wrote about the parachute piece:
I found the article only mildly amusing. It is so unrealistic that it becomes absurd. Not that I don't enjoy absurdities at times, but absurdities should not assume a life of their own. In this way it doesn't evoke a true discussion, but only worsens the prejudice some people already have.
(As I'd suggested in that same earlier post, EBM folks are not necessarily known for their great senses of humor.)
In response to Jacqueline's post, Kimball Atwood wrote a thoughtful piece explaining his problems with EBM, and linked to an earlier piece he'd written on the same subject. I would encourage people to read Dr. Atwood's posts.
Personal Systematic Reviews
So, how could any reasonable person possibly object to the use of best evidence in making medical decisions?
The answer is that they could not, but that EBM is not necessarily synonymous with using best evidence, despite the definition above. If I'd been writing a few years ago, I would have agreed with a number of the posters attacking EBM, since many of the people I have seen promulgate EBM do so in a way that misses some basic truths.
EBM, at least as it was promoted in academic internal medicine circles in the 1990s, was very focused on a specific hierarchy of evidence (RCTs > cohort studies > case-control studies > case series) and on the belief that we should train primary care internists that the right way to answer a clinical question they encountered was to perform a literature search along with their own on-the-fly systematic review. I'll come back to the hierarchy of evidence in a bit.
The idea of teaching people to perform personal systematic reviews was being pushed at a time when I was co-directing the resident journal club at my hospital. This was couched as "teaching EBM", and the various residency course directors around the country seemed to be spending a lot of time telling each other what an important activity this would be for academic general internists to engage in. The other director of the journal club and I pointed out repeatedly that real systematic reviews take months, and that it was a bizarre notion to think that the right way to answer a question that arose in a clinic would be to try to do this on the cheap. A quick review of randomized trials is likely to miss important pieces of evidence; when residents try it, they often end up focusing on the most recently published trial rather than the entirety of the available knowledge.
We argued, instead, that for most such questions residents would be far more likely to find the right answer in a textbook than by trying to interpret a body of literature in between seeing patients. (These days I would make the same argument but replace "textbooks" with "electronic resources"; however, that raises a conflict of interest with one of my day jobs, so I won't push that particular argument any further.)
As a result of our resistance, this curriculum never quite got implemented for our journal club, but my sense was that this put our hospital in the minority. I think many programs in the late 1990s and early 2000s tried to convince residents that it was sensible, practical, and necessary to perform their own systematic reviews whenever confronted with a clinical question.
Hierarchies of Evidence
Coming back to the hierarchy of evidence issue, it's clear that a system that focuses too much on the hierarchy mentioned above will reach absurd conclusions. For instance, one might decide that we have higher quality evidence that an HIV vaccine is protective than that people who wish to avoid lung cancer should not smoke. We have essentially no evidence from RCTs in humans supporting the latter hypothesis, and yet we should believe it to be true as surely as we believe anything in medicine, while it remains quite uncertain that an HIV vaccine is of any benefit despite the published RCT with a "statistically significant" p value.
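To see why a single "significant" trial can still leave a hypothesis uncertain, consider a rough Bayesian back-of-the-envelope calculation. All of the numbers below (the prior, the power) are illustrative assumptions of mine, not figures from any actual vaccine trial:

```python
# A hedged back-of-the-envelope sketch (illustrative numbers, not from
# any real trial): how much does one p < 0.05 result move us when the
# hypothesis started out unlikely?

prior = 0.10   # assumed prior probability the vaccine truly works
power = 0.80   # assumed chance of a significant result if it works
alpha = 0.05   # chance of a false-positive "significant" result

# Bayes' rule: P(vaccine works | significant trial result)
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"Posterior probability of real benefit: {posterior:.2f}")  # ~0.64
```

Under these assumed inputs, even a significant p value leaves the posterior probability well short of certainty, which matches the intuition that one positive RCT on a long-shot hypothesis is suggestive rather than conclusive.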
And when effect sizes are large enough, even a couple of clinical events may provide higher quality evidence than some RCTs examining minor benefits of therapies. How many people have to survive meningococcal meningitis after treatment with penicillin before you can be quite sure that penicillin is lifesaving?
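A hedged sketch of that intuition: suppose, purely for illustration, that untreated meningococcal meningitis kills about 80% of patients (an assumed figure for the arithmetic, not a claim from this post). Then even a handful of consecutive survivors after penicillin becomes very hard to explain by chance alone:

```python
# Hedged illustration with an assumed untreated survival rate of 20%
# (a made-up figure for the sake of the arithmetic): the probability
# that k patients in a row all survive if penicillin did nothing.

untreated_survival = 0.20  # assumed survival rate without treatment

for k in range(1, 6):
    p_by_chance = untreated_survival ** k
    print(f"{k} survivors in a row: p = {p_by_chance:.5f} "
          f"if penicillin does nothing")
```

By five survivors, the "penicillin does nothing" explanation has a probability of about 0.0003 under these assumptions, already far smaller than the p values of many formally "significant" trials. That is the sense in which a large effect size can outweigh the usual hierarchy.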
Dr. Atwood turns to the specific issue of biologic plausibility in the hierarchy of evidence as it applies to homeopathy. I actually agree with those who are suspicious of biologic plausibility arguments, since smart people can make almost anything sound biologically plausible. Dr. Atwood focuses more on biologic implausibility, and I tend to find those arguments more compelling on average. In the case of homeopathy, he argues, it would be pointless and a waste of resources to perform RCTs examining homeopathic remedies. This is a fair point: if homeopathy is true, then our understanding of chemistry and physics is incorrect, so it would require extreme levels of proof before we should begin to entertain the notion that a homeopathic preparation could be of benefit.
(I plan to write a future post about the burdens placed on the scientific medicine community by people who promote unproven remedies; valuable resources must then often be spent to prove the lack of benefit of the now widely popular yet worthless preparations and interventions.)
EBM and GRADE
Given what I've written you might conclude that I lean toward the anti-EBM camp. As mentioned, that's likely where I would have placed myself a few years ago. In the interim, though, I've started working with the GRADE group and in particular with Gordon Guyatt.
Through these interactions, I've come to realize that when a really smart and thoughtful clinician like Dr. Guyatt discusses EBM, the above problems and concerns mostly go away. For instance, GRADE recognizes the importance of randomized trials, but also that high quality evidence can be found in other places. The group works to formalize what might lead rational clinicians to conclude they have high quality evidence, and while I don't agree with GRADE 100% of the time, my disagreements mostly involve some edge cases that really don't arise very often.
And since Dr. Guyatt is generally credited with inventing the term EBM, it's hard to dismiss his interpretation as being unrepresentative of true EBM. Instead, I've come to believe that most of the people who have interpreted and disseminated "EBM" lacked either the clinical understanding or the epidemiologic knowledge to communicate a fair representation of EBM to the clinical world.
I'd encourage those who find themselves troubled by aspects of EBM to spend some time looking at what the GRADE Working Group is doing. Even better, if you have the chance to interact with Gordon Guyatt at one of his courses or lectures, jump at the opportunity. You likely will feel differently about EBM for having done so.