When hierarchies of evidence are laid out in the EBM world, meta-analyses of randomized trials generally sit at the pinnacle.
And yet, the actual meta-analyses you encounter when researching a clinical question can be far less enlightening. Even if we grant a pass to the many systematic reviews at The Cochrane Collaboration that conclude with the a priori obvious fact that no high-quality RCTs addressing the question have been performed, and another pass to the reviews that find a single RCT and publish its results as the results of the systematic review, we are still left with the innumerable meta-analyses that seem to provide less of a window on the truth than the underlying trials themselves.
Frequently such meta-analyses are either driven by the single large RCT that everyone would have cited anyway or, worse, combine a number of small, poorly performed RCTs with a moderate-sized, well-performed RCT, pulling the results away from what was likely the best estimate of reality: the results of the well-performed trial.
Meta-analysts often seem either too removed from their subject area, and thus lacking the expertise to really understand what went clinically right and wrong in the underlying RCTs (or unwilling to use that knowledge to discriminate among the trials), or too cozy with a single trial (typically as one of its authors), and thus too willing to ding trials that found conflicting results.
Ultimately, meta-analysis only rarely seems to advance our knowledge of an issue in any important way beyond where we would have landed by just reading through the RCTs.
So with that background, it is always interesting to me when a meta-analysis comes along that really sheds new light on a subject, leaving us knowing something we somehow didn't know when all we had were the underlying trials.
An example came along in The Lancet last week.
Despite the enormous number of patients participating in randomized trials of statins, it has been uncertain what effect statins have on the development of diabetes. Some biochemical and animal studies suggested that statins might prevent diabetes. Clinical trials have been conflicting, with some showing protection and others showing increased risk. In reviewing the underlying trials, it has been hard to figure out what is going on:
- Are some statins protective while others are harmful?
- Do hydrophilic statins have different effects than lipophilic statins?
- Was the observation of increased diabetes risk in the JUPITER trial just a random event that became noticeable because of reporting bias (where positive or interesting secondary outcomes are more likely to show up in a paper than negative results)?
- Are the varying results of the statin trials due to random variation around a single truth, or do the results suggest that the underlying trials differed from each other in some important way (perhaps because of the population studied, the way the statin was administered, or the way diabetes was assessed)?
The new analysis found that patients treated with statins had about a 9% higher risk of diabetes than those treated with placebo or other agents. When I started reading the analysis, I had the questions in the list above already in mind and so was prepared to challenge the meta-analysis on several fronts. The authors of the analysis had appropriately anticipated my concerns and, to the extent the data allowed, answered them:
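For the curious, the pooled estimate in a meta-analysis like this is typically built by inverse-variance weighting of per-trial log odds ratios. Here is a minimal sketch of that calculation; the trial names and event counts are made-up placeholders, not the Lancet data:

```python
import math

# (events_statin, n_statin, events_control, n_control) per trial;
# all numbers are hypothetical placeholders, not the Lancet data
trials = {
    "trial_A": (120, 4000, 110, 4000),
    "trial_B": (45, 1500, 38, 1500),
    "trial_C": (270, 8900, 240, 8900),
}

def log_odds_ratio(a, n1, c, n2):
    """Log odds ratio and its large-sample variance from a 2x2 table."""
    b, d = n1 - a, n2 - c              # non-events in each arm
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

# Fixed-effect pooling: weight each trial by the inverse of its variance
w_sum = wl_sum = 0.0
for a, n1, c, n2 in trials.values():
    lor, var = log_odds_ratio(a, n1, c, n2)
    w_sum += 1 / var
    wl_sum += lor / var

pooled = wl_sum / w_sum
se = math.sqrt(1 / w_sum)
print(f"Pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96*se):.2f}"
      f"-{math.exp(pooled + 1.96*se):.2f})")
```

Note that inverse-variance weights mean a single very large trial can dominate the pooled estimate, which is exactly the JUPITER concern addressed in point 1 below.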
1) Was this really a chance finding driven by JUPITER? Before JUPITER found an increased risk of diabetes, there had been little discussion of statins and diabetes risk. JUPITER's findings could have been due to chance, but the publicity around the result could have triggered the meta-analysis. JUPITER was large enough to sway the results of the meta-analysis and perhaps lead to a self-fulfilling conclusion based on random variation. The meta-analysis, though, did a secondary analysis that excluded JUPITER and found that the results were essentially the same.
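Mechanically, this kind of sensitivity check is just re-pooling the estimate with one trial left out at a time. A sketch, again with hypothetical numbers:

```python
import math

# Hypothetical trials: (events_statin, n_statin, events_control, n_control)
trials = {
    "big_trial": (270, 8900, 216, 8900),   # stand-in for a JUPITER-sized trial
    "trial_B": (120, 4000, 110, 4000),
    "trial_C": (45, 1500, 40, 1500),
}

def pooled_or(included):
    """Fixed-effect pooled odds ratio over the included trials."""
    w_sum = wl_sum = 0.0
    for a, n1, c, n2 in included:
        b, d = n1 - a, n2 - c
        lor = math.log((a * d) / (b * c))
        w = 1 / (1/a + 1/b + 1/c + 1/d)   # inverse-variance weight
        w_sum += w
        wl_sum += w * lor
    return math.exp(wl_sum / w_sum)

# Leave-one-out: does any single trial drive the pooled result?
print(f"All trials: OR = {pooled_or(trials.values()):.3f}")
for name in trials:
    rest = [v for k, v in trials.items() if k != name]
    print(f"Without {name}: OR = {pooled_or(rest):.3f}")
```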
2) Were the varying results in the trials due to random variation or true differences? The meta-analysis found little need to invoke anything more than randomness (as measured by a statistic called the I²). What had seemed to be conflicting results was probably almost entirely random variation around a single true effect: a slightly increased risk of diabetes.
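For reference, I² is derived from Cochran's Q, the weighted sum of squared deviations of each trial's estimate from the pooled one; an I² near zero means the scatter across trials is about what chance alone would produce. A sketch with hypothetical per-trial estimates:

```python
import math

lors = [0.12, 0.05, 0.10, -0.02, 0.15]       # hypothetical per-trial log ORs
variances = [0.01, 0.04, 0.02, 0.05, 0.03]   # hypothetical variances

weights = [1 / v for v in variances]
pooled = sum(w * l for w, l in zip(weights, lors)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate
Q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, lors))
df = len(lors) - 1

# I-squared: share of total variation beyond what chance would explain
I2 = max(0.0, (Q - df) / Q) * 100
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
```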
3) Are some statins protective while others cause diabetes? The finding of little heterogeneity suggests the answer is no, but ultimately this is a hard question to answer definitively because of the more limited data about each individual statin. The meta-analysis found that the confidence intervals of the effects for individual statins overlapped such that it seemed unlikely that there were important differences among the statins, but it's hard to be certain. Additionally, lipophilic and hydrophilic statins showed the same effects on diabetes. And beyond that, the meta-analysis found that one of the main trials that had suggested a protective effect of pravastatin on diabetes had used an unusual definition of diabetes, and the effect was not seen when they substituted a standard definition.
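As an aside, overlapping confidence intervals are only a rough guide; a more direct check for a difference between two subgroups (say, two statins) is to test the difference of their log odds ratios. A sketch with hypothetical subgroup estimates:

```python
import math

# Pooled (log OR, variance) within each hypothetical statin subgroup
statin_x = (0.10, 0.004)
statin_y = (0.06, 0.009)

diff = statin_x[0] - statin_y[0]
se = math.sqrt(statin_x[1] + statin_y[1])   # variances add for a difference
z = diff / se

# Two-sided p-value from the standard normal distribution
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"ratio of ORs = {math.exp(diff):.3f}, z = {z:.2f}, p = {p:.2f}")
```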
While no new trials were published, as a result of this meta-analysis we have a much better feel for the effect of statins on diabetes than we had a few weeks ago. So, if after hours of trying to answer clinical questions by reading Cochrane you find yourself wondering whether meta-analyses are ever worth the effort that seems to go into them, remember this one and how much we learned about diabetes and statins from a new analysis of existing data.