This fact was driven home for me with a recent publication. Several weeks ago an article was published in Critical Care Medicine entitled "Etomidate is associated with mortality and adrenal insufficiency in sepsis: A meta-analysis."
The point of this post is not to debate whether etomidate should be used to intubate septic patients. Etomidate very well may kill people with sepsis; I just don't know from the currently available data. Using this meta-analysis as an example, I want to point out two important areas where we could stand to sharpen our literature-evaluation skills.
Point #1: Choose (and interpret) your titles wisely.
It is an overwhelming task to skim through several journals' tables of contents each month. In a specialty such as Emergency Medicine, many relevant articles appear in non-EM journals, making it even more challenging. It's tempting to think we know what an article concluded based solely on its title.
This point particularly applies to those who publish, but readers should also use extreme caution if they read only titles and abstracts. Given that the last several articles on this topic found that etomidate did not increase mortality when given as an induction agent to septic patients, I was quite surprised to see this bold title declaring that etomidate is associated with mortality. We're all so busy that it would be very easy to see this title and assume it to be true without ever reading the article. That is very dangerous medicine, in my opinion. And this principle extends far beyond this one meta-analysis.
I've already seen etomidate avoided in a hypotensive, septic patient based on this article. I've also heard colleagues giving a quick summary of the article to students and residents saying this article "confirms what we already knew." What?!? When did we definitively "know" this? I still can't believe a highly regarded journal such as Critical Care Medicine would allow this article to be published with this title.
Point #2: The meta-analysis is not the end-all-be-all of publications.
We've all sat through some sort of literature evaluation class back in school. When the meta-analysis was described to me as a student, I remember thinking how awesome it was. Let me get this straight... people way smarter than me are going to take all of the articles published on a given topic, perform some fancy (way over my head) statistics, and give us an evidence-based conclusion? Sign me up. Coming out of pharmacy school, I pretty much thought meta-analyses were the cream of the crop when it came to the published literature. How wrong I was.
I shouldn't have to go back and analyze each of the articles the authors used, but that is exactly what I did in this case. Here is what I found:
With regard to mortality, 5 trials were included. The 4 smaller ones mostly demonstrated that etomidate did not increase mortality compared to other agents. However, the one larger trial, encompassing 499 of the 865 total patients (58%), did show an increase in mortality. It was published by Cuthbertson et al in Intensive Care Medicine in 2009.
Let's take a closer look at this ICM study.
Because it was such a large contributor to the meta-analysis outcome, it seems important to understand what that trial was all about. Despite the authors calling it an a priori substudy of the CORTICUS trial, it was actually a post-hoc analysis looking at etomidate's association with mortality. You can read the two published commentaries on the Cuthbertson study by Pallin and Andrade, each of which highlights several major issues with the data in this trial.
The bottom line is that the trial by Cuthbertson was highly flawed and really doesn't give us any insight as to etomidate's contribution to mortality. In fact, one of the biggest critiques was that physicians in the CORTICUS trial were instructed to avoid etomidate due to its propensity to suppress cortisol production. So, when physicians did use it, there was likely a reason for it (ie, the patient was hemodynamically unstable and they didn't have many other good induction agent options). Therefore, etomidate was probably given to the sicker patients already more likely to die from the start.
If you dig even deeper, you'll find that the Cuthbertson group used two logistic regression models. One showed a nonsignificant increase in mortality, while the other showed a significant increase. Of course, the statistically significant one was reported in the abstract. The bottom line is that if you use bad data to construct a meta-analysis, you'll end up with a bad meta-analysis.
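To see why one dominant trial can drive a pooled result, here is a minimal Python sketch of fixed-effect, inverse-variance pooling of risk ratios. All event counts below are hypothetical, invented purely for illustration; they are not the actual trial data, only a made-up scenario mirroring the shape described above (four small trials showing roughly no effect, one large trial showing harm):

```python
import math

# Hypothetical (events, n) per arm: (events_etomidate, n_etomidate,
# events_comparator, n_comparator). NOT real trial data.
trials = [
    (10, 40, 11, 42),    # small trial, RR near 1
    (8, 35, 9, 38),      # small trial, RR near 1
    (12, 50, 12, 48),    # small trial, RR near 1
    (9, 45, 10, 47),     # small trial, RR near 1
    (90, 250, 60, 249),  # large dominant trial, RR > 1
]

def pooled_rr(data):
    """Fixed-effect inverse-variance pooled risk ratio on the log scale."""
    num = den = 0.0
    for a, n1, c, n2 in data:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # approx. variance of log RR
        weight = 1 / var                        # big trials get big weights
        num += weight * log_rr
        den += weight
    return math.exp(num / den)

print(pooled_rr(trials))       # all five trials: pooled RR > 1 (harm)
print(pooled_rr(trials[:4]))   # dominant trial excluded: pooled RR < 1
```

With these made-up numbers, including the large trial pulls the pooled risk ratio above 1, while dropping it leaves a pooled estimate just below 1. That is the structural worry with this meta-analysis: the conclusion rises or falls with the quality of the single biggest trial.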
So where does this leave us?
In part, it means we have to remain as skeptical as ever when reading published articles. We already know titles and abstracts don't give the full picture. Taking into account reporting biases, funding sources, and even authors' personal and professional agendas, it seems we can't always rely on the peer-review process to uphold the highest standards of integrity. The best journals out there aren't immune. One reason I love Free Open Access Meducation (FOAMed) is that the peer review is instant and no-holds-barred. If you post something inaccurate or controversial on Twitter or a medical education blog, you will get called out on it. The best part is that the ensuing conversations inevitably lead to knowledge sharing and learning. Isn't that what research is supposed to be about, after all?
Dr. Joe Lex said it best on Twitter:
I couldn't agree more.
References:
- Chan CM, et al. Etomidate is associated with mortality and adrenal insufficiency in sepsis: A meta-analysis. Crit Care Med 2012;40(11):2945-53. [PMID 22971586]
- Cuthbertson BH, et al. The effects of etomidate on adrenal responsiveness and mortality in patients with septic shock. Intensive Care Med 2009;35(11):1868-76. [PMID 19652948]
- Pallin DJ, Walls RM. The safety of single-dose etomidate. Intensive Care Med 2010;36(7):1268-70. [PMID 20405278]
- Andrade FM. Is etomidate really that bad in septic patients? Intensive Care Med 2010;36(7):1266-70. [PMID 20405279]