Punching Holes in CIN

Contrast-induced nephropathy, the scourge of modern medical imaging. Is there any way to prevent it? Most trials show alternative treatments are no different from saline – but what about saline itself? Does saline even help?

This most recent publication in The Lancet claims: no. This is AMACING, a randomized, controlled trial of saline administration versus usual care in patients undergoing contrast CT. These authors recruited patients “at risk” for CIN (glomerular filtration rate 30–59 mL/min/1.73 m²), and those assigned to the IV hydration arm received ~25 mL/kg over either 8 or 24 hours spanning the timeframe of the imaging procedure. Their primary outcome was incidence of CIN, as measured by an increase in serum creatinine of 25% or 44 µmol/L within 2–6 days of contrast exposure.

Despite hydration, the exact same number of patients – 8 – in each group suffered downstream CIN. This gives an absolute between-group difference of -0.1%, with a 95% CI of -2.25% to 2.06%. This is still technically below their non-inferiority threshold of 2.1%, but, as the accompanying editorial rightly critiques, it still allows for a potentially meaningful difference. Secondary outcomes included adverse events and costs, with no reliable difference in adverse events and obvious cost advantages in the non-treatment group.
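For those curious about the mechanics behind a risk-difference confidence interval like this, a minimal sketch follows. The per-arm enrollment of 330 is purely illustrative (not the trial's exact figures), so the resulting interval is symmetric and only approximates the reported -2.25% to 2.06%:

```python
from math import sqrt

def risk_diff_ci(e1, n1, e2, n2, z=1.96):
    """Wald 95% CI for the difference in event proportions (group 1 minus group 2)."""
    p1, p2 = e1 / n1, e2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# 8 CIN events in each arm; 330 per arm is an assumed, illustrative size
diff, lo, hi = risk_diff_ci(8, 330, 8, 330)
print(f"difference: {diff:+.2%}, 95% CI: {lo:+.2%} to {hi:+.2%}")
```

The non-inferiority judgment then simply asks whether the upper bound of this interval crosses the pre-specified 2.1% margin.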

This work, despite its statistical power limitations, fits in nicely with all the other work failing to find effective preventive treatment for CIN – sodium bicarbonate, acetylcysteine, and the like. It may also tie into the recent publications having difficulty finding an association between IV contrast and acute kidney injury. Do these preventive treatments fail because they are ineffective, or does the clinical entity and its suspected underlying mechanism simply not exist? It appears increasingly reasonable to hypothesize the AKI witnessed after these small doses of IV contrast may, in fact, be related to the comorbid illness necessitating imaging, and not the imaging itself.

“Prophylactic hydration to protect renal function from intravascular iodinated contrast material in patients at high risk of contrast-induced nephropathy (AMACING): a prospective, randomised, phase 3, controlled, open-label, non-inferiority trial”


Oh, The Things We Can Predict!

Philip K. Dick presented us with a short story about the “precogs”, three mutants who foresaw all crime before it could occur. “The Minority Report” was written in 1956 – and now, 60 years later, we do indeed have all manner of digital tools to predict outcomes. However, I doubt Steven Spielberg will be adapting a predictive model for hospitalization for cinema.

This is a rather simple article looking at a single-center experience using multivariate logistic regression to predict hospitalization. This differs, somewhat, from the existing art in that it uses data available at 10, 60, and 120 minutes from arrival to the Emergency Department as the basis for its “progressive” modeling.

Based on 58,179 visits ending in discharge and 22,683 resulting in hospitalization, the specificity of their prediction method was 90%, with a sensitivity of 96%, for an AUC of 0.97. Their work exceeds prior studies mostly on account of improved specificity, compared with the AUCs of a sample of other predictive models, generally between 0.85 and 0.89.

Of course, their model is of zero value to other institutions, as it overfits not only to this subset of data, but also to the specific practice patterns of physicians in their hospital. Their results conceivably could also be improved, as they do not actually take into account any test results – only the presence of the order for such. That said, I think it is reasonable to expect similar performance from temporal models for predicting admission incorporating these earliest orders and entries in the electronic health record.

For hospitals interested in improving patient flow and anticipating disposition, there may be efficiencies to be developed from this sort of informatics solution.

“Progressive prediction of hospitalisation in the emergency department: uncovering hidden patterns to improve patient flow”

The Emergency Narcotic Dispensary

Far and away, the most common initial exposure to narcotics is through a healthcare encounter. Heroin, opium, and other preparations are far less common than the ubiquitous prescription narcotics inundating our population. As opiate overdose-related morbidity and mortality climb, increasing focus is rightly turned to the physicians supplying these medications.

This most recent article is from the New England Journal of Medicine, and is focused on the prescriptions provided in the Emergency Department. The Emergency Department is not one of the major prescription sources of narcotics, but may be an important source of exposure, regardless. Through a retrospective analysis of a 3-year cohort of Medicare beneficiaries, these authors defined two treatment groups: patients treated by physicians in the lowest quartile of opiate prescribing rates, and those treated by physicians in the highest quartile. The lowest quartile provided narcotics in approximately 7% of ED visits, the highest in approximately 24%. In the subsequent 12-month period, those treated by highest-quartile prescribers were more likely to fill at least an additional 6-month supply of another opiate. The adjusted odds ratio was 1.30 compared with the lowest quartile, with a dose-response relationship across the two middle quartiles as well.

The authors note this, essentially, means for every 48 patients prescribed an opiate above the lowest-prescribing baseline, one additional patient then receives a long-term prescription they otherwise would not. Their calculation is a little odd – factoring in both the additional likelihood of a prescription and the absolute increase in subsequent prescription rates. The true value likely lies between that and the NNH calculated from the absolute percentage difference – 0.35%, or ~280. No reliable or specific harms were detected with regards to these patients – additional Emergency Department visits, deaths by overdose, and subsequent encounters for potential side effects were all similar between the groups. It is reasonable, however, to expect these additional prescriptions have some small number of downstream harms.
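The simpler of the two framings above is just the reciprocal of the absolute difference. A quick sketch of that arithmetic, using only the 0.35% figure quoted here:

```python
# NNH from the absolute difference in long-term opiate use,
# highest- vs lowest-quartile prescribers (0.35%, as quoted above)
arr = 0.0035        # absolute risk increase
nnh = 1 / arr       # number needed to harm
print(round(nnh))   # ~286, in the same ballpark as the ~280 quoted
```

The authors' 1-in-48 figure comes from a different, odds-based construction, which is why the two estimates bracket rather than match each other.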

There are many indirect effects measured here, including pinning the entire primary outcome on clinical “inertia” resulting from the initial Emergency Department prescription. Nor could the authors, by their methods, specifically attribute a prescription for opiates to any individual physician – they used the date of an index visit matched to a filled prescription to do so.

That said, the net effect here probably relates to less-restrictive prescribing resulting in prescriptions dispensed to patients for whom dependency is more likely. The effect size is small, but across the entire healthcare system, even small effect sizes result in potentially large absolute magnitudes of effect. The takeaway is not terribly profound – physicians should be as judicious as possible with regard to both their prescribing rate and the number of morphine equivalents prescribed.


“Opioid-Prescribing Patterns of Emergency Physicians and Risk of Long-Term Use”

Thrombolysis and the Aging Brain

The bleeding complications of thrombolysis are well-described, but frequently under-appreciated in the acute setting. Stroke patients often disappear upstairs after treatment in the Emergency Department quickly enough that we rarely see the neurologic worsening associated with post-thrombolysis hemorrhage.

Risk factors for post-tPA ICH are well-known, but often difficult to precisely pin down for an individual patient. This study pools patients from up to 15 studies to evaluate the effect of leukoaraiosis on post-tPA hemorrhage. Leukoaraiosis, essentially, is a cerebral small vessel disease likely related to chronic ischemic damage. It has long been recognized as a risk factor for increased hemorrhage and poor outcome, independent of age at treatment.

In this study, the authors pooled approximately 5,500 patients, half of whom were identified to have leukoaraiosis. The unadjusted absolute risk of symptomatic ICH in those without leukoaraiosis was 4.1%, while in those with it was 6.6%. Then, looking at the 2,700 patients with leukoaraiosis, those with mild disease had an unadjusted absolute risk of 4.0%, compared with 10.2% for those with moderate or severe disease. Similar trends towards worse functional outcomes were also seen with worsening leukoaraiosis.

The moral of the story: the baseline health of the brain matters. When discussing the risks, benefits, and alternatives for informed consent with a family, these substantial risks should be clearly conveyed for patients with leukoaraiosis when tPA is otherwise potentially indicated.

“Leukoaraiosis, intracerebral hemorrhage, and functional outcome after acute stroke thrombolysis”


Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interests of detecting a life-threatening malignant arrhythmia, despite such being a rare occurrence. Furthermore, the literature regarding alert fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a 796-patient sample of chest pain patients receiving cardiac monitoring, these authors validate their previously described rule for avoiding cardiac monitoring: chest-pain-free, with a normal or non-specific ECG. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Of patients not meeting their rule, only 2.9% suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought to be considered just a minimum standard, and there is plenty of room for additional exclusions.
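A caveat worth making explicit: zero events among 284 rule-negative patients is "100% sensitivity" only as a point estimate. The classic rule of three gives an approximate upper 95% bound on the true event rate when no events are observed:

```python
# Rule of three: if 0 events are observed in n patients, an approximate
# upper 95% confidence bound on the true event rate is 3/n.
n = 284  # patients meeting the low-risk criteria, none of whom had an arrhythmia
upper_bound = 3 / n
print(f"upper 95% bound on arrhythmia rate: {upper_bound:.2%}")  # roughly 1.1%
```

So the data remain consistent with an arrhythmia rate of up to about 1 in 100 in the "safe to remove from monitoring" group, which is why larger validation samples still matter.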

Anecdotally, not only do most of our chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”

Can We Trust Our Computer ECG Overlords?

If your practice is like mine, you see a lot of ECGs from triage. ECGs obtained for abdominal pain, dizziness, numbness, fatigue, rectal pain … and some, I assume, are for chest pain. Every one of these ECGs turns into an interruption for review to ensure no concerning evolving syndrome is missed.

But a great number of these ECGs are read as “Normal” by the computer – and, anecdotally, are nearly universally correct. This raises the very reasonable question of whether a human need be involved at all.

This simple study tries to examine the real-world performance of computer ECG reading – specifically, the Marquette 12SL software. Over a 16-week convenience sample period, 855 triage ECGs were performed, 222 of which were reported as “Normal” by the computer software. These 222 ECGs were all reviewed by a cardiologist, and 13 were ultimately assigned some pathology – all of which were mild, non-specific abnormalities. Two Emergency Physicians then reviewed these 13 ECGs to determine what, if any, actions might be taken if presented to them in a real-world context. One of these ECGs was determined by one EP to warrant placing the patient in the next available bed from triage, while the remainder required no acute triage intervention. Retrospectively, the patient judged to have an actionable ECG was discharged from the ED and had a normal stress test the next day.

The authors conclude the negative predictive value of a “Normal” computer read approaches 99%, and this could potentially lead to changes in practice regarding immediate review of triage ECGs. While these findings have some limitations in generalizability regarding the specific ECG software, and a relatively small sample, I think they’re on the right track. Interruptions in a multi-tasking setting lead to errors of task resumption, while the likelihood of significant, time-sensitive pathology being missed is quite low. I tend to agree this could be a reasonable quality improvement intervention with prospective monitoring.
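Where the "approaches 99%" figure lands depends on what counts as a miss, which is worth seeing in the raw arithmetic. A sketch from the counts summarized above (the "strict" vs "clinically relevant" labels are my framing, not the authors'):

```python
# NPV of a computer "Normal" read, under two definitions of a miss
normals = 222          # triage ECGs the software called "Normal"
any_pathology = 13     # cardiologist over-read found some (all mild) abnormality
actionable = 1         # judged potentially actionable by an emergency physician

npv_strict = (normals - any_pathology) / normals   # any abnormality counts as a miss
npv_clinical = (normals - actionable) / normals    # only actionable findings count

print(f"strict NPV: {npv_strict:.1%}, clinically relevant NPV: {npv_clinical:.1%}")
```

Counting every minor abnormality yields about 94%; counting only the single potentially actionable ECG yields about 99.5%, consistent with the authors' framing.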

“Safety of Computer Interpretation of Normal Triage Electrocardiograms”

Discharged and Dropped Dead

The Emergency Department is a land of uncertainty. In a generally time-compressed, zero-continuity environment with limited resources, we frequently need to make relatively rapid decisions based on incomplete information. The goal, in general, is to treat and disposition patients in an advantageous fashion to prevent morbidity and mortality, while minimizing costs and other harms.

The consequence of this confluence of factors is, unfortunately, a handful of patients who meet their end following discharge. A Kaiser Permanente Emergency Department cohort analysis found 0.05% died within 7 days of discharge, and identified a few interesting risk factors regarding their outcomes. This new article, in the BMJ, describes the outcomes of a Medicare cohort following discharge – and finds both similarities and differences.

One notable difference, and a focus of the authors, is that 0.12% of patients discharged from the Emergency Department died within 7 days. This is a much larger proportion than in the Kaiser cohort; however, the Medicare population is obviously a much older cohort with greater comorbidities. They also found similarities regarding the risks for death – most prominently, “altered mental status”. The full accounting of clinical features is described in the figure below:

There were some system-level factors as well: in their multivariate model, rural emergency departments and those with low annual volumes potentially contributed to increased risk of death. This data set is insufficient to draw any specific conclusions regarding these contributing factors, but it raises questions for future research. In general, however, this is interesting – and not terribly surprising – data, even if it is hard to identify specific operational interventions based on these broad strokes.

“Early death after discharge from emergency departments: analysis of national US insurance claims data”

The Intravenous Contrast Debate

Does intravenous contrast exposure increase the likelihood of developing renal insufficiency? The consensus opinion has been, generally, “yes”. However, under a closer lens, it is apparent some of these data come from high-dose use during angiography, from exposure to high-osmolar contrast material not routinely used in the present day, and from weak observational cohort studies.

The modern take is, increasingly, potentially “no”. However, it is virtually impossible to conclusively study the effect of intravenous contrast exposure. A prospective, controlled trial would require patients for whom a contrast study was believed important to their medical care be randomized to not receiving the indicated study, leading to all manner of potential harms. Therefore, we are reduced to looking backwards and comparing patients undergoing a contrasted study with those who do not.

This study is probably the best version of this type of evidence we are going to get. It is a propensity-matched analysis of patients undergoing contrast CT, non-contrast CT, and no CT at all. Between 5,000 and 7,000 patients comprised each cohort, stratified by baseline comorbidities, medications administered, illness severity indicators, and baseline renal function. After these various adjustments and weightings, the authors did not observe any effect of intravenous contrast administration on subsequent acute kidney injury – in an analysis limited to patients with a baseline creatinine of 4.0 mg/dL or below.

I think this is basically a reasonable conclusion, given the approach. There has been a fair bit of observational content regarding the risk of AKI after a contrast CT, but it is impossible to separate the effect of contrast from the effects of the concurrent medical illness requiring the contrast CT. Every effort, of course, should be taken to minimize the use of advanced imaging – but in many instances, the morbidity of a missed diagnosis almost certainly outweighs the risk from intravenous contrast.

“Risk of Acute Kidney Injury After Intravenous Contrast Media Administration”

Pediatric Lactate & Sepsis

Some syndicated media has “Shark Week”. We have Sepsis Week!

The current generation of sepsis care is defined not just by our quixotic quest for simplified early warning tools but, more than anything, by lactate levels. In some ways, lactate is our friend – no more central catheter placement solely for measurement of central venous oxygenation. However, the ease of checking a lactate level also means we apply it indiscriminately. The lactate has become the D-dimer of infection – increasingly weakly predictive, the more we rely upon it.

This is a snapshot of the performance of lactate levels in pediatric sepsis. This observational registry comprises patients evaluated in the Emergency Department of a pediatric hospital – 1,299 patients in whom clinically suspected sepsis resulted in a lactate order. These authors hypothesized that, as in adults, a lactate level above 36 mg/dL (4 mmol/L) would portend increased mortality.

And, naturally, they were correct. However, its predictive value was virtually nil. There were 103 patients with lactate elevated above their cut-off and 1,196 below. Only 5 of the 103 patients with elevated lactate suffered 30-day mortality. Of the 1,196 below the cut-off, 20 suffered 30-day mortality. A mortality of 4.8% is higher than 1.7%, but the sensitivity is only 20% – and a specificity of 92.3%, with such a low prevalence of the primary outcome, means over 95% of elevated lactate levels are “false positives”.
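All of those figures fall out of a simple 2×2 table built from the counts above, treating an elevated lactate as a "positive test" for 30-day mortality:

```python
# 2x2 table from the study counts summarized above
elevated, below = 103, 1196
deaths_elevated, deaths_below = 5, 20

tp, fn = deaths_elevated, deaths_below   # deaths above / below the cut-off
fp = elevated - deaths_elevated          # elevated lactate, survived
tn = below - deaths_below                # low lactate, survived

sensitivity = tp / (tp + fn)             # 5/25
specificity = tn / (tn + fp)             # 1176/1274
ppv = tp / (tp + fp)                     # 5/103
false_positive_fraction = fp / elevated  # elevated lactates not followed by death

print(f"sens {sensitivity:.0%}, spec {specificity:.1%}, "
      f"PPV {ppv:.1%}, false positives {false_positive_fraction:.1%}")
```

The positive predictive value of under 5% is the crux: at this prevalence, even a reasonably specific test flags far more survivors than deaths.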

There are some limitations here, however, that could have substantial effects on the outcomes. There is a selection bias inherent to eligibility, in which lactates were likely ordered only on the most ill-appearing patients. The effect of this would be to improve the apparent performance characteristics of the test in the study population. Conversely, it is likely the patients with elevated lactate levels received more aggressive treatment than if the treating clinicians were blinded to the result. The effect of this would be a mortality benefit in the population with elevated lactate, worsening the apparent test characteristics.

But, hair-splitting aside, these pediatric results are grossly similar to those in adults. An elevated lactate is a warning signal, but should hardly be relied upon.

“Association Between Early Lactate Levels and 30-Day Mortality in Clinically Suspected Sepsis in Children”