Vitamin C for Sepsis

This is just a quick post in response to a tweet – and hype-machine press-release – making the rounds today.

This covers a before-and-after study regarding a single-center practice change in an intensive care unit where their approach to severe sepsis was altered to a protocol including intravenous high-dose vitamin C (1.5g q6), intravenous thiamine (200mg q12), and hydrocortisone (50mg q6). Essentially, this institution hypothesized this combination might have beneficial physiologic effects and, after witnessing initial anecdotal improvement, switched to the aforementioned protocol. This report describes their outcomes in the context of comparing the treatment group to similar patients treated in the seven months prior.

In-hospital mortality for patients treated on the new protocol was 8.5%, whereas previously treated patients were subject to 40.4% mortality. Vasopressor use and acute kidney injury were similarly curtailed in the treatment group. That said, these miraculous findings – as they are extolled in the EVMS press release – can only be considered worthy of further study at this point. With a mere 47 patients in each treatment group, a non-randomized, before-and-after design, and other susceptibilities to bias, these findings must be prospectively confirmed before adoption. When considered in the context of Ioannidis’ “Why Most Published Research Findings Are False”, caution is certainly advised.
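For scale, the reported percentages translate into a back-of-the-envelope number-needed-to-treat – a minimal sketch, with the raw death counts of 4 and 19 inferred from the reported 8.5% and 40.4% under an assumption of 47 patients per arm, not taken directly from the paper:

```python
# Counts inferred from reported percentages, assuming 47 patients per arm:
# 4/47 ≈ 8.5% and 19/47 ≈ 40.4%.
deaths_protocol, n_protocol = 4, 47
deaths_control, n_control = 19, 47

arr = deaths_control / n_control - deaths_protocol / n_protocol  # absolute risk reduction
nnt = 1 / arr  # patients treated per in-hospital death prevented

print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.1f}")  # ARR = 31.9%, NNT ≈ 3.1
```

A mortality NNT of ~3 would be unprecedented in sepsis care – which is precisely why numbers this good out of a small before-and-after study warrant skepticism rather than celebration.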

I sincerely hope prospective, external validation will yield similar findings – but will likewise not be surprised if they do not.

“Hydrocortisone, Vitamin C and Thiamine for the Treatment of Severe Sepsis and Septic Shock: A Retrospective Before-After Study”
https://www.ncbi.nlm.nih.gov/pubmed/27940189

Making Urine Cultures Great Again

As this blog covered earlier this month, the diagnosis of urinary tract infection – as common and pervasive as it might be – is still fraught with diagnostic uncertainty and inconclusive likelihood ratios. In practice, clinicians combine pretest likelihood, subjective symptoms, and the urinalysis to make a decision regarding treatment – and invariably err on the side of over-treatment.

This is an interesting study taking place in the Nationwide Children’s Hospital network regarding their use of urine cultures. In retrospective review, these authors noted only half of patients initially diagnosed with UTI had the diagnosis ultimately confirmed by contemporaneous urine culture. Their intervention, then, intended to reduce harm from the adverse effects of antibiotics, was to contact patients following a negative urine culture result and request their antibiotics be stopped.

This required an entire quality-improvement effort simply to use the electronic health record to reliably follow up the urine cultures but, over the course of the intervention, 910 patients met inclusion criteria. These patients were prescribed a total of 8,648 days of antibiotics, and the intervention obviated 3,429 (40%) of those days. Owing to increasing uptake of the study intervention by clinicians, the rate of antibiotic obviation had reached 61% by the end of the study period.

There are some obvious flaws in this sort of retrospective reporting on a QI intervention, as there was no reliable follow-up of patients included. The authors report no patients were subsequently diagnosed with a UTI within 14 days of being contacted, but this is based on only the 46 patients who subsequently sought care within their healthcare system within 14 days, rather than any comprehensive follow-up contact. There is no verification of antibiotics actually being discontinued following contact. Then, finally, antibiotic-free days are only a surrogate for a reduction in the suspected adverse events associated with their administration.

All that said, this probably represents reasonable practice. Considering the immense frequency with which urine cultures are sent and antibiotics prescribed for dysuria, the magnitude of effect witnessed here suggests a potentially huge decrease in exposure to unnecessary antibiotics.

“Urine Culture Follow-up and Antimicrobial Stewardship in a Pediatric Urgent Care Network”
http://pediatrics.aappublications.org/content/early/2017/03/14/peds.2016-2103

Troponin Sensitivity Training

High-sensitivity troponins are finally here! The FDA has approved the first one for use in the United States. Now, articles like this are no longer of purely academic interest – except, well, for the likely very slow percolation of these assays into standard practice.

This is a sort of update from the Advantageous Predictors of Acute Coronary Syndrome Evaluation (APACE) consortium. This consortium is intended to “advance the early diagnosis of [acute myocardial infarction]” – via use of these high-sensitivity assays for the benefit of their study sponsors, Abbott Laboratories et al. Regardless, this is one of those typical early rule-out studies evaluating patients with possible acute coronary syndrome and symptom onset within 12 hours. The assay performance was evaluated and compared in four different strategies: 0-hour limit of detection, 0-hour 99th percentile cut-off, and two 0/1-hour presentation and delta strategies.

And, of course, their rule-out strategies work great – they miss a handful of AMI, and even those (as documented by their accompanying table of missed AMI) are mostly tiny, did not undergo any revascularization procedure, and frequently did not receive clinical discharge diagnoses consistent with acute coronary syndrome. There was also a clear time-based element to their rule-out sensitivity, with patients whose chest pain began within two hours of presentation more likely to be missed. But – and this is the same “but” you’ve heard so many times before – their sensitivity comes at the expense of specificity, and use of any of these assay strategies was effective at ruling out only half of all ED presentations. Interestingly, at least, their rule-out was durable – 30-day MACE was 0.1% or less, and the sole event was a non-cardiac death.

Is there truly any rush to adopt these assays? I would reasonably argue there must be value in the additive information provided regarding myocardial injury. This study and its algorithms, however, demonstrate there remains progress to be made in terms of clinical effectiveness – as obviously far greater than just 50% of ED presentations for chest pain ought to be eligible for discharge.

“Direct Comparison of Four Very Early Rule-Out Strategies for Acute Myocardial Infarction Using High-Sensitivity Cardiac Troponin I”
http://circ.ahajournals.org/content/early/2017/03/10/CIRCULATIONAHA.116.025661

Done Fall Out

Syncope! Not much is more frightening to patients – here they are, minding their own business and then … the floor. What caused it? Will it happen again? Sometimes, there is an obvious cause – and that’s where the fun ends.

This is the ACC/AHA guideline for evaluation of syncope – and, thankfully, it’s quite reasonable. I attribute this, mostly (and possibly erroneously) to the fantastic ED syncope guru Ben Sun being on the writing committee. Only a very small part of this document is devoted to the initial evaluation of syncope in the Emergency Department, and their strong recommendations boil down to:

  • Perform a history and physical examination
  • Perform an electrocardiogram
  • Try to determine the cause of syncope, and estimate short- and long-term risk
  • Don’t send people home from the hospital if you identify a serious medical cause

These are all straightforward things we already routinely do as part of our basic evaluation of syncope. They go on further to clearly state, with weaker recommendations, there are no other mandated tests – and that routine screening bloodwork, imaging, or cardiac testing is likely of no value.

With regard to disposition:

“The disposition decision is complicated by varying resources available for immediate testing, a lack of consensus on acceptable short-term risk of serious outcomes, varying availability and expertise of outpatient diagnostic clinics, and the lack of data demonstrating that hospital-based evaluation improves outcomes.”

Thus, the authors allow for a wide range of possible disposition decisions, ranging from ED observation on a structured protocol to non-specific outpatient management.

The rest of the document provides recommendations more relevant to cardiology management of those with specific medical causes identified, although tables 5, 6, and 7 do a fairly nice job of summarizing some of the risk factors for serious outcomes, and some of the highlights of the syncope risk scores. While it doesn’t provide much concrete guidance, it at least does not set any low-value medicolegal precedent limiting your ability to make appropriate individual treatment decisions.

“2017 ACC/AHA/HRS Guideline for the Evaluation and Management of Patients With Syncope”
http://circ.ahajournals.org/content/early/2017/03/09/CIR.0000000000000499

The Failing Ottawa Heart

Canada! So many rules! The true north strong and free, indeed.

This latest innovation is the Ottawa Heart Failure Risk Scale – which, if you treat it explicitly as titled, is accurate and clinically interesting. However, it also masquerades as a decision rule – upon which it is of lesser standing.

This is a prospective observational derivation of a risk score for “serious adverse events” in an ED population diagnosed with acute heart failure and potential candidates for discharge. Of these 1,100 patients, 170 (15.5%) suffered an SAE – death, myocardial infarction, hospitalization. They used the differences between the groups with and without SAEs to derive a predictive risk score, the elements of which are:

• History of stroke or TIA (1)
• History of intubation for respiratory distress (2)
• Heart rate on ED arrival ≥110 (2)
• Room air SaO2 <90% on EMS or ED arrival (1)
• ECG with acute ischemic changes (2)
• Urea ≥12 mmol/L (1)

This scoring system ultimately provided a prognostic range from 2.8% for a score of zero, up to 89.0% at the top of the scale. This information is – at least within the bounds of generalizability from their study population – interesting from an informational standpoint. However, they then take it to the next level and use this as a potential decision instrument for admission versus discharge – projecting a score ≥2 would decrease admission rates while still maintaining a similar sensitivity for SAEs.
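For illustration, the scale’s point arithmetic is trivial to encode – a minimal sketch based on the bullet list above, with the function and argument names mine rather than the authors’:

```python
def ottawa_hf_score(stroke_or_tia: bool,
                    prior_intubation: bool,
                    hr_ge_110: bool,
                    sao2_lt_90: bool,
                    ischemic_ecg: bool,
                    urea_ge_12: bool) -> int:
    """Sum the point weights listed above (possible range 0-9)."""
    return (1 * stroke_or_tia
            + 2 * prior_intubation
            + 2 * hr_ge_110
            + 1 * sao2_lt_90
            + 2 * ischemic_ecg
            + 1 * urea_ge_12)

# A patient with tachycardia and acute ischemic ECG changes scores 4,
# well above the proposed admission threshold of >=2.
print(ottawa_hf_score(False, False, True, False, True, False))  # 4
```

The arithmetic is the easy part – whether a threshold of ≥2 on this sum should drive a disposition decision is, as below, the real question.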

However, the foundational flaw here is the presumption admission is protective against SAEs – both in this study and in our usual practice. Without a true, prospective validation, we have no evidence this change in practice, and its potential decrease in admissions, improves any of many potential outcome measures. Many of their SAEs may not be preventable, nor would any protection from admission likely be durable out to the end of their 14-day follow-up period. Patients were also managed for up to 12 hours in their Emergency Department before disposition, a difficult prospect for many EDs.

Finally, regardless, the complexity of care management and illness trajectory in heart failure makes it a poor candidate for simplification into a dichotomous rule with just a handful of criteria. There were many univariate differences between the two groups – and that’s simply on the variables they chose to collect. The decision to admit a patient for heart failure is not appropriately distilled into a “rule” – but this prognostic information may yet be of some value.

“Prospective and Explicit Clinical Validation of the Ottawa Heart Failure Risk Scale, With and Without Use of Quantitative NT-proBNP”

http://onlinelibrary.wiley.com/doi/10.1111/acem.13141/abstract

The Solution to Dilution is ….

Do we order a lot of urinalyses? Does the sun rise in the east? Does a bear ….

For a test we order with great frequency, there is actually quite a bit of complexity in its interpretation. The combination of symptoms, clinical context, the balance between sample contamination and presence of white blood cells, of nitrites and/or leukocyte esterase, and so on, can make it a relatively tricky test to interpret. The gold standard remains a urine culture.

Now – if you haven’t been already – you probably ought to be taking into account the urine specific gravity, as well.

This retrospective analysis of 14,971 children for whom paired urinalyses and urine cultures were available describes the test characteristics of WBCs/hpf, LE, and nitrites as stratified by urine specific gravity. There are a lot of numbers in this article – a “zillion” to be precise – across eighteen dense tables of +LR/-LR, sensitivity/specificity, and PPV/NPV, but the basic gist of the matter is: variations in urine concentration diminish the value of the test in different ways. As urine specific gravity increases, it becomes more likely a patient will not have a positive urine culture despite having typically diagnostic amounts of WBCs/hpf, +LE, and/or +nitrites. Likewise, with dilute urine, a lower threshold for WBCs/hpf may be needed to have adequate sensitivity.

Just one more layer to consider in this frequently used test of under-appreciated complexity.

“The Importance of Urine Concentration on the Diagnostic Performance of the Urinalysis for Pediatric Urinary Tract Infection”
https://www.ncbi.nlm.nih.gov/pubmed/28169050

Outsourcing the Brain Unnecessarily

Clinical decision instruments are all the rage, especially when incorporated into the electronic health record – why let the fallible clinician’s electrical Jello make life-or-death decisions when the untiring, unbiased digital concierge can be similarly equipped? Think about your next shift, and how frequently you consciously or unconsciously use or cite a decision instrument in your practice – HEART, NEXUS, PERC, Wells’, PECARN, the list is endless.

We spend a great deal of time deriving, validating, and comparing decision instruments – think HEART vs. TIMI vs. GRACE – but, as this article points out, very little time actually examining their performance compared to clinician judgment.

These authors reviewed all publications in Annals of Emergency Medicine concerned with the performance characteristics of a decision instrument. They identified 171 articles to this effect, 131 of which performed a prospective evaluation. Of these, the authors were able to find only 15 which actually bothered to compare the performance of the objective rule with unstructured physician assessment. With a little extra digging, these authors then identified 6 additional studies evaluating physician assessment in other journals relevant to their original 171.

Then, of these 21 articles, two favored the decision instrument: a 2003 assessment of the Canadian C-Spine Rule, and a 2002 neural network for chest pain. In the remainder, the comparison either favored clinician judgment or was a “toss up” in the sense the performance characteristics were similar and the winner depended on a value-weighting of sensitivity or specificity.

This should not discourage the derivation and evaluation of further decision instruments, as, yes, the conscious and unconscious biases of human beings are valid concerns. Neither should it be construed from these data that many common decision instruments are of lesser value than we currently place in them, only that they have not yet been tested adequately. However, many of these simple models are simply that – and the complexity of many clinical questions will at least favor the more information-rich approach of practicing clinicians.

“Structured Clinical Decision Aids Are Seldom Compared With Subjective Physician Judgment, and are Seldom Superior”
http://www.annemergmed.com/article/S0196-0644(16)31520-7/fulltext

Punching Holes in CIN

Contrast-induced nephropathy, the scourge of modern medical imaging. Is there any way to prevent it? Most trials usually show alternative treatments are no different than saline – but what about saline itself?  Does saline even help?

This most recent publication in The Lancet claims: no. This is AMACING, a randomized, controlled trial of saline administration versus usual care in patients undergoing contrast CT. These authors recruited patients “at risk” for CIN (glomerular filtration rate 30-59 mL/min/1.73 m²), and those assigned to the IV hydration arm received ~25 mL/kg over either 8 or 24 hours spanning the timeframe of the imaging procedure. Their primary outcome was incidence of CIN, as measured by an increase in serum creatinine by 25% or 44 µmol/L within 2-6 days of contrast exposure.

Regardless, despite hydration, the exact same number of patients – 8 – in each group suffered downstream CIN. This gives an absolute between-groups difference of -0.1%, with a 95% CI of -2.25% to 2.06%. This is still technically below their threshold of non-inferiority of 2.1% but, as the accompanying editorial rightly critiques, it still allows for a potentially meaningful difference. Secondary outcomes measured included adverse events and costs, with no reliable difference in adverse events and obvious advantages for the non-treatment group with regard to costs.
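The editorial’s critique is easy to see in the numbers – a minimal check of the reported confidence interval against the pre-specified margin:

```python
diff = -0.001          # reported absolute between-groups difference (-0.1%)
ci_upper = 0.0206      # upper bound of the reported 95% CI (2.06%)
margin = 0.021         # pre-specified non-inferiority margin (2.1%)

# Non-inferiority is declared because the CI's upper bound sits below the
# margin - but only by 0.04 percentage points, and the interval itself
# still admits a clinically meaningful harm from withholding hydration.
print(ci_upper < margin)  # True
print(f"slack below margin: {margin - ci_upper:.4f}")
```

Statistically non-inferior, then, but by the thinnest of margins – the practical conclusion rests more on the totality of the negative prophylaxis literature than on this interval alone.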

This work, despite its statistical power limitations, fits in nicely with all the other work failing to find effective preventive treatment for CIN – sodium bicarbonate, acetylcysteine, et al. Then, it may also tie into the recent publications having difficulty finding an association between IV contrast and acute kidney injury. Do these preventive treatments fail because they are ineffective, or does the clinical entity and its suspected underlying mechanism not exist?  It appears a more and more reasonable hypothesis the AKI witnessed after these small doses of IV contrast may, in fact, be related to the comorbid illness necessitating imaging, and not the imaging itself.

“Prophylactic hydration to protect renal function from intravascular iodinated contrast material in patients at high risk of contrast-induced nephropathy (AMACING): a prospective, randomised, phase 3, controlled, open-label, non-inferiority trial”

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)30057-0/abstract

Oh, The Things We Can Predict!

Philip K. Dick presented us with a short story about the “precogs”, three mutants that foresaw all crime before it could occur. “The Minority Report” was written in 1956 – and, now, 60 years later we do indeed have all manner of digital tools to predict outcomes. However, I doubt Steven Spielberg will be adapting a predictive model for hospitalization for cinema.

This is a rather simple article looking at a single-center experience using multivariate logistic regression to predict hospitalization. This differs, somewhat, from the existing art in that it uses data available at 10, 60, and 120 minutes from arrival to the Emergency Department as the basis for its “progressive” modeling.

Based on 58,179 visits ending in discharge and 22,683 resulting in hospitalization, the specificity of their prediction method was 90% with a sensitivity of 96%, for an AUC of 0.97. Their work exceeds prior studies mostly on account of improved specificity, compared with the AUCs of a sample of other predictive models, generally between 0.85 and 0.89.
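To put those figures in more familiar diagnostic-test terms, the reported sensitivity and specificity convert directly into likelihood ratios – a quick sketch:

```python
sensitivity, specificity = 0.96, 0.90  # as reported

lr_positive = sensitivity / (1 - specificity)   # ~9.6
lr_negative = (1 - sensitivity) / specificity   # ~0.044

print(f"LR+ = {lr_positive:.1f}, LR- = {lr_negative:.3f}")
```

An LR+ near 10 and an LR- near 0.04 are strong performance by most clinical standards – the catch, as below, being that none of it generalizes beyond the deriving institution.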

Of course, their model is of zero value to other institutions, as it is overfit not only to this subset of data, but also to the specific practice patterns of physicians in their hospital. Their results also conceivably could be improved, as they do not actually take into account any test results – only the presence of the order for such. That said, I think it is reasonable to suggest similar performance from temporal models for predicting admission including these earliest orders and entries in the electronic health record.

For hospitals interested in improving patient flow and anticipating disposition, there may be efficiencies to be developed from this sort of informatics solution.

“Progressive prediction of hospitalisation in the emergency department: uncovering hidden patterns to improve patient flow”
http://emj.bmj.com/content/early/2017/02/10/emermed-2014-203819

The Emergency Narcotic Dispensary

Far and away, the most common initial exposure to narcotics is through a healthcare encounter. Heroin, opium, and other preparations are far less common than the ubiquitous prescription narcotics inundating our population. As opiate overdose-related morbidity and mortality climbs, increasing focus is rightly turned to the physicians supplying these medications.

This most recent article is from the New England Journal of Medicine, and is focused on the prescriptions provided in the Emergency Department. The Emergency Department is not one of the major prescription sources of narcotics, but may be an important source of exposure, regardless. Through a retrospective analysis of a 3-year cohort of Medicare beneficiaries, these authors defined two treatment groups: patients treated by the lowest quartile of physicians by opiate prescribing rate, and those treated by the highest quartile. The lowest quartile provided narcotics at approximately 7% of ED visits, while the highest at approximately 24%. In the subsequent 12-month period, those treated by the highest quartile of prescribers were more likely to fill at least an additional 6-month supply of another opiate. This adjusted odds ratio of 1.30, compared with the lowest quartile, includes a dose-response relationship across the two middle quartiles, as well.

The authors note this, essentially, means for every 48 patients prescribed an opiate above the lowest-prescribing baseline, one additional patient then receives a long-term prescription they otherwise would not. Their calculation is a little odd – factoring both the additional likelihood of a prescription and the absolute increase in subsequent prescription rates. The true value likely lies between that and the NNH calculated from the absolute percentage difference – 0.35%, or ~280. No reliable or specific harms were detected with regard to these patients – additional Emergency Department visits, deaths by overdose, or subsequent encounters for potential side effects were similar between the groups. It is reasonable, however, to expect these additional prescriptions have some small number of downstream harms.
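For reference, the simpler of the two NNH calculations is just the reciprocal of the absolute risk difference:

```python
risk_difference = 0.0035   # absolute increase in long-term prescriptions (0.35%)
nnh = 1 / risk_difference  # number needed to harm

print(round(nnh))  # 286, i.e. the "~280" quoted above
```

The authors’ figure of 48 comes from a different, odds-based construction – hence the suggestion the true NNH lies somewhere between the two.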

There are many indirect effects measured here, including pinning the entire primary outcome observation on clinical “inertia” resulting from the initial Emergency Department prescription.  They also could not, by their methods, specifically attribute a prescription for opiates to any individual physician – they used the date of an index visit matched to a filled prescription to do so.

That said, the net effect here probably relates to less-restrictive prescribing resulting in prescriptions dispensed to patients for whom dependency is more likely. The effect size is small but, across the entire healthcare system, even small effect sizes result in potentially large absolute magnitudes of effect. The takeaway is not terribly profound – physicians should be as judicious as possible with regard to both their prescribing rate and the number of morphine equivalents prescribed.


“Opioid-Prescribing Patterns of Emergency Physicians and Risk of Long-Term Use”
http://www.nejm.org/doi/full/10.1056/NEJMsa1610524