Outsourcing the Brain Unnecessarily

Clinical decision instruments are all the rage, especially when incorporated into the electronic health record – why let the fallible clinician’s electrical Jello make life-or-death decisions when the untiring, unbiased digital concierge can be similarly equipped? Think about your next shift, and how frequently you consciously or unconsciously use or cite a decision instrument in your practice – HEART, NEXUS, PERC, Wells’, PECARN, the list is endless.

We spend a great deal of time deriving, validating, and comparing decision instruments – think HEART vs. TIMI vs. GRACE – but, as this article points out, very little time actually examining their performance compared to clinician judgment.

These authors reviewed all publications in Annals of Emergency Medicine concerned with the performance characteristics of a decision instrument. They identified 171 articles to this effect, 131 of which performed a prospective evaluation. Of these, the authors were able to find only 15 which actually bothered to compare the performance of the objective rule with unstructured physician assessment. With a little extra digging, these authors then identified 6 additional studies in other journals evaluating physician assessment relevant to their original 171.

Then, of these 21 articles, two favored the decision instrument: a 2003 assessment of the Canadian C-Spine Rule, and a 2002 neural network for chest pain. In the remainder, the comparison either favored clinician judgment or was a “toss up” in the sense that the performance characteristics were similar and the winner depended on a value-weighting of sensitivity or specificity.

This should not discourage the derivation and evaluation of further decision instruments, as, yes, the conscious and unconscious biases of human beings are valid concerns. Neither should it be construed from these data that many common decision instruments are worth less than the value our current usage places in them, only that they have not yet been tested adequately. However, many of these simple models are simply that – simple – and the complexity of many clinical questions will at least favor the more information-rich approach of practicing clinicians.

“Structured Clinical Decision Aids Are Seldom Compared With Subjective Physician Judgment, and Are Seldom Superior”
http://www.annemergmed.com/article/S0196-0644(16)31520-7/fulltext

Punching Holes in CIN

Contrast-induced nephropathy, the scourge of modern medical imaging. Is there any way to prevent it? Trials of alternative treatments usually show them to be no different from saline – but what about saline itself? Does saline even help?

This most recent publication in The Lancet claims: no. This is AMACING, a randomized, controlled trial of saline administration versus usual care in patients undergoing contrast CT. These authors recruited patients “at risk” for CIN (glomerular filtration rate 30-59 mL/min/1.73 m²), and those assigned to the IV hydration arm received ~25 mL/kg over either 8 or 24 hours spanning the timeframe of the imaging procedure. Their primary outcome was the incidence of CIN, as measured by an increase in serum creatinine of 25% or 44 µmol/L within 2-6 days of contrast exposure.

In the end, despite hydration, exactly the same number of patients – 8 in each group – suffered downstream CIN. This gives an absolute between-groups difference of -0.1% (95% CI -2.25% to 2.06%). The upper bound is still technically below their non-inferiority threshold of 2.1%, but, as the accompanying editorial rightly critiques, it still allows for a potentially meaningful difference. Secondary outcomes included adverse events and costs, with no reliable difference in adverse events and obvious advantages for the non-treatment group with regard to costs.
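For the statistically curious, the non-inferiority claim rests on a confidence interval for a simple risk difference. Here is a minimal sketch of the Wald-style calculation; the 8 events per arm are from the trial, but the ~300-per-arm denominators are illustrative placeholders, not AMACING’s actual group sizes.

```python
# Wald 95% CI for a risk difference, the quantity compared against the
# 2.1% non-inferiority margin. Denominators below are illustrative.
from math import sqrt

def risk_difference_ci(e1, n1, e2, n2, z=1.96):
    """Risk difference (arm 1 minus arm 2) with a Wald 95% CI."""
    p1, p2 = e1 / n1, e2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

rd, lo, hi = risk_difference_ci(8, 300, 8, 300)  # 8 CIN events per arm
print(f"RD {rd:+.2%}, 95% CI {lo:+.2%} to {hi:+.2%}")
# Non-inferiority holds only if the upper bound stays below the margin.
```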

This work, despite its statistical power limitations, fits in nicely with all the other work failing to find an effective preventive treatment for CIN – sodium bicarbonate, acetylcysteine, and the like. Then, it may also tie into the recent publications having difficulty finding an association between IV contrast and acute kidney injury. Do these preventive treatments fail because they are ineffective, or does the clinical entity and its suspected underlying mechanism not exist? It appears a more and more reasonable hypothesis that the AKI witnessed after these small doses of IV contrast may, in fact, be related to the comorbid illness necessitating imaging, and not the imaging itself.

“Prophylactic hydration to protect renal function from intravascular iodinated contrast material in patients at high risk of contrast-induced nephropathy (AMACING): a prospective, randomised, phase 3, controlled, open-label, non-inferiority trial”

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)30057-0/abstract

Oh, The Things We Can Predict!

Philip K. Dick presented us with a short story about the “precogs”, three mutants who foresaw all crime before it could occur. “The Minority Report” was written in 1956 – and now, 60 years later, we do indeed have all manner of digital tools to predict outcomes. However, I doubt Steven Spielberg will be adapting a predictive model for hospitalization for cinema.

This is a rather simple article looking at a single-center experience using multivariate logistic regression to predict hospitalization. This differs, somewhat, from the existing art in that it uses data available at 10, 60, and 120 minutes from arrival to the Emergency Department as the basis for its “progressive” modeling.
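To make the “progressive” idea concrete, here is a minimal sketch under stated assumptions: one logistic regression per time cutoff, each trained only on the features plausibly available by that point. The feature names and the pandas/scikit-learn framing are mine, not the paper’s.

```python
# A hedged sketch of progressive prediction: one logistic regression per
# time cutoff, each limited to features available by that minute mark.
# Feature names are hypothetical; train/test splits are assumed pre-built.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

FEATURES_BY_CUTOFF = {
    10:  ["age", "triage_acuity", "arrival_mode"],
    60:  ["age", "triage_acuity", "arrival_mode", "labs_ordered", "ecg_ordered"],
    120: ["age", "triage_acuity", "arrival_mode", "labs_ordered", "ecg_ordered",
          "imaging_ordered", "consult_ordered"],
}

def fit_progressive_models(train, y_train, test, y_test):
    """Fit one admission model per cutoff; report held-out discrimination."""
    for minutes, cols in FEATURES_BY_CUTOFF.items():
        model = LogisticRegression(max_iter=1000).fit(train[cols], y_train)
        auc = roc_auc_score(y_test, model.predict_proba(test[cols])[:, 1])
        print(f"{minutes:>3}-minute model: AUC {auc:.3f}")
```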

Based on 58,179 visits ending in discharge and 22,683 resulting in hospitalization, the specificity of their prediction method was 90%, with a sensitivity of 96%, for an AUC of 0.97. Their work exceeds prior studies mostly on account of improved specificity; the AUCs of a sample of other predictive models generally fall between 0.85 and 0.89.

Of course, their model is of zero value to other institutions, as it overfits not only to this subset of data, but also to the specific practice patterns of physicians in their hospital. Their results also could conceivably be improved, as they do not actually take into account any test results – only the presence of the order for such. That said, I think it is reasonable to expect similar performance from temporal models for predicting admission built on these earliest orders and entries in the electronic health record.

For hospitals interested in improving patient flow and anticipating disposition, there may be efficiencies to be developed from this sort of informatics solution.

“Progressive prediction of hospitalisation in the emergency department: uncovering hidden patterns to improve patient flow”
http://emj.bmj.com/content/early/2017/02/10/emermed-2014-203819

The Emergency Narcotic Dispensary

Far and away, the most common initial exposure to narcotics is through a healthcare encounter. Heroin, opium, and other preparations are far less common than the ubiquitous prescription narcotics inundating our population. As opiate overdose-related morbidity and mortality climb, increasing focus is rightly turned to the physicians supplying these medications.

This most recent article is from the New England Journal of Medicine, and is focused on the prescriptions provided in the Emergency Department. The Emergency Department is not one of the major prescription sources of narcotics, but may be an important source of exposure, regardless. Through a retrospective analysis of a 3-year cohort of Medicare beneficiaries, these authors defined two treatment groups: patients treated by physicians in the lowest quartile of opiate prescribing rates, and those treated by physicians in the highest quartile. The lowest quartile provided narcotics in approximately 7% of ED visits, the highest in approximately 24%. In the subsequent 12-month period, those treated by the highest-quartile prescribers were more likely to fill at least an additional 6-month supply of an opiate. The adjusted odds ratio of 1.30, compared with the lowest quartile, was accompanied by a dose-response relationship across the two middle quartiles as well.

The authors note this, essentially, means for every 48 patients prescribed an opiate above the lowest prescribing baseline, one additional patient then receives a long-term prescription they otherwise would not. Their calculation is a little odd – factoring in both the additional likelihood of a prescription and the absolute increase in subsequent prescription rates. The true value likely lies between that and the NNH calculated from the absolute percentage difference – 0.35%, or ~280. No reliable or specific harms were detected with regard to these patients – additional Emergency Department visits, deaths by overdose, or subsequent encounters for potential side effects were similar between the groups. It is reasonable, however, to expect these additional prescriptions have some small number of downstream harms.
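For the arithmetic-minded, here is a sketch of both framings using the figures quoted above; the 0.35% long-term-use difference and the 7%/24% prescribing rates are from the post, and the derivation is my reconstruction, not the paper’s exact method.

```python
# Two NNH framings for the same data. Inputs are the figures quoted above.
prescribe_hi, prescribe_lo = 0.24, 0.07  # share of visits given an opiate
longterm_diff = 0.0035                   # absolute difference in long-term use

# Framing 1 (roughly the authors'): attribute the whole long-term-use
# difference to the *extra* prescriptions of high-intensity prescribers.
extra_rx = prescribe_hi - prescribe_lo
print(f"NNH per marginal prescription: ~{extra_rx / longterm_diff:.0f}")  # ~49

# Framing 2: a plain NNH from the absolute difference across all visits.
print(f"NNH per visit: ~{1 / longterm_diff:.0f}")  # ~286, the post's ~280
```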

There are many indirect effects measured here, including pinning the entire primary outcome observation on clinical “inertia” resulting from the initial Emergency Department prescription. They also could not, by their methods, specifically attribute an opiate prescription to any individual physician – they matched the date of an index visit to a filled prescription to do so.

That said, the net effect here probably relates to less-restrictive prescribing resulting in prescriptions dispensed to patients for whom dependency is more likely. The effect size is small, but across the entire healthcare system, even small effect sizes result in potentially large absolute magnitudes of effect. The takeaway is not terribly profound – physicians should be as judicious as possible with regard to both their prescribing rate and the number of morphine equivalents prescribed.

Finally, the article concludes with a pleasing close-up photograph of a tiger.

“Opioid-Prescribing Patterns of Emergency Physicians and Risk of Long-Term Use”
http://www.nejm.org/doi/full/10.1056/NEJMsa1610524

Thrombolysis and the Aging Brain

The bleeding complications of thrombolysis are well-described, but frequently under-appreciated in the acute setting. Stroke patients often disappear upstairs after treatment in the Emergency Department quickly enough that we rarely see the neurologic worsening associated with post-thrombolysis hemorrhage.

Risk factors for post-tPA ICH are well-known, but often difficult to precisely pin down for an individual patient. This study pools patients from up to 15 studies to evaluate the effect of leukoaraiosis on post-tPA hemorrhage. Leukoaraiosis, essentially, is a cerebral small-vessel disease likely related to chronic ischemic damage. It has long been recognized as a risk factor for increased hemorrhage and poor outcome, independent of age at treatment.

In this study, the authors pooled approximately 5,500 patients, half of whom were identified to have leukoaraiosis. The unadjusted absolute risk of symptomatic ICH in those without leukoaraiosis was 4.1%, while the risk in those with it was 6.6%. Then, looking at the 2,700 patients with leukoaraiosis, those with mild disease had an unadjusted absolute risk of 4.0%, compared with 10.2% for those with moderate or severe disease. Similar trends towards worse functional outcomes were also seen with worsening leukoaraiosis.

The moral of the story: the baseline health of the brain matters. When discussing the risks, benefits, and alternatives for informed consent with a family, these substantially elevated risks should be clearly conveyed when weighing the appropriateness of tPA in patients with leukoaraiosis, even when it is otherwise potentially indicated.

“Leukoaraiosis, intracerebral hemorrhage, and functional outcome after acute stroke thrombolysis”

http://www.neurology.org/content/early/2017/01/27/WNL.0000000000003605.abstract

Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interests of detecting a life-threatening malignant arrhythmia, despite such being a rare occurrence. Furthermore, the literature regarding alert fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a sample of 796 chest pain patients receiving cardiac monitoring, these authors validate their previously described rule for avoiding cardiac monitoring: chest pain free, and a normal or non-specific ECG. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Of the patients not meeting their rule, only 2.9% suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought to be considered just a minimum standard, and there is plenty of room for additional exclusion.
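One caveat worth quantifying: “zero events in 284 patients” still carries statistical uncertainty. A minimal sketch of the exact upper bound on the miss rate, via the classic “rule of three”:

```python
# With 0 missed arrhythmias among 284 rule-negative patients, the exact
# (Clopper-Pearson) upper 95% bound on the miss rate is 1 - 0.05**(1/n),
# approximately the familiar "rule of three" (~3/n).
n_rule_negative = 284
upper = 1 - 0.05 ** (1 / n_rule_negative)
print(f"Upper 95% bound on miss rate: {upper:.2%}")  # ~1.05%
# So "zero events" remains compatible with roughly 1 missed arrhythmia
# per 100 patients removed from monitoring, given this sample size.
```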

Anecdotally, not only do most chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”
http://www.cmaj.ca/content/189/4/E139.full

Can We Trust Our Computer ECG Overlords?

If your practice is like my practice, you see a lot of ECGs from triage. ECGs obtained for abdominal pain, dizziness, numbness, fatigue, rectal pain … and some, I assume, are for chest pain. Every one of these ECGs turns into an interruption for review to ensure no concerning evolving syndrome is missed.

But, a great number of these ECGs are read as “Normal” by the computer – and, anecdotally, these reads are nearly universally correct. This raises the very reasonable question of whether a human need be involved at all.

This simple study tries to examine the real-world performance of computerized ECG reading, specifically, the Marquette 12SL software. Over a 16-week convenience sample period, 855 triage ECGs were performed, 222 of which were reported as “Normal” by the computer software. These 222 ECGs were all reviewed by a cardiologist, and 13 were ultimately assigned some pathology – all of which were mild, non-specific abnormalities. Two Emergency Physicians then reviewed these 13 ECGs to determine what, if any, actions might be taken if presented to them in a real-world context. One of these ECGs was determined by one EP to be sufficient to move the patient to the next available bed from triage, while the remainder required no acute triage intervention. Retrospectively, the patient judged to have an actionable ECG was discharged from the ED and had a normal stress test the next day.
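As a quick sanity check of these counts, here is the negative predictive value arithmetic under two definitions of a “miss”; the choice of definition is my assumption, not the study’s stated method.

```python
# NPV of a computer "Normal" read, computed two ways from the counts above.
# Which findings count as misses is an assumption for illustration.
normal_reads = 222
print(f"NPV, actionable miss only:  {(normal_reads - 1) / normal_reads:.1%}")   # 99.5%
print(f"NPV, any flagged pathology: {(normal_reads - 13) / normal_reads:.1%}")  # 94.1%
```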

The authors conclude the negative predictive value of a “Normal” read of the ECG approaches 99%, and could potentially lead to changes in practice regarding immediate review of triage ECGs. While these findings have some limitations in generalizability regarding the specific ECG software and a relatively small sample, I think they’re on the right track. Interruptions in a multi-tasking setting lead to errors of task resumption, while the likelihood of missing significant time-sensitive pathology is quite low. I tend to agree this could be a reasonable quality improvement intervention, with prospective monitoring.

“Safety of Computer Interpretation of Normal Triage Electrocardiograms”
https://www.ncbi.nlm.nih.gov/pubmed/27519772

Discharged and Dropped Dead

The Emergency Department is a land of uncertainty. In a generally time-compressed, zero-continuity environment with limited resources, we frequently need to make relatively rapid decisions based on incomplete information. The goal, in general, is to treat and disposition patients in an advantageous fashion to prevent morbidity and mortality, while minimizing costs and other harms.

The consequence of this confluence of factors is, unfortunately, a handful of patients who meet their end following discharge. A Kaiser Permanente Emergency Department cohort analysis found 0.05% died within 7 days of discharge, and identified a few interesting risk factors regarding their outcomes. This new article, in the BMJ, describes the outcomes of a Medicare cohort following discharge – and finds both similarities and differences.

One notable difference, and a focus of the authors, is that 0.12% of patients discharged from the Emergency Department died within 7 days. This is a much larger proportion than in the Kaiser cohort; however, the Medicare population is obviously a much older cohort with greater comorbidities. Then, they found similarities regarding the risks for death – most prominently, “altered mental status”. The full accounting of clinical features is presented in a figure in the original article.

Then, there were some system-level factors as well. In their multivariate model, rural emergency departments and those with low annual volumes potentially contributed to an increased risk of death. This data set is insufficient to draw any specific conclusions regarding these contributing factors, but it raises questions for future research. In general, however, these are interesting – and not terribly surprising – data, even if it is hard to identify specific operational interventions based on these broad strokes.

“Early death after discharge from emergency departments: analysis of national US insurance claims data”
http://www.bmj.com/content/356/bmj.j239

The Intravenous Contrast Debate

Does intravenous contrast exposure increase the likelihood of developing renal insufficiency? The consensus opinion has been, generally, “yes”. However, evaluated under a closer lens, it is apparent some of these data come from high-dose use during angiography, from exposure to high-osmolar contrast material not routinely used in the present day, and from weak evidence in observational cohort studies.

The modern take is, increasingly, potentially “no”. However, it is virtually impossible to conclusively study the effect of intravenous contrast exposure. A prospective, controlled trial would require patients for whom a contrast study was believed important to their medical care to be randomized to not receiving the indicated study, leading to all manner of potential harms. Therefore, we are reduced to looking backwards and comparing patients undergoing a contrasted study with those who do not.

This study is probably the best version of this type of evidence we are going to get. This is a propensity-matched analysis of patients undergoing contrast CT, non-contrast CT, and no CT at all. Each cohort comprised between 5,000 and 7,000 patients, stratified by baseline comorbidities, medications administered, illness severity indicators, and baseline renal function. After these various adjustments and weightings, the authors did not observe any effect of intravenous contrast administration on subsequent acute kidney injury – limited to patients with a creatinine of 4.0 mg/dL or below at baseline.
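For readers unfamiliar with the technique, here is a minimal sketch of greedy 1:1 propensity-score matching in the spirit of the analysis described above; the paper’s exact matching and weighting scheme is not reproduced, and all variable names are illustrative.

```python
# A hedged sketch of propensity-score matching: a logistic model predicts
# receipt of IV contrast from baseline covariates, then each contrast
# patient is paired with the nearest unmatched control by score.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated, caliper=0.05):
    """Return (treated_idx, control_idx) pairs matched on propensity score."""
    scores = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    controls = list(np.where(treated == 0)[0])
    pairs = []
    for i in np.where(treated == 1)[0]:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        if abs(scores[j] - scores[i]) <= caliper:  # enforce maximum distance
            pairs.append((i, j))
            controls.remove(j)  # match without replacement
    return pairs
```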

I think this is basically a reasonable conclusion, given the approach. There has been a fair bit of observational content regarding the risk of AKI after a contrast CT, but it is impossible to separate the effect of contrast from the effects of the concurrent medical illness requiring the contrast CT. Every effort, of course, should be taken to minimize the use of advanced imaging – but in many instances, the morbidity of a missed diagnosis almost certainly outweighs the risk from intravenous contrast.

“Risk of Acute Kidney Injury After Intravenous Contrast Media Administration”
http://www.annemergmed.com/article/S0196-0644(16)31388-9/abstract