The Emergency Narcotic Dispensary

Far and away, the most common initial exposure to narcotics is through a healthcare encounter. Heroin, opium, and other preparations are far less common than the ubiquitous prescription narcotics inundating our population. As opiate overdose-related morbidity and mortality climb, increasing focus is rightly turned to the physicians supplying these medications.

This most recent article, from the New England Journal of Medicine, focuses on the prescriptions provided in the Emergency Department. The Emergency Department is not one of the major prescription sources of narcotics, but may be an important source of exposure regardless. Through a retrospective analysis of a 3-year cohort of Medicare beneficiaries, these authors defined two treatment groups: patients treated by physicians in the lowest quartile of opiate prescribing rates, and those treated by physicians in the highest quartile. The lowest quartile provided narcotics in approximately 7% of ED visits, the highest in approximately 24%. In the subsequent 12-month period, those treated by the highest-quartile prescribers were more likely to fill at least an additional 6-month supply of another opiate. The adjusted odds ratio was 1.30 compared with the lowest quartile, with a dose-response relationship across the two middle quartiles as well.

The authors note this, essentially, means for every 48 patients prescribed an opiate above the lowest prescribing baseline, one additional patient receives a long-term prescription they otherwise would not. Their calculation is a little odd – factoring in both the additional likelihood of a prescription and the absolute increase in subsequent prescription rates. The true value likely lies between that and the NNH calculated from the absolute percentage difference – 0.35%, or ~280. No reliable or specific harms were detected with regard to these patients – additional Emergency Department visits, deaths by overdose, and subsequent encounters for potential side effects were similar between the groups. It is reasonable, however, to expect these additional prescriptions have some small number of downstream harms.
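For transparency, the conventional arithmetic, using only the figures reported above, runs as follows (a rough check on my part, not a re-analysis of the authors’ data):

NNH = 1 / ARD = 1 / 0.0035 ≈ 286

That is the source of the ~280 above; the authors’ figure of 48 is smaller because it also folds in the increased likelihood of receiving a prescription in the first place.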

There are many indirect effects measured here, including pinning the entire primary outcome observation on clinical “inertia” resulting from the initial Emergency Department prescription. That said, the net effect here probably relates to less-restrictive prescribing resulting in prescriptions dispensed to patients for whom dependency is more likely. The effect size is small, but across the entire healthcare system, even small effect sizes result in potentially large absolute magnitudes of effect. The takeaway is not terribly profound – physicians should be as judicious as possible with regard both to their prescribing rate and to the number of morphine equivalents prescribed.

“Opioid-Prescribing Patterns of Emergency Physicians and Risk of Long-Term Use”
http://www.nejm.org/doi/full/10.1056/NEJMsa1610524

Thrombolysis and the Aging Brain

The bleeding complications of thrombolysis are well-described, but frequently under-appreciated in the acute setting. Stroke patients often disappear upstairs after treatment in the Emergency Department quickly enough that we rarely see the neurologic worsening associated with post-thrombolysis hemorrhage.

Risk factors for post-tPA ICH are well-known, but often difficult to precisely pin down for an individual patient. This study pools patients from up to 15 studies to evaluate the effect of leukoaraiosis on post-tPA hemorrhage. Leukoaraiosis, essentially, is a cerebral small-vessel disease likely related to chronic ischemic damage. It has long been recognized as a risk factor for increased hemorrhage and poor outcome, independent of age at treatment.

In this study, the authors pooled approximately 5,500 patients, half of whom were identified as having leukoaraiosis. The unadjusted absolute risk of symptomatic ICH in those without leukoaraiosis was 4.1%, while in those with it the risk was 6.6%. Then, looking at the 2,700 patients with leukoaraiosis, those with mild disease had an unadjusted absolute risk of 4.0%, compared with 10.2% for those with moderate or severe disease. Similar trends towards worse functional outcomes were also seen with worsening leukoaraiosis.
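Translating those unadjusted figures into absolute terms (a back-of-the-envelope exercise on my part, not an adjusted estimate):

Any leukoaraiosis vs. none: 6.6% − 4.1% = 2.5%, or roughly one extra sICH per 40 patients treated.
Moderate/severe vs. mild: 10.2% − 4.0% = 6.2%, or roughly one extra sICH per 16 patients treated.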

The moral of the story: the baseline health of the brain matters. When discussing the risks, benefits, and alternatives for informed consent with a family, these substantially elevated risks in patients with leukoaraiosis should be clearly conveyed when weighing the appropriateness of tPA, even when it is otherwise potentially indicated.

“Leukoaraiosis, intracerebral hemorrhage, and functional outcome after acute stroke thrombolysis”
http://www.neurology.org/content/early/2017/01/27/WNL.0000000000003605.abstract

Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interest of detecting a life-threatening malignant arrhythmia, despite such events being rare. Furthermore, the literature regarding alarm fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a sample of 796 chest pain patients receiving cardiac monitoring, these authors validate their previously described rule for avoiding cardiac monitoring: chest-pain-free and with a normal or non-specific ECG. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Of the 512 patients not meeting their rule, only 2.9% suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought to be considered just a minimum standard, and there is plenty of room for additional exclusions.
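Framed as a diagnostic test for arrhythmia, with counts approximated from the percentages reported above (so treat these as rough numbers): 2.9% × 512 ≈ 15 patients suffered arrhythmias, all among those not meeting the rule, giving:

Sensitivity ≈ 15 / 15 = 100%
Specificity ≈ 284 / (284 + 497) ≈ 36%

In other words, only about a third of the arrhythmia-free patients would actually have been spared monitoring, hence the room for improvement.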

Anecdotally, not only do most of our chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”
http://www.cmaj.ca/content/189/4/E139.full

Can We Trust Our Computer ECG Overlords?

If your practice is like mine, you see a lot of ECGs from triage. ECGs obtained for abdominal pain, dizziness, numbness, fatigue, rectal pain … and some, I assume, are for chest pain. Every one of these ECGs turns into an interruption for review to ensure no concerning evolving syndrome is missed.

But a great number of these ECGs are read as “Normal” by the computer – and, anecdotally, these reads are nearly universally correct. This raises the very reasonable question of whether a human need be involved at all.

This simple study examines the real-world performance of computerized ECG interpretation, specifically the Marquette 12SL software. Over a 16-week convenience sample period, 855 triage ECGs were performed, 222 of which were reported as “Normal” by the computer software. These 222 ECGs were all reviewed by a cardiologist, and 13 were ultimately assigned some pathology – all of which were mild, non-specific abnormalities. Two Emergency Physicians then reviewed these 13 ECGs to determine what, if any, actions might be taken if they were presented in a real-world context. One of these ECGs was determined by one EP to be sufficient to put the patient in the next available bed from triage, while the remainder required no acute triage intervention. Retrospectively, the patient judged to have an actionable ECG was discharged from the ED and had a normal stress test the next day.

The authors conclude the negative predictive value of a “Normal” computer read approaches 99%, and that this could potentially lead to changes in practice regarding immediate review of triage ECGs. While these findings have some limitations in generalizability regarding the specific ECG software and a relatively small sample, I think they’re on the right track. Interruptions in a multi-tasking setting lead to errors of task resumption, while the likelihood of significant time-sensitive pathology being missed is quite low. I tend to agree this could be a reasonable quality improvement intervention with prospective monitoring.
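Working backwards from the counts above (my arithmetic, not the authors’): if all 13 abnormal reads count as misses, the NPV of a “Normal” read is 209/222 ≈ 94%; if only the single arguably actionable ECG counts as a miss, it is 221/222 ≈ 99.5%, presumably the basis for the ~99% figure.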

“Safety of Computer Interpretation of Normal Triage Electrocardiograms”
https://www.ncbi.nlm.nih.gov/pubmed/27519772

Discharged and Dropped Dead

The Emergency Department is a land of uncertainty. In a time-compressed, zero-continuity environment with limited resources, we frequently need to make relatively rapid decisions based on incomplete information. The goal, in general, is to treat and disposition patients in an advantageous fashion to prevent morbidity and mortality, while minimizing costs and other harms.

This confluence of factors leads, unfortunately, to a handful of patients who meet their end shortly following discharge. A Kaiser Permanente Emergency Department cohort analysis found 0.05% of patients died within 7 days of discharge, and identified a few interesting risk factors regarding their outcomes. This new article, in the BMJ, describes the outcomes of a Medicare cohort following discharge – and finds both similarities and differences.

One notable difference, and a focus of the authors, is that 0.12% of patients discharged from the Emergency Department died within 7 days. This is a much larger proportion than in the Kaiser cohort; however, the Medicare population is obviously a much older cohort with greater comorbidities. They also found similarities regarding the risks for death – most prominently, “altered mental status”. The full accounting of clinical features is described in the figure below:

[Figure: clinical features associated with early death after ED discharge.]
There were some system-level factors as well: in their multivariate model, rural emergency departments and those with low annual volumes were potentially associated with an increased risk of death. This data set is insufficient to draw any specific conclusions regarding these contributing factors, but it raises questions for future research. In general, however, these are interesting – and not terribly surprising – data, even if it is hard to identify specific operational interventions based on such broad strokes.

“Early death after discharge from emergency departments: analysis of national US insurance claims data”
http://www.bmj.com/content/356/bmj.j239

The Intravenous Contrast Debate

Does intravenous contrast exposure increase the likelihood of developing renal insufficiency? The consensus opinion has been, generally, “yes”. However, evaluated under a closer lens, it is apparent some of these data come from high-dose use during angiography, from exposure to high-osmolar contrast material not routinely used in the present day, and from weak observational cohort studies.

The modern take is, increasingly, potentially “no”. However, it is virtually impossible to conclusively study the effect of intravenous contrast exposure. A prospective, controlled trial would require patients for whom a contrast study was believed important to their medical care to be randomized to not receiving the indicated study, leading to all manner of potential harms. Therefore, we are reduced to looking backwards and comparing patients who underwent a contrasted study with those who did not.

This study is probably the best version of this type of evidence we are going to get. This is a propensity-matched analysis of patients undergoing contrast CT, non-contrast CT, and no CT at all. Each cohort comprised between 5,000 and 7,000 patients, stratified by baseline comorbidities, medications administered, illness severity indicators, and baseline renal function. After these various adjustments and weightings, the authors did not observe any effect of intravenous contrast administration on subsequent acute kidney injury – in an analysis limited to patients with a baseline creatinine of 4.0 mg/dL or below.
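For readers curious about the mechanics of this kind of analysis, here is a minimal sketch of 1:1 nearest-neighbor propensity matching in Python. Every name and all data here are hypothetical and fabricated for illustration; this is not the authors’ actual code, covariates, or cohort.

# A minimal, hypothetical sketch of 1:1 propensity-score matching.
# Synthetic data throughout; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)

# Fabricated covariates standing in for comorbidities, illness
# severity, and baseline renal function.
n = 2000
X = rng.normal(size=(n, 5))

# "Treatment" (contrast CT) depends on the covariates, as it would in
# observational data; the outcome (AKI) here depends, by construction,
# only on illness severity and not on treatment.
contrast = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
aki = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 1] - 2))))

# Step 1: estimate each patient's propensity to receive contrast.
propensity = LogisticRegression().fit(X, contrast).predict_proba(X)[:, 1]

# Step 2: match each contrast patient to the non-contrast patient with
# the closest propensity score.
ps_contrast = propensity[contrast == 1].reshape(-1, 1)
ps_control = propensity[contrast == 0].reshape(-1, 1)
_, idx = NearestNeighbors(n_neighbors=1).fit(ps_control).kneighbors(ps_contrast)

# Step 3: compare AKI rates within the matched sample.
aki_contrast = aki[contrast == 1].mean()
aki_matched = aki[contrast == 0][idx.ravel()].mean()
print(f"AKI rate, contrast CT:      {aki_contrast:.3f}")
print(f"AKI rate, matched controls: {aki_matched:.3f}")

The key design choice is step 2: rather than adjusting for covariates in an outcome model, each contrast patient is compared only against a non-contrast patient with a similar estimated probability of having received contrast, which is how this style of analysis tries to mimic randomization.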

I think this is basically a reasonable conclusion, given the approach. There has been a fair bit of observational content regarding the risk of AKI after a contrast CT, but it is impossible to separate the effect of the contrast from the effects of the concurrent medical illness requiring the contrast CT. Every effort, of course, should be taken to minimize the use of advanced imaging – but in many instances, the morbidity of a missed diagnosis almost certainly outweighs the risk from intravenous contrast.

“Risk of Acute Kidney Injury After Intravenous Contrast Media Administration”
http://www.annemergmed.com/article/S0196-0644(16)31388-9/abstract