The Top “Overuse” of 2016

Another entry in JAMA Internal Medicine’s lovely “Less is More” series, this is a “systematic review” of the previous year’s literature regarding potentially unnecessary care. Living here in the asylum, it seems all our fellow inmates and I are consigned to issuing weather reports from the tempest – but, hey, baby steps.

Their “systematic review” is not particularly rigorous. It’s basically a literature search, followed by a subjective distillation, by author consensus, to those articles considered most potentially impactful – but, regardless, their list is worth reviewing. Without further ado, the highlights of their ten selections:

  • Transesophageal echocardiography is more informative than transthoracic in illuminating the etiology of a stroke, but the additive information does not have a clear downstream benefit on outcomes.
  • Patients undergoing computed tomography to rule out pulmonary embolism without algorithm-compliant use of D-dimer suffer from overuse and low-value testing.
  • CT use increased in all Emergency Department patients with respiratory symptoms, with no evidence of downstream change in prescribing, hospital admission, or mortality.
  • Supplemental oxygen does not demonstrate benefit in patients with chronic obstructive pulmonary disease and mild exertional hypoxia.
  • Small improvements in antibiotic prescribing were seen when comparisons to peers were performed.
  • A shared decision-making implementation for Emergency Department patients with chest pain increased patient engagement and demonstrated a secondary effect of diminished admission and cardiac testing.

Wizard.

“2017 Update on Medical Overuse: A Systematic Review”
https://www.ncbi.nlm.nih.gov/pubmed/28973402

Are We Killing People With 30-Day Readmission Targets?

Ever since the Centers for Medicare and Medicaid Services announced their intention to penalize hospitals for early readmissions, folks have been worrying about the obvious consequences: would a focus on avoidance place patients at risk? Would patients best served in the hospital be pushed into other settings for suboptimal care?

That is the argument made in this short piece in the Journal of the American College of Cardiology. They look back at the last two decades of heart failure readmissions and short-term mortality, and take issue with the fundamental underlying premise of the quality measure, the inequities associated with the measure, and potential unintended harms. Their most illustrative example: when patients die outside the hospital within 30 days, paradoxically, they contribute to apparently improved performance in healthcare quality, as measured by 30-day readmission.

They back up their point using the aggregate data analyzing readmissions between 2008 and 2014, published previously in JAMA, focusing primarily on the heart failure component. In the original JAMA analysis, the evaluation paired individual hospital monthly readmission and risk-adjusted mortality, and was unable to identify an increased risk of death relating to reductions in 30-day readmissions. These authors say: too much tree, not enough forest. In the decade prior to announcements of 30-day readmission penalties, 30-day heart failure mortality had dropped 16.2%, but over the analysis period, 30-day heart failure mortality was back on the rise. In 2008 the 30-day mortality was 7.9%, and by 2014 it was up to 9.2% – a 16.5% relative increase, and an even larger increase relative to the pre-study trend of decreasing mortality.
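For those checking the arithmetic, the relative increases quoted here are simple ratio calculations – a trivial sketch:

```python
# Relative change between two rates, e.g. the 30-day heart failure
# mortality reported above: 7.9% in 2008 rising to 9.2% in 2014.
def relative_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(round(relative_change(7.9, 9.2), 1))  # → 16.5, matching the cited increase
```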

These are obviously two very different ways of looking at the same data, but the implication is fair: those charged with developing a quality measure should be able to conclusively demonstrate its effectiveness and safety. If any method of analysis raises concerns regarding the accepted balance of value and harm, the measure should be placed on a probationary status while rigorous re-evaluation proceeds.

“The Hospital Readmission Reduction Program Is Associated With Fewer Readmissions, More Deaths”
http://www.sciencedirect.com/science/article/pii/S0735109717393610

Predicting Poor Outcomes After Syncope

Syncope is a classic good news/bad news presenting complaint. It can be highly distressing to patients and family members, but rarely does it relate to an acutely serious underlying cause. That’s the good news. The bad news, however, is that for those with the worst prognosis, most of the poor prognostic features are unmodifiable.

This is a prospective, observational study of patients presenting with syncope to Emergency Departments in Canada, with the stated goal of developing a risk model for poor outcomes after syncope. The composite outcome of interest was death, arrhythmia, or interventions to treat arrhythmias within 30 days of ED disposition. Follow-up was performed by structured telephone interview, networked hospital record review, and Coroner’s Office record search.

To achieve a lower bound of the 95% confidence interval for sensitivity of 96.4%, these authors targeted a sample size of 5,000 patients, and ultimately enrolled 5,010 with complete outcome assessments. The mean age was 53.4, the cohort had a low incidence of comorbid medical conditions, and only 9.5% were admitted to the hospital. Within 30 days, 22 had died – 15 from unknown causes and the others from the pool of 91 patients diagnosed with a “serious arrhythmia”: sinus node dysfunction, atrial fibrillation, AV block, ventricular arrhythmia, supraventricular tachycardia, or requiring pacemaker insertion.

These authors ride the standard merry-go-round of statistical analysis, bootstrapping, and logistic regression to determine a prediction rule – the Canadian Syncope Arrhythmia Risk Score – an eight element additive and subtractive scoring system to stratify patients into one of eleven expected risk categories. They report the test characteristics of their proposed clinically useful threshold, greater than 0, to be a sensitivity of 97.1% and a specificity of 53.4% – a weak positive predictive value of 4.4% considering the low incidence of the composite outcome.
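That weak positive predictive value is simply Bayes’ rule at work at low prevalence. A minimal sketch using the reported test characteristics – the ~2% event rate plugged in below is my own assumption, roughly inferred from the cohort figures, not a number from the paper:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Reported sensitivity 97.1% and specificity 53.4%; an assumed ~2.2%
# prevalence of the composite outcome lands close to the reported 4.4% PPV.
print(f"PPV ≈ {ppv(0.971, 0.534, 0.0215):.1%}")
```

Even a near-perfectly sensitive test generates mostly false positives when the outcome is this rare – which is the crux of the specificity complaint.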

This is yet another product of obviously excellent work from the risk model machines in Canada, but, again, of uncertain clinical value. The elements of the risk model are, frankly, quite obvious: elevated troponin and conduction delays on EKG, along with an absence of classic vasovagal features. These are patients whose cardiac function is obviously impaired, but, short a time machine to go back and fix those hearts before they became sick, it’s a bit difficult to see the path forward. These authors feel their prediction rule aids in safe discharge of patients with syncope, although these patients are already infrequently admitted to the hospital in Canada. The various members of their composite outcome are not equally serious, preventable, or treatable, limiting the potential management options for even those falling into their high-risk group.

As with any decision instrument, its value remains uncertain until it is demonstrated that clinical decisions supplemented by this rule lead to better patient-oriented outcomes and/or resource utilization than our current management of this cohort.

“Predicting Short-Term Risk of Arrhythmia among Patients with Syncope: The Canadian Syncope Arrhythmia Risk Score”

https://www.ncbi.nlm.nih.gov/pubmed/28791782

Neither Benefit Nor Harm Seen With Oxygen in Myocardial Infarction

We’ve been hanging on to the biological hypothesis of treating ischemia with supplemental oxygen for quite some time – despite some evidence to the contrary with regard to damage from oxygen free radical formation. What’s needed is a large, randomized trial – and so we have DETO2X-AMI, run through the SWEDEHEART trial registry.

This trial randomized individual patients with suspected or known myocardial infarction to continuous oxygen therapy or ambient air.  Patients were excluded from enrollment if they had oxygen saturation below 90% at baseline, or were not Swedish national citizens as necessary for long-term follow-up. These patients actually received fairly vigorous oxygen therapy, far exceeding the typical nasal cannula oxygen we see on patients arriving via EMS – patients randomized to the oxygen arm received 6 liters per minute via face mask for 6 to 12 hours.

Over the 1.5 year trial period, these authors enrolled 6,629 patients, generally evenly matched with regard to baseline clinical characteristics, 75% of whom ultimately had a final diagnosis of myocardial infarction. Detailed outcomes, owing to the underlying registry infrastructure, are scant – as compared to the AVOID trial, in which many patients underwent cardiac MRI to evaluate infarct size and ejection fraction. What you get are the hard outcomes – death and rehospitalization with myocardial infarction – and there is no difference, in both the short- and long-term, and in both the intention-to-treat and per-protocol analyses. The authors also include median highest troponin T as a surrogate for infarct severity and morbidity, and there is no difference there, either.

The underlying hypothesis here was to demonstrate a beneficial effect to oxygen in myocardial infarction – defined as a clinically relevant effect size of 20% lower relative risk of death – and that threshold was clearly not met.  There are some small differences with regard to oxygen delivery, as compared to AVOID, with the AVOID trial delivering oxygen at a much higher concentration.  But, effectively, the takeaway from these data is: oxygen just probably doesn’t matter enough to be clinically relevant. There’s no reason to be condescending and militant about taking the oxygen off a patient with myocardial infarction, and likewise it’s reasonable to consider it a wasteful intervention with regard to canistered oxygen supply.

Finally, just for fun, to recap the anachronistic acronym MONA:

Morphine – Possible small harms, as relating to inhibition of antiplatelet agents.
Oxygen – Almost certainly irrelevant with regard to clinical outcomes.
Nitroglycerin – Likely irrelevant with regard to clinical outcomes.
Aspirin – Still good!

“Oxygen Therapy in Suspected Acute Myocardial Infarction”
http://www.nejm.org/doi/full/10.1056/NEJMoa1706222

Questioning the Benefit of Non-Invasive Testing for Chest Pain

Welcome to the fascinating world of instrumental variable analysis!

This is a retrospective cohort analysis of a large insurance claims database attempting to glean insight into the value of non-invasive testing for patients presenting to the Emergency Department with chest pain. Previous versions of the American Heart Association guidelines for the evaluation of so-called “low risk” chest pain have encouraged patients to undergo some sort of objective testing within 72 hours of initial evaluation. These recommendations have waned in more recent iterations of the guideline, but many settings still routinely recommend admission and observation following an episode of chest pain.

These authors used a cohort of 926,633 unique admissions for chest pain and analyzed them to evaluate any downstream effects on subsequent morbidity and resource utilization. As part of this analysis, they also split the cohort into two groups for comparison based on the day of the week of presentation – hence the “instrumental variable” for the instrumental variable analysis performed alongside their multivariate analysis. The authors assumed individual patient characteristics would be unrelated to the day of presentation, but that downstream test frequency would differ. The authors then use this difference in test frequency to thread the eye of the needle as a pseudo-randomization component to aid in comparison.
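The mechanics of a binary-instrument analysis can be illustrated with the classic Wald estimator. The toy simulation below is my own sketch – invented rates, not the authors’ data, and a far cruder method than their actual multivariate models – but it shows how a day-of-week instrument that shifts testing rates can recover an effect of testing on outcomes:

```python
import random

random.seed(0)

# Toy data: z = weekday presentation (instrument), t = received testing,
# y = downstream outcome. The instrument shifts testing rates but, by
# assumption, is unrelated to patient characteristics.
n = 100_000
data = []
for _ in range(n):
    z = random.random() < 0.6                    # ~60% weekday visits
    p_test = 0.26 if z else 0.21                 # testing rate differs by day type
    t = random.random() < p_test
    y = random.random() < (0.04 if t else 0.01)  # outcome depends on testing
    data.append((z, t, y))

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

# Wald estimator: (E[Y|Z=1] - E[Y|Z=0]) / (E[T|Z=1] - E[T|Z=0])
y1 = mean(y for z, t, y in data if z)
y0 = mean(y for z, t, y in data if not z)
t1 = mean(t for z, t, y in data if z)
t0 = mean(t for z, t, y in data if not z)
wald = (y1 - y0) / (t1 - t0)
print(f"Estimated effect of testing on outcome: {wald:.3f}")
```

The estimate should land near the simulated true effect of 0.03 – the pseudo-randomization does the work the missing trial would have done, provided the “unrelated to patient characteristics” assumption actually holds.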

There were 571,988 patients presenting on a weekday, 18.1% and 26.1% of whom underwent some non-invasive testing within 2 and 30 days of an ED visit, respectively. Then, there were 354,645 patients presenting on a weekend, with corresponding rates of testing of 12.3% and 21.3%. There were obvious baseline differences between those undergoing testing and those who did not, and those were controlled for using multivariate techniques as well as the aforementioned instrumental variable analysis.

Looking at clinical outcomes – coronary revascularization and acute MI at one year – there were mixed results: definitely more revascularization procedures associated with exposure to non-invasive testing, no increase in downstream diagnosis of AMI. The trend, if any, is actually towards increased diagnoses of AMI. The absolute numbers are quite small, on the order of a handful of extra AMIs per 1,000 patients per year, and may reflect either the complications resulting from stenting or a propensity to receive different clinical diagnoses for similar presentations after receiving a coronary stent.  Or, owing to the nature of the analysis, the trend may simply be noise.

The level of evidence here is not high, considering its retrospective nature and dependence on statistical adjustments.  It also cannot determine whether there are longer-term consequences or benefits beyond its one-year follow-up time-frame. Its primary value is in the context of the larger body of evidence.  At the least, it suggests we have equipoise to examine which, if any, patients ought to be referred for routine follow-up – or whether the role of the ED should be limited to ruling out an acute coronary syndrome, and the downstream medical ecosystem is the most appropriate venue for determining further testing when indicated.

“Cardiovascular Testing and Clinical Outcomes in Emergency Department Patients With Chest Pain”

http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2633257

The Door-to-Lasix Quality Measure

Will [door-to-furosemide] become the next quality measure in modern HF care? Though one could understand enthusiasm to do so ….

No.

No one would understand such enthusiasm, despite the hopeful soaring rhetoric of the editorial accompanying this article. That enthusiasm will never materialize.

The thrills stacked to the ceiling here are based on the data in the REALITY-AHF registry, a multi-center, prospective, observational cohort designed to collect data on treatments administered in the acute phase of heart failure treatment in the Emergency Department. Twenty hospitals in Japan, mixed between academic and community, participated. Time-to-furosemide, based on the authors’ review of prior evidence, was prespecified as a particular data point of interest.

They split their cohort of 1,291 analyzed patients between “early” and “non-early” furosemide administration, meaning within 60 minutes of ED arrival versus greater than 60 minutes. Unadjusted mortality was 2.3% in the early treatment group and 6% in the non-early – and similar, but slightly smaller, differences persisted after multivariate adjustment and propensity matching. The authors conclude, based on these observations, the association between early furosemide treatment and mortality may be clinically important.

Of course, any observational cohort is not able to make the leap from association to causation. It is, however, infeasible to randomize patients with acute heart failure to early vs. non-early furosemide – so this is likely close to the highest level of evidence we will receive. Any attempt at adjustment and propensity matching will always be limited by unmeasured confounders, despite incorporating nearly 40 different variables. Finally, patients with pre-hospital diuretic administration were excluded, which is a bit odd, as they would make for an interesting comparison group on their own.

All that said, I do believe their results are objectively valid – if clinically uninterpretable. The non-early furosemide cohort includes both patients who received medication in the first couple hours of their ED stay, as well as those whose first furosemide dose was not given until up to 48 hours after arrival. This probably turns the heart of the comparison into “appropriately recognized” versus “possibly mismanaged”, rather than a narrow comparison of simply furosemide, early vs. not. Time may indeed matter – but the heterogeneity and clinical trajectories of patients treated between 60 minutes and 48 hours after ED arrival defy collapse into a dichotomous “early vs. non-early” comparison.

And this certainly ought not give rise to another nonsensical time-based quality metric imposed upon the Emergency Department.

“Time-to-Furosemide Treatment and Mortality in Patients Hospitalized With Acute Heart Failure”

http://www.onlinejacc.org/content/69/25/3042

Troponin Sensitivity Training

High-sensitivity troponins are finally here! The FDA has approved the first one for use in the United States. Now, articles like this are not for purely academic interest – except, well, for the likely very slow percolation of these assays into standard practice.

This is a sort of update from the Advantageous Predictors of Acute Coronary Syndrome Evaluation (APACE) consortium. This consortium is intended to “advance the early diagnosis of [acute myocardial infarction]” – via use of these high-sensitivity assays for the benefit of their study sponsors, Abbott Laboratories et al. Regardless, this is one of those typical early rule-out studies evaluating patients with possible acute coronary syndrome and symptom onset within 12 hours. The assay performance was evaluated and compared across four different strategies: a 0-hour limit-of-detection strategy, a 0-hour 99th-percentile cut-off, and two 0/1-hour presentation-and-delta strategies.
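As a rough illustration of how a 0/1-hour presentation-and-delta strategy operates – the cut-offs below are placeholders of my own invention, not the validated hs-cTnI thresholds from the paper:

```python
def rule_out_0_1h(trop_0h: float, trop_1h: float,
                  lod: float = 1.9, low_cutoff: float = 5.0,
                  max_delta: float = 2.0) -> bool:
    """Hypothetical 0/1-hour rule-out sketch (values in ng/L).

    A patient "rules out" if the presentation troponin is below the
    limit of detection, or if the 0-hour value is low AND the absolute
    0-to-1-hour change is small. All thresholds here are illustrative
    placeholders only.
    """
    if trop_0h < lod:
        return True
    delta = abs(trop_1h - trop_0h)
    return trop_0h < low_cutoff and delta < max_delta

print(rule_out_0_1h(1.2, 1.5))   # below limit of detection → rules out
print(rule_out_0_1h(4.0, 9.0))   # low start but rising delta → does not
```

The delta criterion is what catches the early presenters whose first draw is still low – though, as below, even this has limits within the first couple hours of symptom onset.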

And, of course, their rule-out strategies work great – they miss a handful of AMI, and even those (as documented by their accompanying table of missed AMI) are mostly tiny, did not undergo any revascularization procedure, and frequently did not receive clinical discharge diagnoses consistent with acute coronary syndrome. There was also a clear time-based element to their rule-out sensitivity, with patients whose chest pain onset was within two hours of presentation more likely to be missed. But – and this is the same “but” you’ve heard so many times before – their sensitivity comes at the expense of specificity, and use of any of these assay strategies was effective at ruling out only half of all ED presentations. Interestingly, at least, their rule-out was durable – 30-day MACE was 0.1% or less, and the sole event was a non-cardiac death.

Is there truly any rush to adopt these assays? I would reasonably argue there must be value in the additive information provided regarding myocardial injury. This study and its algorithms, however, demonstrate there remains progress to be made in terms of clinical effectiveness – as obviously far greater than just 50% of ED presentations for chest pain ought be eligible for discharge.

“Direct Comparison of Four Very Early Rule-Out Strategies for Acute Myocardial Infarction Using High-Sensitivity Cardiac Troponin I”
http://circ.ahajournals.org/content/early/2017/03/10/CIRCULATIONAHA.116.025661

Done Fall Out

Syncope! Not much is more frightening to patients – here they are, minding their own business and then … the floor. What caused it? Will it happen again? Sometimes, there is an obvious cause – and that’s where the fun ends.

This is the ACC/AHA guideline for evaluation of syncope – and, thankfully, it’s quite reasonable. I attribute this, mostly (and possibly erroneously) to the fantastic ED syncope guru Ben Sun being on the writing committee. Only a very small part of this document is devoted to the initial evaluation of syncope in the Emergency Department, and their strong recommendations boil down to:

  • Perform a history and physical examination
  • Perform an electrocardiogram
  • Try to determine the cause of syncope, and estimate short- and long-term risk
  • Don’t send people home from the hospital if you identify a serious medical cause

These are all straightforward things we already routinely do as part of our basic evaluation of syncope. They go on further to clearly state, with weaker recommendations, there are no other mandated tests – and that routine screening bloodwork, imaging, or cardiac testing is likely of no value.

With regard to disposition:

“The disposition decision is complicated by varying resources available for immediate testing, a lack of consensus on acceptable short-term risk of serious outcomes, varying availability and expertise of outpatient diagnostic clinics, and the lack of data demonstrating that hospital-based evaluation improves outcomes.”

Thus, the authors allow for a wide range of possible disposition decisions, ranging from ED observation on a structured protocol to non-specific outpatient management.

The rest of the document provides recommendations more relevant to cardiology management of those with specific medical causes identified, although tables 5, 6, and 7 do a fairly nice job of summarizing some of the risk-factors for serious outcomes, and some of the highlights of syncope risk scores.  While it doesn’t provide much concrete guidance, it at least does not set any low-value medicolegal precedent limiting your ability to make appropriate individual treatment decisions.

“2017 ACC/AHA/HRS Guideline for the Evaluation and Management of Patients With Syncope”
http://circ.ahajournals.org/content/early/2017/03/09/CIR.0000000000000499

The Failing Ottawa Heart

Canada! So many rules! The true north strong and free, indeed.

This latest innovation is the Ottawa Heart Failure Risk Scale – which, if you treat it explicitly as titled, is accurate and clinically interesting. However, it also masquerades as a decision rule – upon which it is of lesser standing.

This is a prospective observational derivation of a risk score for “serious adverse events” in an ED population diagnosed with acute heart failure and potential candidates for discharge. Of these 1,100 patients, 170 (15.5%) suffered an SAE – death, myocardial infarction, hospitalization. They used the differences between the groups with and without SAEs to derive a predictive risk score, the elements of which are:

• History of stroke or TIA (1)
• History of intubation for respiratory distress (2)
• Heart rate on ED arrival ≥110 (2)
• Room air SaO2 <90% on EMS or ED arrival (1)
• ECG with acute ischemic changes (2)
• Urea ≥12 mmol/L (1)

This scoring system ultimately provided a prognostic range from 2.8% for a score of zero, up to 89.0% at the top of the scale. This information is – at least within the bounds of generalizability from their study population – interesting from an informational standpoint. However, they then take it to the next level and use this as a potential decision instrument for admission versus discharge – projecting a score ≥2 would decrease admission rates while still maintaining a similar sensitivity for SAEs.
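The additive scoring lends itself to a trivial implementation – a sketch of the six elements listed above (the field names are my own, and the full published scale may include elements beyond those highlighted here):

```python
def ottawa_hf_risk_score(stroke_or_tia: bool,
                         prior_intubation: bool,
                         hr_110_or_more: bool,
                         sao2_below_90: bool,
                         ischemic_ecg: bool,
                         urea_12_or_more: bool) -> int:
    """Sum the point values of the listed Ottawa Heart Failure Risk
    Scale elements (weights as given in the bullet list above)."""
    return (1 * stroke_or_tia
            + 2 * prior_intubation
            + 2 * hr_110_or_more
            + 1 * sao2_below_90
            + 2 * ischemic_ecg
            + 1 * urea_12_or_more)

# A patient with tachycardia and an elevated urea scores 3 —
# above the proposed admission threshold of ≥2.
score = ottawa_hf_risk_score(False, False, True, False, False, True)
print(score)  # → 3
```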

However, the foundational flaw here is the presumption that admission is protective against SAEs – both in this study and in our usual practice. Without a true, prospective validation, we have no evidence this change in practice, with its potential decrease in admissions, improves any of many potential outcome measures. Many of their SAEs may not be preventable, nor would any protection from admission likely be durable out to the end of their 14-day follow-up period. Patients were also managed for up to 12 hours in their Emergency Department before disposition, a difficult prospect for many EDs.

Finally, regardless, the complexity of care management and illness trajectory in heart failure is not a terribly ideal candidate for simplification into a dichotomous rule with just a handful of criteria. There were many univariate differences between the two groups – and that’s simply on the variables they chose to collect. The decision to admit a patient for heart failure is not appropriately distilled into a “rule” – but this prognostic information may yet be of some value.

“Prospective and Explicit Clinical Validation of the Ottawa Heart Failure Risk Scale, With and Without Use of Quantitative NT-proBNP”

http://onlinelibrary.wiley.com/doi/10.1111/acem.13141/abstract

Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interests of detecting a life-threatening malignant arrhythmia, despite such being a rare occurrence. Furthermore, the literature regarding alert fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a sample of 796 chest pain patients receiving cardiac monitoring, these authors validate their previously described rule for avoiding cardiac monitoring: chest pain free and normal or non-specific ECG changes. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Of patients not meeting their rule, only 2.9% suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought be considered just a minimum standard, and there is plenty of room for additional exclusion.

Anecdotally, not only do most of our chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”
http://www.cmaj.ca/content/189/4/E139.full