You’ve Got (Troponin) Mail

It’s tragic, of course, that no one in this generation will understand the epiphany of logging on to America Online and being greeted by its now-synonymous salutation: “You’ve got mail!” But we and future generations may bear witness to the advent of something almost as profoundly uplifting: text-message troponin results.

These authors conceived and describe a fairly simple intervention in which test results – in this case, troponin – were pushed to clinicians’ phones as text messages. In a pilot and cluster-randomized trial with 1,105 patients in the final analysis, these authors find the median interval from troponin result to disposition decision was 94 minutes in the control group, compared with 68 minutes in the intervention cohort. However, a smaller difference in median overall length of stay did not reach statistical significance.

Now, I like this idea – even though this is clearly not the study showing generalizable definitive benefit. For many patient encounters, there is some readily identifiable bottleneck result of greatest importance for disposition. If a reasonable, curated list of these results is pushed to a mobile device, there is an obvious time savings with regard to manually pulling these results from the electronic health record.

In this study, however, the median LOS for these patients was over five hours – and the median LOS for all patients receiving at least one troponin was nearly 7.5 hours. The relative effect size, then, is really quite small. Next, there are always concerns relating to interruptions and their unintended consequences for cognitive burden. Finally, it logically follows that, if this text message derives some of its beneficial effect from altering task priorities, some other process in the Emergency Department is having its completion time increased.

I expect, if implemented in a typically efficient ED, the net result of any improvement might only be a few minutes saved across all encounter types – but multiplied across thousands of patient visits for chest pain, it’s still worth considering.

“Push-Alert Notification of Troponin Results to Physician Smartphones Reduces the Time to Discharge Emergency Department Patients: A Randomized Controlled Trial”
http://www.annemergmed.com/article/S0196-0644(17)30317-7/abstract

Use HEART, Or Whatever

The HEART score receives a lot of favorable press these days. It generally has face validity. It is probably superior in terms of discriminatory ability versus our venerable candidates such as TIMI and GRACE. It has been well-evaluated in multiple practice settings with reliable predictive value.

But, the final question for a decision instrument distilling a complex clinical scenario down to a five-question substrate for guiding evaluation and disposition remains: does it safely improve practice?

The answer is no – if you’re Dutch, in these Dutch hospitals.

In a stepped-wedge, cluster-randomized trial, these authors evaluated the effect of using HEART on patient outcomes and healthcare resource utilization. The three HEART risk categories carry general practice recommendations: low risk (0-3) suggests early discharge, intermediate risk (4-6) noninvasive testing, and high risk (7-10) an early invasive strategy. The comparator, “usual care”, was, well, as usual.
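
For concreteness, here is a minimal sketch of the category-to-recommendation banding described above. This is my own illustration, not the trial’s protocol code; it assumes the component scoring (History, ECG, Age, Risk factors, Troponin, each 0-2) has already been tallied.

```python
# Minimal sketch of the HEART banding described above (illustration only,
# not the trial protocol). Assumes a pre-computed total score from the five
# components: History, ECG, Age, Risk factors, Troponin, each scored 0-2.

def heart_recommendation(total_score: int) -> str:
    """Map a HEART total (0-10) to the general practice recommendation."""
    if not 0 <= total_score <= 10:
        raise ValueError("HEART total must be between 0 and 10")
    if total_score <= 3:
        return "low risk: early discharge"
    if total_score <= 6:
        return "intermediate risk: noninvasive testing"
    return "high risk: early invasive strategy"

print(heart_recommendation(2))  # "low risk: early discharge"
```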

With two cohorts comprising approximately 1,800 patients each, there were probably no reliable differences in care or outcomes demonstrated. The HEART low-risk cohort had a 2.0% 30-day incidence of MACE, which is similar to the safety profile described in other studies. However, the real goal of this evaluation was to determine acceptability and impact on resource utilization – and those results are decidedly mixed. Similar rates of early discharge from the ED, ED observation, inpatient admission, and downstream outpatient utilization were observed between the HEART cohort and usual care.

But, this answer from above – no impact on practice – is argued by these authors to be mostly related to non-adherence to the protocol recommendations. Most importantly, they note nearly a third of their low-risk patients were kept for prolonged ED or chest pain unit observation, and a handful more were admitted. The authors suggest there may be room for improvement in resource utilization, but they encountered entrenched cultural practice barriers.

This study was conducted between July 2013 and August 2014 – a long time ago, before most had heard of HEART. It is reasonable to suggest clinicians would now be more comfortable using this score for early discharge from the Emergency Department than during the trial period. It is probably also reasonable to suggest a more robust cultural effort backing practice change would improve adherence to recommendations – a collective departmental agreement associated with educational initiatives. Finally, usual care entailed early discharge of nearly 50% of all patients with chest pain, so your local baseline will affect whether a HEART-based protocol demonstrates improvement.

While the results of this trial are generally negative, what we see here is probably the floor for the effect of HEART on practice. At a minimum, it is as safe as advertised, and it probably has room to demonstrate more robust beneficial effects on practice.

“Effect of Using the HEART Score in Patients With Chest Pain in the Emergency Department”

https://www.ncbi.nlm.nih.gov/pubmed/28437795

Troponin Sensitivity Training

High-sensitivity troponins are finally here! The FDA has approved the first one for use in the United States. Articles like this, then, are no longer of purely academic interest – except, well, for the likely very slow percolation of these assays into standard practice.

This is a sort of update from the Advantageous Predictors of Acute Coronary Syndrome Evaluation (APACE) consortium. This consortium is intended to “advance the early diagnosis of [acute myocardial infarction]” – via use of these high-sensitivity assays, for the benefit of their study sponsors, Abbott Laboratories et al. Regardless, this is one of those typical early rule-out studies evaluating patients with possible acute coronary syndrome and symptom onset within 12 hours. Assay performance was evaluated and compared across four different strategies: a 0-hour limit-of-detection strategy, a 0-hour 99th-percentile cut-off, and two 0/1-hour strategies combining the presentation value with the 1-hour delta.
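
For orientation, the general shape of these four strategies looks something like the sketch below; the numeric thresholds are placeholders of my own choosing, not the cut-offs validated in this paper.

```python
# Rough sketch of the four rule-out strategy shapes compared in this study.
# All thresholds (ng/L) are illustrative placeholders, NOT the validated cut-offs.

LOD_PLACEHOLDER = 2.0     # stand-in for the assay limit of detection
P99_PLACEHOLDER = 26.0    # stand-in for the 99th-percentile cut-off
ZERO_HOUR_CUT = 5.0       # stand-in 0-hour cut-off for the 0/1-hour strategies
DELTA_CUT = 2.0           # stand-in 0-to-1-hour absolute change cut-off

def rule_out_0h_limit_of_detection(tn_0h: float) -> bool:
    return tn_0h < LOD_PLACEHOLDER

def rule_out_0h_99th_percentile(tn_0h: float) -> bool:
    return tn_0h < P99_PLACEHOLDER

def rule_out_0_1h(tn_0h: float, tn_1h: float) -> bool:
    # The two published 0/1-hour variants pair a low presentation value with a
    # small absolute delta; they differ only in the specific cut-offs used.
    return tn_0h < ZERO_HOUR_CUT and abs(tn_1h - tn_0h) < DELTA_CUT
```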

And, of course, their rule-out strategies work great – they miss only a handful of AMIs, and even those (as documented by their accompanying table of missed AMIs) are mostly tiny, did not undergo any revascularization procedure, and frequently did not receive clinical discharge diagnoses consistent with acute coronary syndrome. There was also a clear time-based element to their rule-out sensitivity, with patients whose chest pain began within two hours of presentation more likely to be missed. But – and this is the same “but” you’ve heard so many times before – their sensitivity comes at the expense of specificity, and any of these assay strategies was effective at ruling out only about half of all ED presentations. Interestingly, at least, their rule-out was durable – 30-day MACE was 0.1% or less, and the sole event was a non-cardiac death.

Is there truly any rush to adopt these assays? I would reasonably argue there must be value in the additive information provided regarding myocardial injury. This study and its algorithms, however, demonstrate there remains progress to be made in terms of clinical effectiveness – as obviously far more than just 50% of ED presentations for chest pain ought be eligible for discharge.

“Direct Comparison of Four Very Early Rule-Out Strategies for Acute Myocardial Infarction Using High-Sensitivity Cardiac Troponin I”
http://circ.ahajournals.org/content/early/2017/03/10/CIRCULATIONAHA.116.025661

Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interests of detecting a life-threatening malignant arrhythmia, despite such being a rare occurrence. Furthermore, the literature regarding alert fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a sample of 796 chest pain patients receiving cardiac monitoring, these authors validate their previously described rule for avoiding cardiac monitoring: currently chest-pain-free, with a normal ECG or only non-specific ECG changes. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.
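
The rule itself is almost trivially simple; as a sketch (my own rendering, not the authors’ code):

```python
# Sketch of the rule as described: a patient who is currently chest-pain-free
# AND has a normal or only non-specifically abnormal ECG can forgo monitoring.

def can_forgo_monitoring(chest_pain_free: bool, ecg_normal_or_nonspecific: bool) -> bool:
    return chest_pain_free and ecg_normal_or_nonspecific
```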

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Of the patients not meeting their rule, only 2.9% suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought be considered just a minimum standard, and there is plenty of room for additional exclusions.

Anecdotally, not only do most of our chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”
http://www.cmaj.ca/content/189/4/E139.full

Can We Trust Our Computer ECG Overlords?

If your practice is like my practice, you see a lot of ECGs from triage. ECGs obtained for abdominal pain, dizziness, numbness, fatigue, rectal pain … and some, I assume, are for chest pain. Every one of these ECGs turns into an interruption for review, to ensure no concerning evolving syndrome is missed.

But, a great number of these ECGs are read as “Normal” by the computer – and, anecdotally, these reads are nearly universally correct. This raises the very reasonable question of whether a human need be involved at all.

This simple study tries to examine the real-world performance of computer ECG reading, specifically, the Marquette 12SL software. Over a 16-week convenience sample period, 855 triage ECGs were performed, 222 of which were reported as “Normal” by the computer software. These 222 ECGs were all reviewed by a cardiologist, and 13 were ultimately assigned some pathology – all of which were mild, non-specific abnormalities. Two Emergency Physicians then reviewed these 13 ECGs to determine what, if any, actions might be taken if presented to them in a real-world context. One of these ECGs was determined by one EP to be sufficient to put the patient in the next available bed from triage, while the remainder required no acute triage intervention. Retrospectively, the patient judged to have an actionable ECG was discharged from the ED and had a normal stress test the next day.

The authors conclude the negative predictive value of a “Normal” computer read approaches 99%, which could potentially lead to changes in practice regarding immediate review of triage ECGs. While these findings have some limitations in generalizability regarding the specific ECG software, as well as a relatively small sample, I think they’re on the right track. Interruptions in a multi-tasking setting lead to errors of task resumption, while the likelihood of significant time-sensitive pathology being missed is quite low. I tend to agree this could be a reasonable quality improvement intervention, with prospective monitoring.
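
For what it’s worth, the exact figure depends on what counts as a miss; a back-of-the-envelope calculation from the numbers above (mine, not the authors’ analysis):

```python
# Back-of-the-envelope NPV from the reported counts (not the authors' exact analysis).
normal_reads = 222
any_cardiologist_abnormality = 13   # all mild, non-specific
actionable_at_triage = 1            # per the two-EP review

npv_any_abnormality = (normal_reads - any_cardiologist_abnormality) / normal_reads
npv_actionable = (normal_reads - actionable_at_triage) / normal_reads

print(f"{npv_any_abnormality:.1%}")  # ~94.1%, counting any abnormality as a miss
print(f"{npv_actionable:.1%}")       # ~99.5%, counting only the actionable ECG as a miss
```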

“Safety of Computer Interpretation of Normal Triage Electrocardiograms”
https://www.ncbi.nlm.nih.gov/pubmed/27519772

The Chest Pain Decision Instrument Trial

This is a bit of an odd trial. Ostensibly, this is a trial about the evaluation and disposition of low-risk chest pain presenting to the Emergency Department. The authors frame their discussion section by describing their combination of objective risk-stratification and shared decision-making in terms of reducing admission for observation and testing at the index visit.

But, that’s not technically what this trial was about. Technically, this was a trial about patient comprehension – the primary outcome is actually the number of questions correctly answered by patients on an immediate post-visit survey. The dual nature of their trial is evident in their power calculation, which starts with: “We estimated that 884 patients would provide 99% power to detect a 16% difference in patient knowledge between decision aid and usual care arms” – an unusual choice of beta and threshold for effect size, amounting to basically one additional question correct on their eight-question survey. The rest of their power calculation, however, makes more sense: “… and 90% power to detect a 10% difference in the proportion of patients admitted to an observation unit for cardiac testing.” It appears the trial was powered not around the primary outcome selected by the patient advocates who helped design it, but in actuality around the secondary outcomes thought important to the clinicians.

So, it is a little hard to interpret their favorable result with respect to the primary outcome – 3.6 vs 4.2 questions answered correctly. After clinicians spent an extra 1.3 minutes (4.4 vs 3.1) with patients showing them a visual aid specific to their condition, I am not surprised patients had better comprehension of their treatment options – and they probably did not require a multi-center trial to prove this.

Then, the crossover between resource utilization and shared decision-making seems potentially troublesome. An idealized version of shared decision-making allows patients to participate in their treatment when there is substantial individual variation between the perceived value of different risks, benefits, and alternatives. However, I am not certain these patients are being invited to share in a decision between choices of equal value – and the authors seem to express this through their presentation of the results.

These are all patients without known coronary disease, normal EKGs, a negative initial cardiac troponin, and considered by treating clinicians to otherwise fall into a “low risk” population. This population matches the cohort of interest from Weinstock’s study of patients hospitalized for observation from the Emergency Department, 7,266 patients, none of whom independently suffered a cardiac event while hospitalized. A trial in British Columbia likewise deferred admission for a cohort of such patients in favor of outpatient stress testing. By placing a fair bit of emphasis on their significant secondary finding of a reduction in observation admission from 52% to 37%, the authors seem to indicate their underlying bias is consistent with the evidence demonstrating the safety of outpatient disposition in this cohort. In short, it seems to me the authors are not using their decision aid to help patients choose between equally valued clinical pathways, but rather to try to convince more patients to choose to be discharged.

In a sense, it represents offering patients a menu of options on which overtreatment is one of the choices. If a dyspneic patient meets PERC, we don’t offer them a visual aid where a CTPA is an option – and that shouldn’t be our expectation here, either. These authors have put in tremendous effort over many years to integrate many important tools, but it feels like the end result is a demonstration of a shared decision-making instrument intended to nudge patients into choosing the disposition we think they ought, but are somehow afraid to tell them outright.

“Shared decision making in patients with low risk chest pain: prospective randomized pragmatic trial”
http://www.bmj.com/content/355/bmj.i6165.short

Don’t CTPA With Your Gut Alone

Many institutions are starting to see roll-out of some sort of clinical decision-support for imaging utilization. Whether it be NEXUS, Canadian Head CT, or Wells for PE, there is plenty of literature documenting improved yield following implementation.

This retrospective evaluation looks at what happens when you don’t obey your new robot overlords – and perform CTPA for pulmonary embolism outside the guideline-recommended pathway. These authors looked specifically at non-compliance at the low end: patients with a Wells score ≤4 whose CTPA was performed either with no D-dimer ordered or despite a normal D-dimer.
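
As a sketch of the low-end compliance logic at issue (my own rendering; the field names and structure are illustrative, not the actual clinical decision-support rules):

```python
# Sketch of the low-end compliance check described above. Field names and the
# notion of a "positive" D-dimer are illustrative, not the study's exact CDS logic.

from typing import Optional

def ctpa_order_is_guideline_concordant(wells_score: float,
                                        d_dimer_positive: Optional[bool]) -> bool:
    """Return True if a CTPA order follows the Wells/D-dimer pathway."""
    if wells_score > 4:
        return True                 # "PE likely": imaging without a D-dimer is concordant
    # "PE unlikely" (Wells <= 4): a D-dimer must be ordered AND be positive.
    # None (not ordered) or False (normal) -> non-concordant order.
    return d_dimer_positive is True
```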

During their 1.5-year review period, there were 2,993 examinations, of which 589 fell out as non-compliant. Most of these – 563 – were low-risk by Wells with the D-dimer omitted. Yield for these was 4.4% positivity, compared with 11.2% for exams ordered following the guidelines. This is probably even a high-end estimate of yield, because it includes 8 (1.4%) patients who had subsegmental or indeterminate PEs but were ultimately anticoagulated, some of whom were undoubtedly false positives. Additionally, none of the 26 patients who were low-risk with a normal D-dimer were diagnosed with PE.

Now, the Wells criteria are just one tool to help reinforce gestalt for PE, and it is a simple rule that does not incorporate all the various factors with positive and negative likelihood ratios for PE. That said, this study should reinforce that low-risk patients should mostly be given the chance to avoid imaging, and a D-dimer can be used appropriately to rule-out PE in those where PE is a real, but unlikely, consideration.

“Yield of CT Pulmonary angiography in the emergency Department When Providers Override evidence-based clinical Decision support”
https://www.ncbi.nlm.nih.gov/pubmed/27689922

All Glory to the Triple-Rule-Out

The conclusions of this study are either ludicrous, or rather significant; the authors are either daft, or prescient. It depends fundamentally on your position regarding the utility of CT coronary angiograms.

This article describes a retrospective review of all the “Triple-Rule-Out” angiograms performed at a single center, Thomas Jefferson University Hospital, between 2006 and 2015. There were no specific circumstances under which the TRO was performed but, grossly, the intended population was anyone otherwise being evaluated for an acute coronary syndrome who “was suspected of having additional noncoronary causes of chest pain”.

This “ACS-but-maybe-not” cohort totaled 1,192 patients over the 10-year study period. There were 970 (81.4%) with normal coronary arteries and no significant alternative diagnosis identified. The remainder, apparently, per these authors, had “either a coronary or noncoronary diagnosis that could explain their presentation”, including 139 (11.7%) with moderate or severe coronary artery disease. In a mostly low-risk, troponin-negative population, it may be a stretch to attribute their symptoms to the coronary artery disease – but I digress.

The non-coronary diagnoses, found in the 106 (8.6%) with other findings, range from “important” to “not at all”. There were, at least, a handful of aortic dissections and pulmonary emboli picked up – though we can debate the likelihood of true positives based on pretest odds. However, these authors also credit the TRO with a range of sporadic findings as diverse as endocarditis, diastasis of the sternum, and 24 cases of “aortic aneurysm”, which were deemed important mostly because there were no priors for comparison.

The authors then promote TRO scans based on these noncoronary findings – stating that, had a traditional CTCA been performed instead, many of these diagnoses would likely have been missed. Thus, the paradox. If you are already descending the circles of hell and using CTCA in the Emergency Department, then, yes, it is reasonable to suggest the TRO is a valid extension of the CTCA. Then again, if CTCA in the acute setting is already outside your scope of practice, and TRO an abomination – carry on as if this study never existed.

“Diagnostic Yield of Triple-Rule-Out CT in an Emergency Setting”
http://www.ncbi.nlm.nih.gov/pubmed/27186867

The High-Sensitivity Troponin Ennui

They’re coming. It’s inevitable. They have yet to be approved in the United States, but every year the news is the same: they’re coming.

High-sensitivity troponins have been both lauded and mocked from various perspectives. The literature is replete with examples of expedited rule-outs in the Emergency Department owing to their improved lower limit of detection for myocardial injury. However, every study touting the benefits of improved sensitivity has acknowledged, begrudgingly or worse, the correspondingly diminished specificity.

This, then, is a randomized trial of reporting either a conventional troponin assay result or a high-sensitivity troponin assay result, with a multitude of patient-oriented short- and long-term outcomes measured. The specific assays used here were either a c-TnT with a threshold of detection of 30 ng/L or an hs-TnT with a threshold of detection of 3 ng/L. Clinicians caring for patients were randomized to make care decisions based on one result, without knowledge of the other.

For all the various propaganda for and against high-sensitivity troponins, this trial is highly anticlimactic. There were, essentially, no changes in physician behavior resulting from the additional information provided by the more sensitive assay. No fewer patients were admitted, similar numbers of ultimate downstream tests occurred, and there were no reliable differences in long-term cardiac or combined endpoint outcomes.

The only outcome of note is probably consistent with what we already knew: any circulating troponin portends worse outcomes. This may be most helpful in directing the long-term medical management of those whose troponin levels were previously undetectable with a conventional assay; these patients clearly do not have the same virtually-zero risk as those whose troponin remains undetectable even on the high-sensitivity assay. Indeed, troponin levels alone were a better predictor of long-term outcomes than the Heart Foundation Risk Stratification.

Judd Hollander has summed it up elsewhere, in both his most concise and his much more verbose terms.

“Randomized Comparison of High-Sensitivity Troponin Reporting in Undifferentiated Chest Pain Assessment”
http://circoutcomes.ahajournals.org/content/early/2016/08/09/CIRCOUTCOMES.115.002488.abstract

Perpetuating the Flawed Approach to Chest Pain

Everyone has their favored chest pain accelerated diagnostic risk-stratification algorithm or pathway these days. TIMI, HEART, ADAPT, MACS, Vancouver, EDACS – the list goes on and on. What has become painfully clear from this latest article, however, is that this approach is fundamentally flawed.

This is a prospective effectiveness trial comparing ADAPT to EDACS in the New Zealand population. Each “chest pain rule-out” was randomized to either the ADAPT pathway – using modified TIMI, ECG, and 0- and 2-hour troponins – or the EDACS pathway – which uses its own unique scoring system, plus ECG and 0- and 2-hour troponins. The ADAPT pathway classified 30.8% of these patients as “low risk”, while EDACS classified 41.6% as such. Despite this, their primary outcome – patients discharged from the ED within 6 hours – non-significantly favored the ADAPT group, 34.4% vs 32.3%.

To me, this represents a few things.

We still have an irrational, cultural fear of chest pain. Only 11.6% of their total cohort had STEMI or NSTEMI, and another 5.7% received a diagnosis of “unstable angina”. Thus, potentially greater than 50% of patients were still hospitalized unnecessarily. Furthermore, this cultural fear of chest pain was strong enough to prevent acceptance of the more-aggressive EDACS decision instrument being tested in this study. A full 15% of low-risk patients by the EDACS instrument failed to be discharged within 6 hours, despite their evaluation being complete following 2-hour troponin testing.

But, even these observations are a digression from the core hypothesis: ADPs are a flawed approach. Poor outcomes are such a rarity, and so difficult to predict, that our thought process ought be predicated on a foundation that most patients will do well regardless, and only the highest-risk should stay in the hospital. Our decision-making should probably be broken down into three steps (see the sketch after this list):

  • Does this patient have STEMI/NSTEMI/true UA?  This is the domain of inquiry into high-sensitivity troponin assays.
  • Does the patient need any provocative testing at all?  I.e., the “No Objective Testing Rule”.
  • Finally, are there “red flag” clinical features that preclude outpatient provocative testing?  The handful of patients with concerning EKG changes, crescendo symptoms, or other high-risk factors fall into this category.
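
A loose sketch of that sequence, purely to illustrate the ordering; the predicates are stand-ins for clinical judgment and validated rules, not criteria from any of the studies discussed here.

```python
# Loose sketch of the three-step approach above. The boolean inputs are stand-ins
# for clinical judgment and validated rules, not criteria from any specific study.

def chest_pain_disposition(has_acute_mi_or_true_ua: bool,
                           meets_no_objective_testing_rule: bool,
                           has_red_flags: bool) -> str:
    if has_acute_mi_or_true_ua:              # step 1: the domain of hs-troponin assays
        return "admit and treat as ACS"
    if meets_no_objective_testing_rule:      # step 2: no provocative testing needed
        return "discharge, no further testing"
    if has_red_flags:                        # step 3: concerning ECG, crescendo symptoms, etc.
        return "observe or expedite inpatient evaluation"
    return "discharge with outpatient provocative testing"
```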

If we were doing chest pain close to correctly, the numbers from this article would be flipped – rather than ~30% being discharged, we ought to be discharging ~70%.

“Effectiveness of EDACS Versus ADAPT Accelerated Diagnostic Pathways for Chest Pain: A Pragmatic Randomized Controlled Trial Embedded Within Practice”