Troponin Sensitivity Training

High-sensitivity troponins are finally here! The FDA has approved the first one for use in the United States. Articles like this are therefore no longer of purely academic interest – except, well, for the likely very slow percolation of these assays into standard practice.

This is a sort of update from the Advantageous Predictors of Acute Coronary Syndrome Evaluation (APACE) consortium. This consortium is intended to “advance the early diagnosis of [acute myocardial infarction]” – via use of these high-sensitivity assays, for the benefit of their study sponsors, Abbott Laboratories et al. Regardless, this is one of those typical early rule-out studies evaluating patients with possible acute coronary syndrome and symptom onset within 12 hours. Assay performance was evaluated and compared across four different strategies: a 0-hour limit-of-detection strategy, a 0-hour 99th-percentile cut-off, and two 0/1-hour strategies combining the presentation value with the 1-hour delta.
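For readers unfamiliar with the mechanics, the logic of these strategies is simple enough to sketch. The cutoffs below are placeholders of my own invention, for illustration only – the real thresholds are assay-specific and come from the derivation work, not this snippet:

```python
# Illustrative sketch of two hs-troponin rule-out strategies. All numeric
# cutoffs are PLACEHOLDERS for demonstration; actual thresholds are
# assay-specific and defined in the derivation studies.

LOD = 2.0                  # hypothetical limit of detection, ng/L
PRESENTATION_CUTOFF = 5.0  # hypothetical "very low" 0-hour value, ng/L
DELTA_CUTOFF = 2.0         # hypothetical absolute 0-to-1-hour change, ng/L

def rule_out_0h_lod(trop_0h):
    """0-hour limit-of-detection strategy: undetectable -> rule out."""
    return trop_0h < LOD

def rule_out_0_1h(trop_0h, trop_1h):
    """0/1-hour strategy: low presentation value AND a flat 1-hour delta."""
    return trop_0h < PRESENTATION_CUTOFF and abs(trop_1h - trop_0h) < DELTA_CUTOFF

print(rule_out_0_1h(3.0, 4.0))  # True: low and flat -> rule-out eligible
print(rule_out_0_1h(3.0, 9.0))  # False: rising delta -> not ruled out
```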

And, of course, their rule-out strategies work great – they miss only a handful of AMIs, and even those (as documented by their accompanying table of missed AMIs) are mostly tiny, did not undergo any revascularization procedure, and frequently did not receive clinical discharge diagnoses consistent with acute coronary syndrome. There was also a clear time-based element to their rule-out sensitivity, with patients whose chest pain began within two hours of presentation more likely to be missed. But – and this is the same “but” you’ve heard so many times before – this sensitivity comes at the expense of specificity, and any of these assay strategies was effective at ruling out only about half of all ED presentations. Interestingly, at least, their rule-out was durable – 30-day MACE was 0.1% or less, and the sole event was a non-cardiac death.

Is there truly any rush to adopt these assays? There is, I would argue, real value in the additive information they provide regarding myocardial injury. This study and its algorithms, however, demonstrate there remains progress to be made in terms of clinical effectiveness – as obviously far more than just 50% of ED presentations for chest pain ought to be eligible for discharge.

“Direct Comparison of Four Very Early Rule-Out Strategies for Acute Myocardial Infarction Using High-Sensitivity Cardiac Troponin I”
http://circ.ahajournals.org/content/early/2017/03/10/CIRCULATIONAHA.116.025661

Done Fall Out

Syncope! Not much is more frightening to patients – here they are, minding their own business and then … the floor. What caused it? Will it happen again? Sometimes, there is an obvious cause – and that’s where the fun ends.

This is the ACC/AHA guideline for the evaluation of syncope – and, thankfully, it’s quite reasonable. I attribute this mostly (and possibly erroneously) to the fantastic ED syncope guru Ben Sun being on the writing committee. Only a very small part of this document is devoted to the initial evaluation of syncope in the Emergency Department, and its strong recommendations boil down to:

  • Perform a history and physical examination
  • Perform an electrocardiogram
  • Try to determine the cause of syncope, and estimate short- and long-term risk
  • Don’t send people home from the hospital if you identify a serious medical cause

These are all straightforward things we already routinely do as part of our basic evaluation of syncope. The authors go on to clearly state, with weaker recommendations, that no other tests are mandated – and that routine screening bloodwork, imaging, or cardiac testing is likely of no value.

With regard to disposition:

“The disposition decision is complicated by varying resources available for immediate testing, a lack of consensus on acceptable short-term risk of serious outcomes, varying availability and expertise of outpatient diagnostic clinics, and the lack of data demonstrating that hospital-based evaluation improves outcomes.”

Thus, the authors allow for a wide range of possible disposition decisions, ranging from ED observation on a structured protocol to non-specific outpatient management.

The rest of the document provides recommendations more relevant to cardiology management of those with specific medical causes identified, although tables 5, 6, and 7 do a fairly nice job of summarizing some of the risk factors for serious outcomes, along with highlights of the various syncope risk scores. While it doesn’t provide much concrete guidance, it at least does not set any low-value medicolegal precedent limiting your ability to make appropriate individual treatment decisions.

“2017 ACC/AHA/HRS Guideline for the Evaluation and Management of Patients With Syncope”
http://circ.ahajournals.org/content/early/2017/03/09/CIR.0000000000000499

The Failing Ottawa Heart

Canada! So many rules! The true north strong and free, indeed.

This latest innovation is the Ottawa Heart Failure Risk Scale – which, if you treat it strictly as titled (a risk scale), is accurate and clinically interesting. However, it also masquerades as a decision rule – a role in which it stands on much shakier ground.

This is a prospective observational derivation of a risk score for “serious adverse events” in an ED population diagnosed with acute heart failure and considered potential candidates for discharge. Of these 1,100 patients, 170 (15.5%) suffered an SAE – death, myocardial infarction, or hospitalization. The authors used the differences between the groups with and without SAEs to derive a predictive risk score, the elements of which are:

• History of stroke or TIA (1)
• History of intubation for respiratory distress (2)
• Heart rate on ED arrival ≥110 (2)
• Room air SaO2 <90% on EMS or ED arrival (1)
• ECG with acute ischemic changes (2)
• Urea ≥12 mmol/L (1)

This scoring system ultimately provided a prognostic range from a 2.8% risk of SAE for a score of zero, up to 89.0% at the top of the scale. This information is – at least within the bounds of generalizability from their study population – interesting from an informational standpoint. However, the authors then take it to the next level and propose it as a decision instrument for admission versus discharge – projecting that admitting only those with a score ≥2 would decrease admission rates while maintaining similar sensitivity for SAEs.
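To make the arithmetic concrete, here is a minimal sketch of the score using the point values listed above – with only the published endpoint risks attached, since the full score-to-risk table lives in the paper:

```python
# Minimal sketch of the Ottawa Heart Failure Risk Scale arithmetic, using
# the point values listed above. The full score-to-risk mapping is in the
# paper; only the published endpoints (2.8% at zero, 89.0% at the top)
# are referenced here.

def ottawa_hf_score(stroke_or_tia, prior_intubation, hr_gte_110,
                    sao2_lt_90, ischemic_ecg, urea_gte_12):
    """Each argument is a boolean for the corresponding criterion."""
    return (1 * stroke_or_tia + 2 * prior_intubation + 2 * hr_gte_110
            + 1 * sao2_lt_90 + 2 * ischemic_ecg + 1 * urea_gte_12)

# Example: tachycardic and hypoxic on arrival, nothing else.
score = ottawa_hf_score(False, False, True, True, False, False)
print(score)  # 3 -> above the proposed admission threshold of >=2
```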

However, the foundational flaw here is the presumption that admission is protective against SAEs – both in this study and in our usual practice. Without a true prospective validation, we have no evidence this change in practice, and its potential decrease in admissions, improves any of many potential outcome measures. Many of their SAEs may not be preventable, nor would any protection conferred by admission likely remain durable out to the end of their 14-day follow-up period. Patients were also managed for up to 12 hours in their Emergency Department before disposition, a difficult prospect for many EDs.

Finally, regardless, the complexity of care management and illness trajectory in heart failure makes it a poor candidate for simplification into a dichotomous rule with just a handful of criteria. There were many univariate differences between the two groups – and that’s simply on the variables they chose to collect. The decision to admit a patient for heart failure is not appropriately distilled into a “rule” – but this prognostic information may yet be of some value.

“Prospective and Explicit Clinical Validation of the Ottawa Heart Failure Risk Scale, With and Without Use of Quantitative NT-proBNP”
http://onlinelibrary.wiley.com/doi/10.1111/acem.13141/abstract

Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit that the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interest of detecting a life-threatening malignant arrhythmia, despite such events being rare. Furthermore, the literature regarding alarm fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a sample of 796 chest pain patients receiving cardiac monitoring, these authors validated their previously described rule for identifying patients who can forgo monitoring: chest pain free, and an ECG that is normal or shows only non-specific changes. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.
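The rule itself is about as simple as decision instruments get – a sketch of a plain reading of the two criteria:

```python
# The Ottawa Chest Pain Cardiac Monitoring Rule, per a plain reading of
# the two criteria: both must be satisfied to forgo monitoring.

def can_forgo_monitoring(chest_pain_free, ecg_normal_or_nonspecific):
    return chest_pain_free and ecg_normal_or_nonspecific

# In this validation sample, 284 of 796 monitored patients met the rule;
# none suffered an arrhythmia requiring intervention.
print(can_forgo_monitoring(True, True))   # True -> monitoring avoidable
print(can_forgo_monitoring(True, False))  # False -> keep on the monitor
```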

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Of the patients not meeting their rule, only 2.9% suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought to be considered just a minimum standard, and there is plenty of room for additional exclusions.

Anecdotally, not only do most of our chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”
http://www.cmaj.ca/content/189/4/E139.full

Can We Trust Our Computer ECG Overlords?

If your practice is like mine, you see a lot of ECGs from triage. ECGs obtained for abdominal pain, dizziness, numbness, fatigue, rectal pain … and some, I assume, for chest pain. Every one of these ECGs turns into an interruption for review, to ensure no concerning evolving syndrome is missed.

But a great number of these ECGs are read as “Normal” by the computer – and, anecdotally, these reads are nearly universally correct. This raises the very reasonable question of whether a human need be involved at all.

This simple study examines the real-world performance of computerized ECG interpretation – specifically, the Marquette 12SL software. Over a 16-week convenience sample period, 855 triage ECGs were performed, 222 of which were reported as “Normal” by the computer software. These 222 ECGs were all reviewed by a cardiologist, and 13 were ultimately assigned some pathology – all of it mild and non-specific. Two emergency physicians then reviewed these 13 ECGs to determine what actions, if any, might be taken were they presented in a real-world context. One ECG was judged by one EP to be sufficient to move the patient to the next available bed from triage, while the remainder required no acute triage intervention. Retrospectively, the patient with the one actionable ECG was discharged from the ED and had a normal stress test the next day.

The authors conclude the negative predictive value of a “Normal” computer read approaches 99%, which could potentially support changes in practice regarding immediate review of triage ECGs. While these findings have some limitations in generalizability – the specific ECG software, and a relatively small sample – I think they’re on the right track. Interruptions in a multi-tasking setting lead to errors of task resumption, while the likelihood of missing significant, time-sensitive pathology is quite low. I tend to agree this could be a reasonable quality improvement intervention, with prospective monitoring.
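Working through the arithmetic clarifies which denominator supports that claim – a quick check, assuming, as the text implies, that only the single actionable ECG counts as a miss:

```python
# Sanity check of the reported negative predictive value of a computer
# "Normal" read. Counts are from the study; which findings count as
# false negatives is our reading of the text.
normal_reads = 222
any_abnormality = 13  # all mild and non-specific per the cardiologist
actionable = 1        # the single ECG judged to warrant a bed from triage

print(f"NPV, any abnormality: {(normal_reads - any_abnormality) / normal_reads:.1%}")  # 94.1%
print(f"NPV, actionable only: {(normal_reads - actionable) / normal_reads:.1%}")       # 99.5%
```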

“Safety of Computer Interpretation of Normal Triage Electrocardiograms”
https://www.ncbi.nlm.nih.gov/pubmed/27519772

Taking Post-Arrest to the Cath Lab

There has been a fair bit of debate regarding the utility of taking post-arrest patients to cardiac catheterization. Clearly, ST-elevation myocardial infarction should receive intervention – although it can sometimes be challenging to identify on the post-arrest EKG. Much less has been settled regarding the management of those without STEMI.

This is – as is most of the relevant literature – a retrospective review of patients with cardiac arrest, as identified from a multi-center therapeutic hypothermia registry. These authors recorded the location of arrest, previously known coronary artery disease, the initial rhythm as shockable or unshockable, and EKG findings. They defined clinically important CAD by the presence of an intervention following cardiac catheterization, including PCI, stenting, or coronary artery bypass grafting.

Entertainingly, the authors’ hypothesis is that “the incidence of coronary intervention would be uncommon (<5%)” – which, if it truly is their hypothesis, is contradicted by most of their citations, including a meta-analysis citing an overall incidence of CAD in post-arrest patients ranging from 59-71%. Regardless, there were 1,396 patients with known initial rhythms, about two-thirds of which were non-shockable. About 60% of those with shockable rhythms and 20% of those with unshockable rhythms underwent cardiac catheterization. After removing those with obvious STEMI on their EKG, there were 97 patients in the cohort of interest, 24 (24.7%) of whom underwent intervention.

This, therefore, is the “unexpectedly high” incidence of coronary intervention in this non-shockable rhythm cohort without STEMI on EKG. However, as these authors do appropriately note, these data should not specifically inform practice change. The findings in those patients undergoing catheterization are skewed by selection bias, including measured and unmeasured confounders influencing the decision to take patients for potential intervention. In an older population characteristic of a cardiac arrest cohort, some coronary disease is likely on any diagnostic test – and, in this clinical context, intervention would seem much more likely than not. Finally, intervention does not equate to a culprit lesion for the cardiac arrest, further weakening these results as a surrogate for patient-oriented outcomes.

Despite the “surprise” these authors report, they likely overestimate any evidence for benefit in this post-arrest population, and better characterization of specific high-yield circumstances is needed.

“Incidence of coronary intervention in cardiac arrest survivors with non-shockable initial rhythms and no evidence of ST-elevation MI (STEMI)”
https://www.ncbi.nlm.nih.gov/pubmed/27888672

The Chest Pain Decision Instrument Trial

This is a bit of an odd trial. Ostensibly, this is a trial about the evaluation and disposition of low-risk chest pain presenting to the Emergency Department. The authors frame their discussion section by describing their combination of objective risk-stratification and shared decision-making in terms of reducing admission for observation and testing at the index visit.

But that’s not technically what this trial was about. Technically, this was a trial about patient comprehension – the primary outcome is actually the number of questions correctly answered by patients on an immediate post-visit survey. The dual nature of the trial is evident in the power calculation, which starts with: “We estimated that 884 patients would provide 99% power to detect a 16% difference in patient knowledge between decision aid and usual care arms” – an unusual choice of beta and threshold for effect size, amounting to basically one additional question correct on their eight-question survey. The rest of the power calculation, however, makes sense: “… and 90% power to detect a 10% difference in the proportion of patients admitted to an observation unit for cardiac testing.” It appears the trial was powered not to test the primary outcome selected by the patient advocates who helped design it, but to test the secondary outcomes thought important to the clinicians.
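As a rough check on the second half of that calculation: with statsmodels, and assuming a baseline admission proportion of roughly 50% against 40% – an assumption on my part, since the paper’s exact inputs are not restated here – the numbers land in the same neighborhood:

```python
# Rough reproduction of the stated power calculation for the secondary
# outcome (a 10% absolute difference in observation admission). The
# baseline proportions (50% vs 40%) are assumed, not taken from the
# paper, so the result lands near, not exactly at, the stated 90%.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.50, 0.40)  # Cohen's h for 50% vs 40%
power = NormalIndPower().solve_power(effect_size=effect,
                                     nobs1=442,  # 884 patients, 1:1 arms
                                     alpha=0.05, ratio=1.0)
print(f"{power:.2f}")  # ~0.85 under these assumptions
```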

So, it is a little hard to interpret their favorable result with respect to the primary outcome – 4.2 vs. 3.6 questions answered correctly, decision aid vs. usual care. After clinicians spent an extra 1.3 minutes (4.4 vs. 3.1) with patients, showing them a visual aid specific to their condition, I am not surprised patients had better comprehension of their treatment options – and it probably did not require a multi-center trial to prove this.

Then, the crossover between resource utilization and shared decision-making seems potentially troublesome. An idealized version of shared decision-making invites patients to participate in their treatment when there is substantial individual variation in the perceived value of the different risks, benefits, and alternatives. However, I am not certain these patients are being invited to share in a decision between choices of equal value – and the authors seem to express as much in their presentation of the results.

These are all patients without known coronary disease, with normal EKGs, a negative initial cardiac troponin, and considered by treating clinicians to otherwise fall into a “low risk” population. This matches the cohort of interest from Weinstock’s study of patients hospitalized for observation from the Emergency Department – 7,266 patients, none of whom independently suffered a cardiac event while hospitalized. A trial in British Columbia likewise deferred admission for such a cohort in favor of outpatient stress testing. By placing a fair bit of emphasis on their significant secondary finding of a reduction in observation admissions from 52% to 37%, the authors seem to indicate their underlying bias is consistent with this evidence demonstrating the safety of outpatient disposition. In short, it seems to me the authors are not using their decision aid to help patients choose between equally valued clinical pathways, but rather to try to convince more patients to choose to be discharged.

In a sense, it represents offering patients a menu of options on which overtreatment is one of the choices. If a dyspneic patient meets PERC, we don’t offer them a visual aid on which a CTPA is an option – and that shouldn’t be our expectation here, either. These authors have put in tremendous effort over many years integrating many important tools, but the end result feels like a demonstration of a shared decision-making instrument intended to nudge patients into choosing the disposition we think they ought – one we are somehow afraid to simply recommend outright.

“Shared decision making in patients with low risk chest pain: prospective randomized pragmatic trial”
http://www.bmj.com/content/355/bmj.i6165.short

Another Expensive “Miracle”

Coronary artery disease – one of many self-inflicted wounds of Western society – fuels some of the largest pharmaceutical and device blockbusters of our time. Statins, stents, and the entire organization of our health system around STEMI care are all linked to coronary disease.

This JAMA article and its breathless lay coverage focus on a clinical trial for evolocumab (Repatha), one of the new proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors. This trial, featuring evolocumab added to a statin versus a statin alone, evaluated this therapy using one of the most surrogate of surrogate markers: nominal change in percent coronary atheroma volume at 78 weeks.

As the press releases indicate, this trial was a massive success – the $14,000-per-year PCSK9 inhibitor was positive for its primary endpoint. Patients taking just a statin continued to have excellent LDL levels, and their coronary atheroma volume, as measured by intravascular ultrasound, was essentially unchanged. The evolocumab cohort, however, had even better LDL levels and … coronary atheroma volume that was also essentially unchanged. But the difference between +0.05% and -0.95% is statistically significant, and therefore the trial was a success.

There were, of course, in this trial of only 968 patients, no signals of clinically relevant benefit, nor of obvious, reliable harm. Considering the fierce debate over whether statins – ubiquitous and inexpensive – are already overprescribed, I do not see any reason to look forward to this $14,000 drug entering more widespread use.

“Effect of Evolocumab on Progression of Coronary Disease in Statin-Treated Patients: The GLAGOV Randomized Clinical Trial”
http://jamanetwork.com/journals/jama/fullarticle/2584184

The Machine Can Learn

A couple of weeks ago I covered computerized diagnosis via symptom checkers, noting their imperfect accuracy – and gross underperformance relative to crowd-sourced physician knowledge. However, one area that continues to progress is the use of machine learning for outcome prediction.

This paper describes advances in the use of “big data” for prediction of 30-day and 180-day readmissions for heart failure. The authors used an existing data set from the Telemonitoring to Improve Heart Failure Outcomes (Tele-HF) trial as substrate, then applied several statistical and machine-learning models to the data with varying inputs.

There were 236 variables available in the data set for use in prediction, weighted and cleaned to account for missing data. Compared against the C statistic from logistic regression as the baseline, the winner was pretty clearly the random forest. With a baseline 30-day readmission rate of 17.1% and a 180-day readmission rate of 48.9%, the C statistic for the logistic regression model predicting 30-day readmission was 0.533 – basically no predictive skill. The random forest model, however, achieved a C statistic of 0.628 by training on the 180-day data set.
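For the curious, the comparison the authors ran is straightforward to sketch in scikit-learn. This is a generic illustration on synthetic data – not their Tele-HF dataset, and not their exact pipeline:

```python
# Generic illustration of the comparison: logistic regression vs. random
# forest, scored by C statistic (ROC AUC). Synthetic data stands in for
# the 236 Tele-HF variables; this is not the authors' pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Many candidate predictors, few of them informative -- the weak-signal
# setting in which tree ensembles tend to edge out plain regression.
X, y = make_classification(n_samples=2000, n_features=236, n_informative=20,
                           weights=[0.83, 0.17], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=500, random_state=0)):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(type(model).__name__, f"C statistic: {auc:.3f}")
```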

So, it’s reasonable to suggest there are complex and heterogeneous data for which machine learning methods are superior to traditional models. These are, unfortunately, pretty terrible C statistics, almost certainly of very limited use for informing clinical care. As with most decision-support algorithms, I would also be curious to see a comparison against a hypothetical C statistic for clinician gestalt. But for clinical problems with a wide variety of influential factors, these sorts of models will likely become increasingly prevalent.

“Analysis of Machine Learning Techniques for Heart Failure Readmissions”
http://circoutcomes.ahajournals.org/content/early/2016/11/08/CIRCOUTCOMES.116.003039

All Glory to the Triple-Rule-Out

The conclusions of this study are either ludicrous or rather significant; the authors, either daft or prescient. It depends fundamentally on your position regarding the utility of CT coronary angiograms.

This article describes a retrospective review of all the “Triple-Rule-Out” (TRO) angiograms performed at a single center, Thomas Jefferson University Hospital, between 2006 and 2015. There were no specific criteria governing when a TRO was performed but, grossly, the intended population was anyone otherwise being evaluated for an acute coronary syndrome who “was suspected of having additional noncoronary causes of chest pain”.

This “ACS-but-maybe-not” cohort totaled 1,192 patients over the 10-year study period. There were 970 (81.4%) with normal coronary arteries and no significant alternative diagnosis identified. The remainder, according to these authors, had “either a coronary or noncoronary diagnosis that could explain their presentation”, including 139 (11.7%) with moderate or severe coronary artery disease. In a mostly low-risk, troponin-negative population, it may be a stretch to attribute symptoms to that coronary artery disease – but I digress.

The non-coronary diagnoses – the 106 (8.6%) with other findings – range from “important” to “not at all”. There were, at least, a handful of aortic dissections and pulmonary emboli picked up – though we can debate the likelihood these were true positives, given the pretest odds. However, these authors also credit the TRO with a range of sporadic findings as diverse as endocarditis and diastasis of the sternum, plus 24 cases of “aortic aneurysm” deemed important mostly because there were no prior studies for comparison.

The authors then promote TRO scans on the basis of these noncoronary findings – stating that, had a traditional CTCA been performed, many of these diagnoses would likely have been missed. Thus, the paradox. If you are already descending the circles of hell and using CTCA in the Emergency Department – then, yes, it is reasonable to suggest the TRO is a valid extension of the CTCA. Then again, if CTCA in the acute setting is already outside your scope of practice, and the TRO an abomination – carry on as if this study never existed.

“Diagnostic Yield of Triple-Rule-Out CT in an Emergency Setting”
http://www.ncbi.nlm.nih.gov/pubmed/27186867