Heart Failure, Informatics, and The Future

Studies like these are a window into the future of medicine – electronic health records beget clinical decision-support tools that allow highly complex risk-stratification instruments to guide clinical practice.  Tools like NEXUS will wither on the vine as oversimplifications of complex clinical decisions – oversimplifications that were needed in a pre-EHR era when decision instruments had to be memorized.

This study is a prospective observational validation of the “Acute Heart Failure Index” rule – derived in Pittsburgh, applied at Columbia.  The AHFI branch points for risk stratification are…best described by the extraordinarily complex flow diagram in the paper itself.

Essentially, research assistants in the ED applied an electronic version of this tool to all patients given a diagnosis of decompensated heart failure by the Emergency Physician – and then followed them for the primary outcomes of death or readmission within 30 days.  In the end, in their small sample, 10% of the low-risk population met the combined endpoint, while 30.2% of the high-risk population did.  Neither group had a very high mortality – most of the difference between groups comes from readmissions within 30 days.

So, what makes this study important isn’t the AHFI itself, or that further research might reasonably validate this rule as an aid to clinical decision-making – it’s the forward progression of using CDS within the EHR to synthesize complex medical data into potentially meaningful clinical guidance.

“Validating the acute heart failure index for patients presenting to the emergency department with decompensated heart failure”
http://www.ncbi.nlm.nih.gov/pubmed/22158534

Cardiology Corner – More Brugada Tidbits

Most physicians are aware of the Brugada Syndrome cardiac repolarization phenotypes – the most recognizable being Type 1, the “coved” type.

Type 2 and Type 3 patterns, however, are essentially indistinguishable from an incomplete right bundle branch block with ST-segment elevation and a positive T-wave.  In this small case series, the authors took 38 patients referred for ajmaline provocation testing and compared their baseline ECGs.  Of the 14 patients who converted to Type 1 following ajmaline infusion, the baseline angles of the r′ wave differed significantly – with an alpha angle cut-off of 50 degrees and a beta angle cut-off of 58 degrees best discriminating the converters.
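
For the geometrically inclined, the angle measurement itself is simple trigonometry on the standard ECG grid (both axes in mm at usual calibration).  A minimal sketch in Python, assuming you’ve digitized three points off the tracing – the helper function is mine, not the paper’s method:

    import math

    def angle_at(apex, p1, p2):
        # Angle (degrees) at 'apex' between the rays toward p1 and p2.
        # Points are (x_mm, y_mm) pairs read off the ECG grid.
        v1 = (p1[0] - apex[0], p1[1] - apex[1])
        v2 = (p2[0] - apex[0], p2[1] - apex[1])
        cos_theta = (v1[0] * v2[0] + v1[1] * v2[1]) / \
                    (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(cos_theta))

    # Beta angle: apex near the r' peak, rays along the r' downslope and the
    # S-wave upslope; values past the 58-degree cut-off predicted conversion
    # to Type 1 following ajmaline in this series.
    print(angle_at((0, 5), (-2, 0), (3, 0)))  # ~52.8 degrees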

A little esoteric, but fascinating.

“New Electrocardiographic Criteria for Discriminating Between Brugada Types 2 and 3 Patterns and Incomplete Right Bundle Branch Block”
http://www.ncbi.nlm.nih.gov/pubmed/22093505

Yet Another Highly Sensitive Troponin – In JAMA

…peddling the same tired phenomenon of magical thinking regarding the diagnostic miracle of highly sensitive troponins.  However, this one is different because it’s been picked up by the AP, CBS News, Forbes, etc. saying: “Doctors are buzzing over a new blood test that might rule out a heart attack earlier than ever before” and other such insanity.  Yes, our hearts are in atrial flutter around the water cooler about a new assay that changes sensitivity from 79.4% to 82.3% at hour 0 and 94.0% to 98.2% at hour 3.

Unless you actually read the article.

Somehow, contrary to every other high-sensitivity troponin study, this particular highly-sensitive troponin had increased specificity as well – which simply doesn’t make sense.  If you’re testing for the presence of the exact same myocardial strain/necrosis byproduct as a conventional assay, it is absolutely inevitable that you will detect a greater number of >99th percentile values in situations not reflective of acute coronary syndrome.  The only way to increase both sensitivity and specificity is to measure something entirely different.
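
To make that concrete, a toy simulation – entirely invented numbers, nothing to do with this assay: when a single analyte is measured and only the decision threshold moves, sensitivity and specificity necessarily move in opposite directions.

    import random
    random.seed(1)

    # Invented, overlapping troponin distributions (ng/L) -- the overlap is
    # precisely why one cutoff cannot improve both test characteristics.
    acs = [random.gauss(60, 25) for _ in range(10000)]
    no_acs = [random.gauss(20, 15) for _ in range(10000)]

    for cutoff in (10, 20, 40):
        sens = sum(v > cutoff for v in acs) / len(acs)
        spec = sum(v <= cutoff for v in no_acs) / len(no_acs)
        print(f"cutoff {cutoff} ng/L: sensitivity {sens:.1%}, specificity {spec:.1%}")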

Or, if it suits your study aims, you can manipulate the outcomes on the back end.  In this study, the final diagnosis of ACS “was adjudicated by 2 independent cardiologists” whose diagnostic acumen is enhanced by financial support from Brahms AG, Abbott Diagnostics, St Jude Medical, Actavis, Terumo, AstraZeneca, Novartis, Sanofi-Aventis, Roche Diagnostics, and Siemens.

I am additionally not impressed by their results reporting – sensitivity and specificity, followed by the largely irrelevant positive and negative predictive values.  Since PPV and NPV are determined by the prevalence of disease in their cohort, they’re giving us numbers that are potentially not externally valid.  Rather, they should be reporting positive and negative likelihood ratios – relatively cognitively unwieldy, but at least not misleading in the way the conceptually facile PPV and NPV can be.
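
A quick illustration of the distinction, with made-up test characteristics rather than the paper’s: the likelihood ratios are fixed properties of the assay, while the PPV swings wildly with prevalence.

    def ppv(sens, spec, prev):
        # Positive predictive value via Bayes: TP / (TP + FP).
        true_pos = sens * prev
        false_pos = (1 - spec) * (1 - prev)
        return true_pos / (true_pos + false_pos)

    sens, spec = 0.982, 0.90                    # invented figures
    print("+LR:", round(sens / (1 - spec), 1))  # 9.8 -- prevalence-independent
    print("-LR:", round((1 - sens) / spec, 3))  # 0.02 -- prevalence-independent
    for prev in (0.05, 0.20, 0.50):
        print(f"prevalence {prev:.0%}: PPV {ppv(sens, spec, prev):.1%}")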

And this is from JAMA.  Oi.

“Serial Changes in Highly Sensitive Troponin I Assay and Early Diagnosis of Myocardial Infarction”

How Frequently Is The Cath Lab Cancelled?

In North Carolina – a fair bit, actually.

This is a 14-hospital registry of cardiac catheterization laboratory activations, for which the authors retrospectively evaluated how many were subsequently cancelled.  They don’t delve into a great deal of detail regarding the specific findings that accounted for each cancellation – they simply observe the broad categories.

Of all cath lab activations, 15% were judged “inappropriate”, with the gold standard being the consulting cardiologist’s opinion.  Of the cancellations, 40% were based on the EMS ECG, 31% on the ED ECG, and the remainder were “not cath lab candidates”.  The authors’ main focus in their conclusion is the difference between EMS ECG and ED ECG cancellations attributable to ECG reinterpretation following activation.

What’s more interesting from the paper, however, is when they break it down by the precise cohorts of activation and arrival – noting that 24.7% of EMS activations were subsequently judged inappropriate.  It is also interesting that 13% of non-PCI-center activations were inappropriate vs. 8% of PCI-center activations.  Reading between the lines, there’s probably some experiential component to the differences in activation rates, but this study doesn’t specifically look at volume and training.

“Rates of Cardiac Catheterization Cancelation for ST Elevation Myocardial Infarction after Activation by Emergency Medical Services or Emergency Physicians: Results from the North Carolina Catheterization Laboratory Activation Registry (CLAR)”
http://www.ncbi.nlm.nih.gov/pubmed/22147904

High-Sensitivity Troponin Dead End

Another article trying to work the unworkable – the balance between sensitivity and specificity.

From New Zealand, an attempt to evaluate the Roche Laboratories hsTnT assay in the interest of performing accelerated rule-outs in the ED – looking at any combination of initial value, 2-hour value, delta between the 0- and 2-hour values, etc.  And, essentially, any strategy you choose is wrong.

On one hand, you can get up to 91.4% specificity for their gold standard of AMI by requiring an hsTnT >14 ng/L and a 20% delta change at 2 hours – but your sensitivity drops to 72%.  Conversely, you can reach a sensitivity of 98.8% – which is the point of these hsTnT testing strategies – but your specificity drops to 56.4%.  Unless you’re doing something intelligent with all those false positives that isn’t harmful, expensive, or invasive, the costs of zero-miss are, once again, too high.
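
As a sketch of the sort of strategy under test – the 14 ng/L cutoff and 20% delta come from the paper, but the function and its framing are mine:

    def rules_in_ami(tnt_0h_ng_l, tnt_2h_ng_l):
        # Specificity-leaning strategy: positive only if the 2-hour hsTnT
        # exceeds the 14 ng/L cutoff AND the 0-to-2-hour delta is >=20%.
        if tnt_2h_ng_l <= 14.0:
            return False
        if tnt_0h_ng_l <= 0:
            return True  # undetectable baseline; any rise past cutoff counts
        return abs(tnt_2h_ng_l - tnt_0h_ng_l) / tnt_0h_ng_l >= 0.20

    print(rules_in_ami(10.0, 16.0))  # True: above cutoff with a 60% rise
    print(rules_in_ami(15.0, 16.0))  # False: above cutoff, but <20% delta
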
“High-sensitivity troponin T for early rule-out of myocardial infarction in recent onset chest pain”

It’s Another Chest Pain Prediction Rule!

Yet again, the insanity of the race to a zero-miss culture funds another chest pain discharge prediction rule.  In fact, the most telling part of this paper comes at the very end, when they compare the chest pain admission rates of the Canadian hospitals in this article to the U.S. hospital – 18% and 20% in Canada compared with 96% in the U.S. (combined ED observation status and inpatient).  The gulf between those numbers is insane – and I’m sure people could easily debate which side of them is preferable to be on.

In any event, the study is a prospective, observational data-gathering study of 64 variables related to the presentation of chest pain – some objective, some historical.  It’s an interesting read – in part because the inter-observer kappa for many of the historical variables was so terrible they weren’t even usable.  After collecting all their data, they performed 30-day telephone follow-up or vital-records review to evaluate the combined endpoint of death, myocardial infarction, or revascularization.

Via the magic of recursive partitioning, patients with no new EKG changes, a negative initial troponin, no history of CAD, atypical pain, and age less than 40 years made up a 7.1% slice of the study population with zero 30-day outcomes.  Adding a second negative troponin six hours later for the 41-50 year group yields another 11.2% of patients with zero outcomes.  So, a facility that admits 96% of its patients could potentially reduce admissions – but the rule might have less utility in Canada.
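
Rendered as Python – my simplification of the rule as summarized above, not the authors’ published instrument:

    def low_risk_for_30_day_event(age, new_ekg_changes, initial_troponin_negative,
                                  history_of_cad, typical_pain,
                                  second_troponin_6h_negative=False):
        # Base branch: no new EKG changes, negative initial troponin,
        # no history of CAD, and atypical pain.
        if new_ekg_changes or not initial_troponin_negative \
                or history_of_cad or typical_pain:
            return False
        if age < 40:
            return True  # ~7.1% of the cohort, zero 30-day events
        if age <= 50:
            # The 41-50 group requires a second negative troponin at 6 hours.
            return second_troponin_6h_negative
        return False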

I’d rather see a two-hour second troponin than a six-hour one; it might reduce sensitivity, but it’s wholly impractical to tie up a bed in the ED for 6 hours for a patient you want to send home.  And, like most of these articles, the combined endpoint of death, MI, and revascularization is irritating.  Considering there were twice as many revascularizations as myocardial infarctions, there really ought to be more granularity in these sorts of studies with regard to the actual coronary lesions identified rather than simply lumping them into a combined endpoint.

“Development of a Clinical Prediction Rule for 30-Day Cardiac Events in Emergency Department Patients With Chest Pain and Possible Acute Coronary Syndrome”
www.ncbi.nlm.nih.gov/pubmed/21885156

We Overestimate CAD Pretest Probability

The ACC/AHA clinical practice guidelines include a set of reference values for the pretest probability of >50% stenotic coronary artery disease based on the type of pain, age, and sex.  These values range from 2% for a 30-year-old woman with non-anginal pain to 94% for a 60-year-old man with typical angina.

And, turns out, this is way off.

This is a registry study of 14,048 consecutive patients with suspected CAD undergoing coronary CT angiography, looking at both the incidence of 50% luminal narrowing (clinically interesting) and the incidence of 70% luminal narrowing (potentially flow-limiting), correlated with symptom classification: asymptomatic, non-anginal pain, atypical angina, typical angina, or “dyspnea only”.

The meaningful tables of results somewhat defy summarization, but they have plenty of hypertensives with dyslipidemia – and not very many diabetics or smokers – in their cohort.  In the end, however, none of the observed CAD prevalence came anywhere close to the predicted pretest probabilities.  The cohort with the highest prevalence of CAD was men aged 70+ with typical angina – but even that led to only 53% having a 50% lesion.  More than anything, age and sex were the most significant predictors of CAD – with no population of women having an incidence greater than 29%.

It’s an interesting table worth looking at – CAD really doesn’t kick in until after age 40, and even then mostly in men, and even then mostly in patients with typical symptoms.  Once you hit age 50 in men, however, there’s CAD everywhere, even with atypical (or no) symptoms.

There was also some variability by study site – with the 2,225 from Korea having very little CAD and the 29 from the Swiss site having markedly more, but the remainder are relatively similar.

I love studies that just present reams of data and don’t try to push any particular sponsored agenda.

“Performance of the Traditional Age, Sex, and Angina Typicality–Based Approach for Estimating Pretest Probability of Angiographically Significant Coronary Artery Disease in Patients Undergoing Coronary Computed Tomographic Angiography”
http://www.ncbi.nlm.nih.gov/pubmed/22025600


Prolonged QT – Don’t Believe The Hype?

Much ado is made about the risk of QT prolongation and the development of malignant arrhythmias, particularly Torsades de Pointes – but how frequently does TdP actually occur in these patients with QT prolongation?  Should we be worried about every EKG with a prolonged QT that crosses our paths?

It seems, like so many things, the answer is yes and no.  This is a prospective observational study from a single institution that installed cardiac monitoring enabling minute-by-minute measurement and recording of QT intervals in their monitored inpatient population.  They evaluated 1,039 inpatients over 67,648 hours of monitoring, and found these patients spent 24% of their monitored time with a prolonged QTc (>500 ms).  One single patient had a cardiac arrest event where TdP was evident on the monitoring strip – a comorbidly ill heart failure patient whose QTc ranged as high as 691 ms.
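
For reference, the conventional Bazett correction these monitors would approximate (whether this vendor’s algorithm actually uses Bazett or another formula, I can’t say):

    import math

    def qtc_bazett_ms(qt_ms, rr_ms):
        # Bazett: QTc = QT / sqrt(RR), with the RR interval in seconds.
        return qt_ms / math.sqrt(rr_ms / 1000.0)

    # A QT of 440 ms at a heart rate of 100 (RR 600 ms) corrects to ~568 ms --
    # comfortably past the >500 ms threshold used in this study.
    print(round(qtc_bazett_ms(440, 600)))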

The authors then went back to determine whether prolonged QT was associated with all-cause mortality among the 41 patients who died during the study period – finding mortality of 8.7% in patients with QT prolongation versus 2.6% in those without.  However, as you can imagine, there are massive baseline differences between the QT-prolonged and non-QT-prolonged populations, many of which contribute greater effects to in-hospital all-cause mortality.  The authors attempt logistic regression and finally come up with an OR of 2.99 for QT prolongation and all-cause mortality – a smaller effect than CVA, obesity, pro-arrhythmic drug administration, or a high serum BUN.
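
For those curious where such an odds ratio comes from: it’s the exponentiated coefficient of the fitted model.  A toy sketch with one synthetic covariate – the actual analysis adjusted for many more:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 2000
    qt_prolonged = rng.binomial(1, 0.25, n)
    # Synthetic outcome with a built-in log-odds shift of ~1.1 (OR ~ 3).
    p_death = 1 / (1 + np.exp(-(-3.5 + 1.1 * qt_prolonged)))
    death = rng.binomial(1, p_death)

    fit = sm.Logit(death, sm.add_constant(qt_prolonged)).fit(disp=0)
    print(np.exp(fit.params[1]))  # estimated OR for QT prolongation, ~3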

It’s reasonable to say that patients with a prolonged QT are at higher risk for death – but it’s also reasonable to say that sick patients at a higher risk of death are more likely to have a prolonged QT.  Torsades was rare, even with the thousands of hours of QT prolongation noted.  I would not get over-excited about QT prolongation in isolation, but, rather, only in the context of multiple risk factors for mortality in acute illness.

“High prevalence of corrected QT interval prolongation in acutely ill patients is associated with mortality: Results of the QT in Practice (QTIP) Study”
http://www.ncbi.nlm.nih.gov/pubmed/22001585

Novel Ischemia Prediction from CCTA

One of the arguments against CCTA is that it only describes coronary anatomy – and has no demonstrated clinical predictive value regarding whether the observed lesions are flow-limiting or potentially related to anginal symptoms.  This study develops a computational fluid dynamics model that attempts to predict flow through coronary stenoses seen on CCTA.

Korea, Latvia, and California come together to evaluate 103 patients in a multicenter trial in which patients with suspected CAD underwent CCTA, invasive coronary angiography, and fractional flow reserve measurement.  They used only 64- and 256-slice scanners for CCTA, and CAD was quantified as none, mild (0-49%), moderate (50-70%), or severe (>70%).  Patients then underwent invasive coronary angiography, where ischemia-related flow limitation was defined as a fractional flow reserve <0.80.  The study group then developed a method of deriving the FFR from CCTA data alone, and compared it to the actual measurements from invasive coronary angiography using the same threshold value.

The conclusions from this article depend on what takeaways you’re looking for.  On one hand, the FFR-CT method was pretty decent – 87.9% sensitive and 82.2% specific for their definition of ischemia-causing lesions.  The other real takeaway is that CCTA itself performs abysmally at the >50% stenosis threshold typically used in CCTA studies.  Their calculated +LR for CCTA stenoses >50% was only 1.51, in the setting of a specificity of 39.6%.  To me, this is another nail in the coffin showing CCTA is the d-Dimer of CAD, leading to a ton of unnecessary testing.
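
For reference, the positive likelihood ratio is capped by specificity alone:

    +LR = sensitivity / (1 − specificity) ≤ 1 / (1 − 0.396) ≈ 1.66

so at 39.6% specificity, even a perfectly sensitive CCTA read can barely move the post-test probability.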

Considering it took them 5(!) hours on a parallel supercomputer to generate each FFR-CT measurement – solving the Navier-Stokes equations for a Newtonian fluid – I don’t think we’ll be seeing this anytime soon, but hope is out there for the future.

“Cardiac Imaging Diagnosis of Ischemia-Causing Coronary Stenoses by Noninvasive Fractional Flow Reserve Computed From Coronary Computed Tomographic Angiograms”
http://www.theheart.org/article/1299631.do

Yes, Let MONA Fade Away

These authors make a brief argument regarding the inappropriateness of the commonly taught acronym of “MONA” for the initial treatment of acute coronary syndrome.  It is probably the case that well-read Emergency Physicians have since moved on, but it bears repeating.

 – Morphine, which has been associated with worsened outcomes in CRUSADE, but the results are confounded by other factors.  Narcotics are still probably reasonable for nitrate-resistant pain.
 – Oxygen: hyperoxia is associated with coronary vasoconstriction, and may exacerbate reperfusion injury and infarct size.  It is currently recommended that oxygen be used only for patients who are hypoxic.
 – Nitrates, suitable for the relief of anginal symptoms in selected patients.
 – Aspirin, the only element of MONA proven to be strongly beneficial.

And, presumably, future trials will involve the use of newer anti-platelet and other agents in the initial treatment of ACS.

The market is ripe for a replacement acronym!

“Initial treatment of acute coronary syndromes.  Is there a future for MONA acronym after the 2010 guidelines?”
http://www.ncbi.nlm.nih.gov/pubmed/21982924