The Unusable Manchester Chest Pain Instrument

With a probably underpowered derivation and validation, a model that appears to overfit the data, and an impractical, questionable cardiac biomarker – despite a lovely continuous predictive function – this instrument is doomed in its current form.

This is the Manchester Acute Coronary Syndromes (MACS) decision rule, a prospectively derived and validated risk-stratification instrument.  The authors identify an 8-variable decision instrument based on 698 patients at Manchester Infirmary – incorporating hsTnT, heart-type fatty-acid binding protein (H-FABP), ECG changes, diaphoresis, vomiting, radiation to the right shoulder, worsening angina, and hypotension – and then validate it on 463 patients from Stepping Hill Hospital.  In the validation cohort, 27.0% of patients were classified as “very low risk,” with 98% sensitivity (95% CI 93.0% to 99.8%) for 30-day MACE, and the authors feel this tool could reduce unnecessary admissions.

My favorite feature of this study is the derivation of a continuous function for prediction of 30-day outcomes.  The authors report an AUC of 0.92 for the function predicting MACE, which suggests potential as a useful tool for discussing individualized risks with patients.  Rather than simply dichotomizing a “very low risk” cohort, the predictive function could inform shared decision-making conversations.
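As a rough illustration of how a continuous function like this turns into an individualized risk estimate – and to be clear, the coefficients and values below are hypothetical placeholders of my own, not the published MACS model – a logistic-type predictor might look something like:

```python
import math

# Illustrative placeholder coefficients -- NOT the published MACS coefficients.
# Binary predictors are coded 0/1; the biomarkers are shown log-transformed
# purely for demonstration.
EXAMPLE_COEFFICIENTS = {
    "intercept": -4.0,
    "ecg_ischemia": 1.2,
    "worsening_angina": 0.6,
    "radiation_right_shoulder": 0.7,
    "vomiting": 0.5,
    "diaphoresis": 0.9,
    "hypotension": 1.5,
    "log_hs_tnt": 0.8,   # log hs-troponin T (ng/L), hypothetical weighting
    "log_hfabp": 0.6,    # log H-FABP (ng/mL), hypothetical weighting
}

def predicted_risk(features: dict) -> float:
    """Return a 30-day MACE probability from a logistic-type linear predictor."""
    lp = EXAMPLE_COEFFICIENTS["intercept"]
    for name, beta in EXAMPLE_COEFFICIENTS.items():
        if name == "intercept":
            continue
        lp += beta * features.get(name, 0.0)
    return 1.0 / (1.0 + math.exp(-lp))

# Example patient: ischemic ECG changes and diaphoresis, hs-TnT 10 ng/L,
# H-FABP 3 ng/mL (all values hypothetical).
patient = {
    "ecg_ischemia": 1,
    "diaphoresis": 1,
    "log_hs_tnt": math.log(10),
    "log_hfabp": math.log(3),
}
print(f"Predicted 30-day MACE risk: {predicted_risk(patient):.1%}")
```

A single percentage like this – rather than a “very low risk” yes/no – is the sort of output that could anchor a shared decision-making conversation at the bedside.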

However, the use of fatty-acid binding protein is questionable.  These same authors previously presented work favoring H-FABP with an AUC of 0.86 for the diagnosis of AMI, but compared it against a troponin assay with an AUC of 0.70.  A response to that article notes the authors probably made inappropriate comparisons, as modern conventional and high-sensitivity troponin assays have AUCs >0.90.  It’s not clear what, or how much, additional value this biomarker adds to this study – and its inclusion essentially negates the generalizability of the tool.  No rapid, automated H-FABP assay suitable for use in an ED context is available.  It is also unfortunate the corresponding author declares a conflict of interest with the manufacturers of the assays used.

Interestingly, the authors focus only on the “very low risk” group while an odd thing happens in their validation population – the “low risk” group actually had a lower rate of MACE than the “very low risk” group (1.2% vs. 1.6%).  This is a substantial reversal of the derivation population (5.8% vs. 0.4%), suggesting the attempted validation reveals a model overfit to the derivation data.  The authors cite the second-site validation as a strength in combating overfitting, but do not mention this inconsistency in the outcomes.

And, finally, I’m not entirely certain what question this study was designed to answer.  The focus on a “very low risk” cohort in the discussion doesn’t entirely match the study design – it seems other studies of outcomes in low-risk chest pain have specifically excluded patients with obvious AMI at initial presentation.  The inclusion of the entire spectrum of disease, while valuable for their general model, dilutes the strength of their “very low risk” conclusions, as evidenced by the wide confidence intervals around the sensitivity for MACE.
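To illustrate why those intervals end up wide – using hypothetical event counts of my own for demonstration, not the study’s raw data – an exact (Clopper-Pearson) binomial interval around a near-perfect sensitivity still drops well below 95% when the number of MACE events is small:

```python
from scipy.stats import beta

def exact_binomial_ci(events: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Clopper-Pearson (exact) 95% confidence interval for a proportion."""
    lower = 0.0 if events == 0 else beta.ppf(alpha / 2, events, n - events + 1)
    upper = 1.0 if events == n else beta.ppf(1 - alpha / 2, events + 1, n - events)
    return lower, upper

# Illustrative counts only (not taken from the paper): the same 98% point
# estimate of sensitivity carries a much wider interval when based on
# 50 events than on 100.
for detected, total_mace in [(49, 50), (98, 100)]:
    lo, hi = exact_binomial_ci(detected, total_mace)
    print(f"{detected}/{total_mace} detected: sensitivity "
          f"{detected/total_mace:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The smaller the number of events in the cohort of interest, the less reassuring even a 98% point estimate becomes.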

As the authors note in their discussion, additional work is needed to compare their model with other risk-stratification tools.  And, for anyone other than the authors to use this tool, the H-FABP needs to be dropped.  At the least, however, it fits in nicely with another recent critique – that many “low risk” and “very low risk” patients do not require observation and immediate provocative testing, for which the false positives and resource expenditures are simply preposterous.

“The Manchester Acute Coronary Syndromes (MACS) decision rule for suspected cardiac chest pain: derivation and external validation”
http://heart.bmj.com/content/early/2014/04/29/heartjnl-2014-305564

One thought on “The Unusable Manchester Chest Pain Instrument”

  1. Richard Body responds via e-mail – with permission to re-post in its unedited entirety:

    —————————————-

    Hi Ryan,

    Thanks a lot for covering the MACS rule on your site! I follow your posts – the blog is a truly excellent way to keep up to date with the literature. You definitely deserve the excellent reputation that you have. I was really interested to read your appraisal of the MACS rule paper and it’s actually hugely important to have independent opinions like yours. There are a few things I’d like to point out as I think it’ll add to your already awesome appraisal.

    First of all, around the concerns about the rule being unusable, there is now a fully automated assay for H-FABP. This can be mounted on the commercially available modular analysers that current hospital laboratories use, and it has a turnaround time that would permit use in the ED setting. Obviously we need to know how the MACS rule performs with that specific assay now – expect more on that soon!…

    One other question you raised is why H-FABP is even in the decision rule given that it’s not a test we’re currently running routinely. In answer, the rule was derived by multivariate analysis, which showed that H-FABP was independently predictive of MACE with a very high degree of statistical significance – hence its inclusion in the final model. If H-FABP hadn’t provided any additive value, we wouldn’t have found that statistical significance. Will the rule work without H-FABP? Well, not in its present form (you couldn’t simply ignore the H-FABP element as that hasn’t been validated). But this would be a very good question to ask in further research – and one I’m definitely intending to address.

    You also pointed out that the ‘low risk’ group on validation had a lower risk of MACE (1.2%) than the ‘very low risk’ group (1.6%) and asked whether this could mean that the rule is overfitted. With any decision rule, overfitting is a possibility and could reduce diagnostic performance. Reassuringly, the AUC of the MACS rule on validation tells us that overall risk stratification is pretty well preserved, and diagnostic sensitivity (for early discharge) also remains very high. However, for anyone who remains worried about overfitting, we don’t actually recommend immediate clinical implementation of the MACS rule. We recommend further evaluation in a trial setting, which would provide further reassurance about possible overfitting (if the findings were positive).

    Lastly, on the potential conflicts of interest with diagnostic companies, I should point out that I’ve not accepted payments from these manufacturers. The manufacturers donated reagents without charge and without any bar to publication. Thus, they supported the work without influencing the design, conduct, analysis or reporting of the study, whether the findings were positive or negative. I think it’s great that industry is willing to collaborate with us on academic-led research in that way. Industry gets bad press – but they do some good stuff too (like supporting this research, whether it would be positive or negative).

    I hope that clears up some of the very pertinent points you made in your appraisal. Looking forward, as ever, to all your posts at EMlitofnote – keep up the great work!

    Rick
