The Internet Knows If You’ll Be Dead

As another Clinical Informatics feature – quite literally, a window into the future.

These authors used three years of electronic health record data to derive a predictive Bayesian network for patient status.  Its scope: home, hospitalized, or dead.  There are many simple models for predicting such things, but this one is interesting because it incorporates multiple patient features, vital signs, and laboratory results into a continuously updating algorithm.  Ultimately, their model predicted outcomes up to one week from the initial hospitalization event.
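
To make “continuously updating” concrete, here is a minimal, hypothetical sketch of a daily Bayesian update over the three patient states – not the authors’ actual network, and with invented likelihoods for a single discretized feature:

```python
# Hypothetical sketch of sequential Bayesian updating over patient status.
# The three states come from the paper; every probability below is invented.

STATES = ("home", "hospitalized", "dead")

# P(observation | state) for one discretized feature - placeholder values.
LIKELIHOOD = {
    "lactate_high":   {"home": 0.05, "hospitalized": 0.40, "dead": 0.70},
    "lactate_normal": {"home": 0.95, "hospitalized": 0.60, "dead": 0.30},
}

def update(prior, observation):
    """One Bayes step: posterior is proportional to likelihood x prior."""
    unnorm = {s: LIKELIHOOD[observation][s] * prior[s] for s in STATES}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Start from a flat prior and fold in each day's observation.
belief = {s: 1 / 3 for s in STATES}
for day, obs in enumerate(["lactate_high", "lactate_high", "lactate_normal"], 1):
    belief = update(belief, obs)
    print(f"Day {day}: " + ", ".join(f"{s}={p:.2f}" for s, p in belief.items()))
```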

Some fun tidbits:

  • What mattered most on Day 1?  Neutrophils, Hct, and Lactate.
  • As time goes by, the network thinks knowing whether you’re on the Ward at Day 3 is prognostic.
  • By Day 5, variables like the total number of tests received, the presence of cancer, and albumin levels start to gain importance.

Their Bayesian prediction network was best at predicting death, with an average accuracy of 93% and an AUROC of 0.84.  Across outcomes, the prediction engine was most accurate on Day 1, with an average accuracy of 86% and an AUROC of 0.83.  Overall, for the entire week and all three outcomes, the AUROC was 0.82.
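
For reference, these metrics are computed from predicted probabilities and observed outcomes roughly as follows – a generic sketch with toy data, not the study’s:

```python
from sklearn.metrics import accuracy_score, roc_auc_score

# Toy data: 1 = died, 0 = survived; probabilities from a hypothetical model.
y_true = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]
y_prob = [0.1, 0.2, 0.15, 0.8, 0.3, 0.6, 0.05, 0.4, 0.9, 0.2]

y_pred = [int(p >= 0.5) for p in y_prob]            # threshold at 0.5
print("accuracy:", accuracy_score(y_true, y_pred))  # share of correct calls
print("AUROC:", roc_auc_score(y_true, y_prob))      # threshold-free discrimination
```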

Also of interest: while predicting outcomes during the index hospitalization, the model additionally detected readmission events within the study period.  The authors provide a few validation examples as demonstrations, including a patient whose probability of hospitalization was trending upwards at the time of discharge – and who was subsequently readmitted.

Minority Report, medicine style.

“Real-time prediction of mortality, readmission, and length of stay using electronic health record data”
http://www.ncbi.nlm.nih.gov/pubmed/26374704

Social Media & Medicine: It’s Great! No, it’s Worthless! Wait, What?

In the pattern of the old “Choose Your Own Adventure” novels, you can be for, or against, the Iran deal – or for, or against, the utility of social media in dissemination of medical knowledge and clinical practice.

In the far corner, the defending champion: the curmudgeons of the old guard, for whom the journals and textbooks hold primacy.  The American Academy of Neurology attempted to determine whether a “social media strategy” for dissemination of new clinical practice guidelines had any effect on patient or physician awareness.  They published new guidelines regarding alternative medicine therapies and multiple sclerosis through their traditional member e-mails and literature.  Then, they posted a podcast and a YouTube video; added Facebook, LinkedIn, and YouTube advertising; and hosted a Twitter chat with Time magazine and others.  Based on survey responses, they were not able to measure any increased awareness of the guidelines resulting from their social media interventions.

Then, the challenger: Radiopaedia.org.  This second study evaluates the online views of three articles concerning incidental thyroid nodules on CT and MRI.  Two of the articles were in the American Journal of Neuroradiology and the American Journal of Roentgenology, and the third was hosted on Radiopaedia.org.  The Radiopaedia blog post – with some cross-pollination and promotion by traditional means – received 32,675 page views, compared with 2,421 and 3,064 for the online journal publications, respectively.  This matches the anecdotal experience of many blogging physicians: their online content exposure far exceeds that of their traditional publications.

What’s my takeaway?  Audience matters, content matters, and execution matters just as much as the medium.  When engaging an audience like those attending or presenting at, say, a conference entitled “Social Media and Critical Care”, digital scholarship may easily exceed the value of traditional vehicles.  Alternatively, for a topic as rockin’ as esoteric neurology guidelines, there might simply be a ceiling on the number of interested parties.

“The Impact of Social Media on Dissemination and Implementation of Clinical Practice Guidelines: A Longitudinal Observational Study.”
http://www.ncbi.nlm.nih.gov/pubmed/26272267

“Using Social Media to Share Your Radiology Research: How Effective Is a Blog Post?”
http://www.ncbi.nlm.nih.gov/pubmed/25959491

Beaten Into Submission By Wrong-Patient Alerts

It’s a classic line: “Doctor, did you mean to order Geodon for room 12?  They’re here for urinary issues.”

And, the rolling of eyes, the harried return to the electronic health record – to cancel an order, re-order on the correct patient, and return to the business at hand.

Unfortunately, the human checks in the system don’t always catch these wrong-patient errors, leading to potentially serious harms.  As such, this handful of folks decided to test an intervention intended to reduce wrong-patient orders: a built-in system delay.  For every order, a confirmation screen is generated with contextual patient information.  The innovation in this case is that the alert cannot be dismissed until a 2.5-second timer completes.  The theory: this extra, mandatory wait gives the ordering clinician a chance to realize their error and cancel out.
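
The mechanism is easy to picture – a hypothetical, terminal-based sketch of a forced-delay confirmation (the real system is an EHR modal dialog, not a console prompt):

```python
import time

MANDATORY_WAIT = 2.5  # seconds, per the study's intervention

def confirm_order(patient, order, wait=MANDATORY_WAIT):
    """Display contextual patient information, then refuse dismissal
    until the timer completes, mimicking the study's alert."""
    print(f"Order: {order}")
    print(f"Patient: {patient['name']} | Room {patient['room']} | CC: {patient['complaint']}")
    time.sleep(wait)  # the screen cannot be dismissed during this window
    return input("Confirm this order for this patient? [y/N] ").strip().lower() == "y"

# The Geodon-for-room-12 scenario from above:
confirm_order(
    {"name": "Doe, John", "room": 12, "complaint": "urinary issues"},
    "ziprasidone (Geodon) 20 mg IM",
)
```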

Based on a before-and-after design, and observation of 3,457,342 electronic orders across five EDs, implementation of this confirmation screen reduced apparent wrong-patient orders from approximately 2 per 1,000 orders to 1.5 per 1,000.  With an average of 30 order-entry sessions per 12-hour shift in these EDs, this patient verification alert had a measured average impact of a mere 2.1 minutes per shift.

Which doesn’t sound like much – until it accumulates across all EDs and patient encounters; in just the 4-month study period, this system occupied 562 hours of extra time.  This works out to 70 days of extra physician time in these five EDs.  As Robert Wears then beautifully estimates in his editorial, if this alert were implemented nationwide, it would result in 900,000 additional hours of physician time per year – just staring numbly at an alert to verify the correct patient.
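
The arithmetic behind these time costs is straightforward – a back-of-envelope sketch, with the alert count inferred from the reported totals rather than taken directly from the paper:

```python
SECONDS_PER_ALERT = 2.5

def physician_hours(alert_count, seconds_per_alert=SECONDS_PER_ALERT):
    """Clinician time consumed by the mandatory delay alone."""
    return alert_count * seconds_per_alert / 3600

# The reported 562 hours over 4 months implies roughly this many
# confirmation screens: 562 h x 3600 s/h / 2.5 s, or about 809,000.
implied_alerts = 562 * 3600 / SECONDS_PER_ALERT
print(f"implied alerts: {implied_alerts:,.0f}")
print(f"hours consumed: {physician_hours(implied_alerts):,.0f}")  # 562
```

Note the nationwide estimate implies substantially more than 2.5 seconds per interaction – presumably accounting for reading and reorientation time, not merely the timer itself.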

It is fairly clear this demonstration is a suboptimal solution to the problem.  While this alert reduces wrong-patient orders by a measurable magnitude, the number of adverse events avoided is much, much smaller.  However, in the absence of an ideal solution, alternatives such as this tend to take root.  As you imagine and experience the various alerts creeping into the system from every angle, the endgame seems inevitable: we will ultimately spend our entire day just negotiating with the EHR, with zero time remaining for clinical care.

“Intercepting Wrong-Patient Orders in a Computerized Provider Order Entry System”
http://www.ncbi.nlm.nih.gov/pubmed/25534652

“‘Just a Few Seconds of Your Time.’ at Least 130 Million Times a Year”
http://www.ncbi.nlm.nih.gov/pubmed/25724623

Doctor Internet Will Misdiagnose You Now

Technology has insidiously infiltrated all manner of industry.  Tasks originally accomplished by humans have been replaced by computers and robots.  Industrial production is automated, Deep Blue wins at chess, and Watson wins at Jeopardy!

But, don’t rely on Internet symptom checkers to replace your regular physician.

These authors evaluated 23 different online symptom checkers, ranging from the British National Health Service Symptom Checker to privately owned reference sites such as WebMD, with a variety of underlying methodologies.  The authors fed each symptom checker 45 different standardized patient vignettes, ranging in illness severity from pulmonary embolism to otitis media.  The study evaluated two questions: are the diagnoses generated accurate?  And do the tools triage patients to the correct venue for medical care?

Eh.

For symptom checkers providing a diagnosis, the correct one was returned 34% of the time.  This seems pretty decent – until you dig further into the data and note these tools left the correct diagnosis completely off the list another 42% of the time.  Most tools providing triage information performed well at referring emergent cases to high levels of care, with 80% sensitivity.  However, this performance was earned by simply referring the bulk of all cases for emergency evaluation, with 45% of non-emergent and 67% of self-care cases referred to inappropriate levels of medical care.
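
To be precise about what “sensitivity” and over-triage mean here, a small sketch with entirely hypothetical vignette-level results (the paper’s raw data are not reproduced):

```python
# Each tuple: (true acuity of the vignette, acuity the checker advised).
# Entirely hypothetical data, for illustration only.
URGENCY = {"self-care": 0, "non-emergent": 1, "emergent": 2}

results = [
    ("emergent", "emergent"), ("emergent", "emergent"), ("emergent", "non-emergent"),
    ("non-emergent", "emergent"), ("non-emergent", "non-emergent"),
    ("self-care", "emergent"), ("self-care", "self-care"),
]

def sensitivity_for_emergent(results):
    """Of truly emergent cases, the share referred to emergency care."""
    advised = [a for t, a in results if t == "emergent"]
    return sum(a == "emergent" for a in advised) / len(advised)

def overtriage_rate(results, level):
    """Of cases at a given true level, the share advised to a more urgent venue."""
    cases = [(t, a) for t, a in results if t == level]
    return sum(URGENCY[a] > URGENCY[t] for t, a in cases) / len(cases)

print(f"emergent sensitivity: {sensitivity_for_emergent(results):.0%}")
print(f"non-emergent over-triage: {overtriage_rate(results, 'non-emergent'):.0%}")
print(f"self-care over-triage: {overtriage_rate(results, 'self-care'):.0%}")
```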

Of course, this does not evaluate the performance of these online checkers against telephone advice lines, or even against primary care physicians given the same limited information.  Before we are too quick to tout these results as particularly damning, the tools should be evaluated in the context of their intended purpose.  Unfortunately, given their general accessibility and typical over-triage, they are likely driving patients to seek higher levels of care than necessary.

“Evaluation of symptom checkers for self diagnosis and triage: audit study”
http://www.ncbi.nlm.nih.gov/pubmed/26157077

New Text Message: Be a Hero! Go!

This pair of articles from the New England Journal catalogues, happily, the happy endings expected of interventions undertaken to increase early bystander CPR.

The first article simply describes a 21-year review of outcomes in Sweden following out-of-hospital cardiac arrest, comparing 30-day survival in patients who received bystander CPR prior to EMS arrival with those who did not.  In this review, 14,869 cases received CPR prior to EMS arrival, with a 30-day survival of 10.5%.  The remaining 15,512 cases did not, and survival was 4.0%.  This advantage remained essentially intact after all adjustments.  Thus, as expected, bystander CPR is good.

The second article is the magnificent one, however.  In Stockholm, 5,989 lay volunteers were recruited and trained to perform CPR.  Each of these volunteers also consented to be contacted on their mobile phone to perform CPR in case of a nearby emergency.  Patients with suspected OHCA were geolocated, along with the enrolled volunteers, and randomized into two groups: nearby volunteers were either contacted, or not.  In the intervention group, 62% received bystander CPR, compared with 48% of controls.  This difference was statistically significant; the survival difference of 2.6% (CI -2.1 to 7.8) favoring the intervention, however, was not.

But, I think we can pretty readily agree – if bystander CPR improves survival, and text messages to nearby volunteers improve bystander CPR – it’s a matter of statistical power, not futility of the intervention.  If the cost of recruiting and contacting CPR-capable volunteers is low, increased neurologically-intact survival is the likely result.
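
On the power question, the arithmetic is illuminating – a quick sketch of the sample size needed to detect a 2.6-point absolute survival difference, assuming (my assumption, not the trial’s stated baseline) a control survival near 9%:

```python
from math import ceil

def n_per_group(p1, p2):
    """Classic two-proportion sample-size estimate, normal approximation."""
    z_a, z_b = 1.96, 0.84  # two-sided alpha 0.05, power 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# ~9% control survival improved to ~11.6% (a 2.6-point difference):
print(n_per_group(0.09, 0.116))  # roughly 2,100 patients per arm
```

Detecting small absolute differences in an uncommon outcome simply requires thousands of arrests per arm – far more than a single-city trial is likely to capture.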

This is an excellent initiative I hope is copied around the world.

“Early Cardiopulmonary Resuscitation in Out-of-Hospital Cardiac Arrest”
http://www.ncbi.nlm.nih.gov/pubmed/26061835

“Mobile-Phone Dispatch of Laypersons for CPR in Out-of-Hospital Cardiac Arrest”
http://www.ncbi.nlm.nih.gov/pubmed/26061836

EMLitOfNote at SAEM Annual Meeting

The blog will be on hiatus this week – in San Diego!

I’ll be speaking at:
Social Media Boot Camp
May 12, 2015, 1:00 pm – 5:00 pm
with multiple members of the SAEM Social Media Committee

FOAM On The Spot: Integration of Online Resources Into Real-Time Education and Patient Care
May 13, 2015, 1:30 pm – 2:30 pm
with Anand Swaminathan, Matthew Astin, and Lauren Westafer

From Clicks and Complaints to a Curriculum: Integrating an Essential Informatics Education
May 13, 2015, 2:30 pm – 3:30 pm
with Nicholas Genes and James McClay

and co-author on an abstract presentation:
Automating an Electronic Pulmonary Embolism Severity Index Tool to Facilitate Computerized Clinical Decision Support
May 14, 2015, 10:30 am – 10:45 am

Hope to see a few of you there between Tuesday and Thursday!

Attempting Decision-Support For tPA

As I’ve wondered many times before – given the theoretical narrow therapeutic window for tPA in stroke, paired with the heterogeneous patient substrate and disease process – why do we consent all patients similarly?  Why do we not provide a more individualized risk/benefit prediction?

Part of the answer is derived from money & politics – there’s no profit in carefully selecting patients for an expensive therapy.  Another part of the answer is the reliability of the evidence base.  And, finally, the last part of the answer is the knowledge translation bit – how can physicians be expected to perform complex multivariate risk-stratification and communicate such information to a layperson in the acute setting?

In this paper, these authors describe the development process of an iPad application specifically designed for pictorial display of individualized risk/benefit for tPA administration in acute ischemic stroke.  Manual entry of time from onset to treatment, age, gender, medical history, NIHSS, weight, and blood pressure provides individualized information regarding outcomes with treatment or non-treatment.
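
The underlying mechanics are a standard logistic model mapped to a pictograph display – a hypothetical sketch below, with placeholder coefficients rather than the published S-TIPI weights:

```python
from math import exp

# Placeholder coefficients - illustrative only, NOT the published S-TIPI model.
INTERCEPT = 3.3
COEFS = {"age": -0.03, "nihss": -0.10, "onset_min": -0.004, "treated": 0.35}

def p_good_outcome(age, nihss, onset_min, treated):
    """Predicted probability of a good functional outcome (logistic model)."""
    z = (INTERCEPT
         + COEFS["age"] * age
         + COEFS["nihss"] * nihss
         + COEFS["onset_min"] * onset_min
         + COEFS["treated"] * treated)
    return 1 / (1 + exp(-z))

def pictograph(p, n=50):
    """The 'out of N people like you' display used by such decision aids."""
    good = round(p * n)
    return "#" * good + "." * (n - good)

for treated in (1, 0):
    p = p_good_outcome(age=70, nihss=12, onset_min=150, treated=treated)
    print(f"{'tPA' if treated else 'no tPA':6s} {p:.0%}  {pictograph(p)}")
```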

Unfortunately, the prediction instrument – S-TIPI – is based on NINDS, ECASS II, and ATLANTIS.  Thus, as you might expect, in the most commonly used time frame of 0-3 hours, the outcomes essentially approximate NINDS.  The authors mention they used the UK portion of the Safe Implementation of Thrombolysis in Stroke database and the Virtual International Stroke Trials Archive to refine their calculations, but do not delve into a discussion of predictive accuracy.  Of note, a previous article describing recalibration of S-TIPI indicated an AUC for prediction of only 0.754 to 0.766 – but neither this uncertainty nor the narrow, limited derivation data set is described in this paper.

Regardless, such “precision medicine” decision instruments – for both this and other applications – are of great importance in guiding complex decision-making.  This paper is basically a “check out what we made” piece of literature by a group of authors who will sell you the end result as a product, but it is still an important effort to recognize and build upon.

“Development of a computerised decision aid for thrombolysis in acute stroke care”
http://www.ncbi.nlm.nih.gov/pubmed/25889696

A Window Into Your EHR Sepsis Alert

Hospitals are generally interested in detecting and treating sepsis.  As a result of multiple quality measures, however, they are now deeply in love with detecting and treating sepsis.  And this means: yet another alert in your electronic health record.

One of these alerts, created by the Cerner Corporation, is described in a recent publication in the American Journal of Medical Quality.  Their cloud-based system analyzes patient data in real time as it enters the EHR and matches the data against the SIRS criteria.  Based on 6,200 hospitalizations retrospectively reviewed, the alert fired for 817 patients (13%).  Of these, 622 (76%) were either superfluous or erroneous, with the alert occurring either after the clinician had already ordered antibiotics or in patients for whom no infection was suspected or treated.  Of the remaining alerts occurring prior to action to treat or diagnose infection, most (89%) occurred in the Emergency Department, and a substantial number (34%) were erroneous.
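
The logic being screened against is nothing exotic – the four standard SIRS criteria.  A minimal sketch of such a rule (the vendor’s actual implementation is, of course, proprietary):

```python
def sirs_criteria_met(temp_c, heart_rate, resp_rate, paco2_mmhg, wbc_k, bands_pct):
    """Count the standard SIRS criteria; alerts like this one typically
    fire at >= 2 criteria, ideally alongside suspected infection."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,             # temperature derangement
        heart_rate > 90,                            # tachycardia
        resp_rate > 20 or paco2_mmhg < 32,          # tachypnea or hypocapnia
        wbc_k > 12 or wbc_k < 4 or bands_pct > 10,  # leukocytosis, -penia, bandemia
    ]
    return sum(criteria)

# A febrile, tachycardic asthmatic trips the rule just as readily as sepsis:
print(sirs_criteria_met(38.4, 112, 24, 40, 9.5, 2))  # 3 criteria
```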

Therefore, based on the authors’ presented data, only 126 of 817 (15%) SIRS alerts provided accurate, potentially valuable information.  Unfortunately, another 80 patients in the hospitalized cohort received discharge diagnoses of sepsis despite never triggering the tool – meaning false negatives approach nearly two-thirds the number of potentially useful true positives.  And, finally, these data only describe patients requiring hospitalization – i.e., not including those discharged from the Emergency Department.  We can only speculate regarding the number of alerts triggered by the diverse ED population not requiring hospitalization – every asthmatic, minor trauma, pancreatitis, etc.

The lead author proudly concludes their tool is “an effective approach toward early recognition of sepsis in a hospital setting.”  Of course, the author, employed by Cerner, also declares he has no potential conflicts of interest regarding the publication in question.

So, if the definition of “effective” encompasses something below 10% utility, that is the performance you’re looking at with these SIRS-based tools.  Considering the alert fatigue on one hand, and the additional interventions and unnecessary tests these sorts of alerts bludgeon physicians into on the other – such unsophisticated SIRS alerts are almost certainly more harm than good.

“Clinical Decision Support for Early Recognition of Sepsis”
http://www.ncbi.nlm.nih.gov/pubmed/25385815

Hi Ur Pt Has AKI For Totes

Do you enjoy receiving pop-up alerts from your electronic health record?  Have you instinctively memorized the fastest series of clicks to “Ignore”?  “Not Clinically Significant”?  “Benefit Outweighs Risk”?  “Not Sepsis”?

How would you like your EHR to call you at home with more of the same?

Acute kidney injury, to be certain, is associated with poorer outcomes in the hospital – mortality, dialysis-dependence, and other morbidities.  Therefore, it makes sense: if an automated monitoring system can easily detect changes and trends, why not alert clinicians to such changes, so nephrotoxic therapies can be avoided?
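
The detection logic in such systems generally mirrors the consensus AKI definitions – below, a sketch of a KDIGO-style creatinine rule, as a rough assumption of how the trial’s trigger behaved rather than its exact specification:

```python
def aki_alert(creatinine_series):
    """KDIGO-style creatinine criteria, simplified: a rise >= 0.3 mg/dL
    within 48 h, or >= 1.5x baseline (here, the lowest observed value).
    Input: list of (hours_since_admission, creatinine_mg_dl), evaluated
    each time a new result files."""
    baseline = min(c for _, c in creatinine_series)
    latest_t, latest_c = creatinine_series[-1]
    if latest_c >= 1.5 * baseline:
        return True
    return any(
        latest_c - c0 >= 0.3
        for t0, c0 in creatinine_series
        if 0 < latest_t - t0 <= 48
    )

# Creatinine creeping from 1.0 to 1.4 mg/dL over two days triggers the alert:
print(aki_alert([(0, 1.0), (24, 1.2), (48, 1.4)]))  # True
```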

Interestingly – for both good and bad – the outcomes measured were patient-oriented: 2,393 patients were randomized to either “usual care” or text-message alerts for changes in serum creatinine, with the overall goal of detecting reductions in death, dialysis, or progressive AKI.  While patient-oriented outcomes are, after all, the most important outcomes in medicine, it’s only plausible to improve outcomes if clinicians improve care.  Therefore, measuring the most direct consequence of the intervention – renal-protective changes in clinician behavior – might be a better outcome.

Because, unfortunately, despite sending text messages and e-mails directly to responsible clinicians and pharmacists, the only notable change in behavior between the “alert” group and the “usual care” group was increased monitoring of serum creatinine.  Chart documentation of AKI, avoidance of intravenous contrast, avoidance of NSAIDs, and other renal-protective behaviors were unchanged, excepting a non-significant trend towards decreased aminoglycoside use.

No change in behavior, no change in outcomes.  Text message and e-mail alerts!  Can shock collars be far behind?

“Automated, electronic alerts for acute kidney injury: a single-blind, parallel-group, randomised controlled trial”
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)60266-5/fulltext

Social Media in Medicine – Useless!

Or, might it be how you use it that matters?

This is a brief report from the journal Circulation regarding a self-assessment of its social media strategy.  The editors performed a prospective, block-randomized allocation of published articles to either social media promotion on Facebook and Twitter or no promotion, and compared 30-day website page views for each article.  121 articles were randomized to social media and 122 to control, generally evenly balanced between article types.

And the answer – unfortunately for the 3-person associate editor team – is: no difference.  Articles posted to social media received an average of 409 pageviews within 30 days, compared with 392 for those with no promotion.  Thus, Circulation declares social media dead – and ultimately generalizes this failure to all cardiovascular journals via the Conclusions section.
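
For journals tempted to replicate this experiment, the analysis itself is trivial – a sketch comparing per-article pageview counts with a rank-based test (toy numbers; the study’s article-level counts are not public):

```python
from scipy import stats

# Hypothetical 30-day pageview counts per article.  Pageviews are right-skewed,
# so a rank-based test is a reasonable choice over a plain t-test.
promoted = [120, 250, 980, 310, 405, 2200, 180, 390]
control  = [140, 230, 860, 290, 380, 1900, 200, 410]

u, p = stats.mannwhitneyu(promoted, control, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.2f}")  # with toy data like this: no difference
```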

So, we should all stop blogging and tweeting?  Or, is journal self-promotion futile?  And, are page views the best measure of the effectiveness of knowledge translation?  Or, is there more nuance and heterogeneity between online strategies, rendering this Circulation data of only passing curiosity?  I tend to believe the latter – but, certainly, it’s an interesting publication I hope inspires other journals to perform their own, similarly rigorous studies.

[Note: if my blog entries receive as many (or more!) pageviews as Circulation articles, does this mean my impact factor is higher than Circulation’s 14.98?]

“A Randomized Trial of Social Media from Circulation”
http://circ.ahajournals.org/content/early/2014/11/17/CIRCULATIONAHA.114.013509.abstract