Facebook, Savior of Healthcare

This is just a short little letter I found published in The Lancet.  Apparently, the Taiwan Society of Emergency Medicine had been wrangling with the Department of Health over appropriate solutions to the national problem of ED overcrowding.  To make their short story even shorter, they ended up forming a group on Facebook and posting their concerns to the Minister of Health’s Facebook page.  This prompted the Minister of Health to make surprise visits to several EDs, and, in some manner, the Taiwanese feel their social networking produced a favorable official response to their public dialogue.

So, slowly but surely, I’m sure all these little blogs will save the world, too.

“Facebook use leads to health-care reform in Taiwan.”
http://www.ncbi.nlm.nih.gov/pubmed/21684378

Electronic Health Records & Patient Safety

Shameless self-promotion, regrettably.

From my other life as a clinical informatician working on patient safety and human factors as they relate to electronic medical records – my commentary on how electronic medical records might be applied to the Joint Commission’s 2011 National Patient Safety Goals was published today in JAMA.

“Application of Electronic Health Records to the Joint Commission’s 2011 National Patient Safety Goals.”
I am also the spotlight author for the current issue, and you can hear my interview at:

Algorithmic Approach To Detect Sepsis Fails

I was asked to blog about this little article – since it lies at the intersection of Emergency Medicine and informatics.

So, that feeling you get when you look at a patient who is obviously ill?  Computers don’t have that yet.  These folks tried to encapsulate that “sick” vs. “not sick” gestalt into the criteria for severe sepsis, which include SIRS and hypotension.  The hope was that an algorithmic approach that automatically recognized the vital sign and physiologic criteria for SIRS would trigger reminders to clinicians, sparking them to initiate certain quality care processes sooner.
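
For flavor, here’s a minimal sketch in Python of that sort of rule-based screen.  The SIRS cutoffs are the textbook ones; the systolic pressure under 90 mm Hg for hypotension and the two-criteria trigger are my assumptions, not necessarily the study’s exact logic:

    def sirs_count(temp_c, hr, rr, wbc_k):
        """Count how many of the four SIRS criteria the latest values meet."""
        return sum([
            temp_c > 38.0 or temp_c < 36.0,  # temperature
            hr > 90,                         # heart rate
            rr > 20,                         # respiratory rate
            wbc_k > 12.0 or wbc_k < 4.0,     # WBC, x10^3/uL
        ])

    def should_page(temp_c, hr, rr, wbc_k, sbp):
        """Fire the reminder when >= 2 SIRS criteria coincide with hypotension."""
        return sirs_count(temp_c, hr, rr, wbc_k) >= 2 and sbp < 90

    # Febrile, tachycardic, hypotensive patient: the pager goes off.
    print(should_page(temp_c=38.9, hr=118, rr=24, wbc_k=15.2, sbp=82))  # True
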
Out of 33,460 patients processed, 398 triggered the system.  Less than half (46%) of those alerts were true positives.  To follow that up, they evaluated the system’s sensitivity and specificity by pulling one week’s worth of data (1,386 patients) for closer review – and found 6 false positives, 7 true positives, and 4 false negatives.  And those numbers speak for themselves.
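
Just to spell those numbers out – assuming everyone else in that week’s 1,386 patients was a true negative:

    # Test characteristics from the one-week review sample.
    tp, fp, fn, total = 7, 6, 4, 1386
    tn = total - tp - fp - fn      # 1,369, assuming the rest were true negatives

    sensitivity = tp / (tp + fn)   # 7/11 ~ 64%
    specificity = tn / (tn + fp)   # 1369/1375 ~ 99.6%
    ppv = tp / (tp + fp)           # 7/13 ~ 54%

    print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.1%}, PPV {ppv:.0%}")
    # sensitivity 64%, specificity 99.6%, PPV 54%

A screen that misses a third of severe sepsis and is wrong nearly half the time it fires is a hard sell.
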
Looking back at their four quality measures, all four showed a trend towards improvement – unfortunately, three of the four don’t even have a theoretical connection to improved outcomes.  Chest x-ray, blood cultures, and measuring a serum lactate are all clinically relevant in certain situations, but they are diagnostic and management decisions independent of “quality”.  Antibiotic administration, however, is part of EGDT for sepsis (for what it’s worth), and it trended towards improvement (OR 2.8, CI 0.9 to 8.6).

But the final killer?  “In approximately half of patients electronically detected, patients had been detected by caregivers earlier”.  So, clinicians were receiving automated pages suggesting they might consider an infectious cause of hypotension, probably while already placing central lines for septic shock.

Great concept – but automated systems just don’t yet have the robust, rapid, high-quality inputs a clinician gets just by walking in the room.  But EM physicians in busy departments overlook things – and a well-designed system might, in the future, help catch some of those misses.

“Prospective Trial of Real-Time Electronic Surveillance to Expedite Early Care of Severe Sepsis.”

Delivering Clinical Evidence

These are a couple of interesting commentaries regarding the state of clinical evidence and the difficulty of applying it at the point of care.  One, from the BMJ, worries about the sheer number of studies and trials being generated – that the data will never be appropriately digested, and we’ll all die slow deaths from information overload.  And, to some extent, this is true – how many of us carry around “peripheral brains” in our pockets?  Before smartphones, it was the Washington Manual or Tarascon’s; now we have MedCalc, Epocrates, etc.  And we desperately try to simplify things so we can wrap our brains around them and integrate them into daily practice, distilling tens of thousands of heterogeneous patients into a single clinical decision instrument like NEXUS, CCT, CHADS2, etc.  While this is better than flailing about in the dark, it’s still repairing a watch with a hammer.  These tools tell us about the average patient in that particular study, and have only limited external validity for the patient actually sitting in front of us.
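
To see just how much gets distilled away, consider CHADS2 – the entire patient is reduced to five binary inputs.  A minimal sketch in Python, using the published weights:

    def chads2(chf, hypertension, age_75_or_older, diabetes, prior_stroke_tia):
        """CHADS2: 1 point each for CHF, hypertension, age >= 75, and diabetes;
        2 points for prior stroke or TIA."""
        score = sum([chf, hypertension, age_75_or_older, diabetes])
        return score + (2 if prior_stroke_tia else 0)

    # A hypertensive diabetic with no other risk factors scores 2 - the same 2
    # as thousands of very different patients in the derivation cohort.
    print(chads2(chf=False, hypertension=True, age_75_or_older=False,
                 diabetes=True, prior_stroke_tia=False))  # 2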

Dr. Smith’s BMJ article proposes the “machine” – a magical box that knows all and provides seamless, patient-specific evidence.  Dr. Davidoff isn’t sure that’s feasible and, as a stopgap measure, promotes the rise of the informatician or medical librarian – a new role built around the available electronic health databases.  This librarian will be expert in reading the medical literature, expert in data mining healthcare information systems, and will discover the most relevant ways to target quality and guideline improvement initiatives.

They’re both right, in a way.  And we should definitely train and mature the growing discipline of clinical informatics while we keep working on the magic box….

http://www.ncbi.nlm.nih.gov/pubmed/21558524
http://www.ncbi.nlm.nih.gov/pubmed/21159764

Computerized Resuscitation in Severe Burns

This is a critical care study that showcases an interesting tool developed for ICU resuscitation of severe burns.  The authors make the case that adequate resuscitation for burns, i.e., the Parkland Formula, is necessary – but that patients are frequently over-resuscitated.  Rather than simply settling for the rigid, formulaic crystalloid infusion over the first 24 hours, they developed a computer feedback loop that altered infusion rates based on urine output (UOP).  Think of it as an insulin drip or heparin infusion protocol – but instead of glucose or PTT, you’re measuring UOP and adjusting the fluid rate dynamically on an hourly basis.
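
To make the analogy concrete, here’s a minimal sketch in Python.  The Parkland arithmetic is the standard formula; the 30–50 mL/h UOP target band and the 10% hourly step are illustrative assumptions, not the authors’ actual algorithm:

    def parkland_initial_rate(weight_kg, tbsa_pct):
        """Parkland: 4 mL/kg per %TBSA over 24 h, half given in the first 8 h."""
        total_ml = 4 * weight_kg * tbsa_pct
        return (total_ml / 2) / 8  # mL/h for the first 8 hours

    def titrate(rate_ml_h, uop_ml_h, target_low=30, target_high=50, step=0.10):
        """Hourly feedback: nudge the infusion when UOP misses the target band."""
        if uop_ml_h < target_low:
            return rate_ml_h * (1 + step)  # under-resuscitated: speed up
        if uop_ml_h > target_high:
            return rate_ml_h * (1 - step)  # over-resuscitated: back off
        return rate_ml_h                   # in range: leave it alone

    rate = parkland_initial_rate(weight_kg=80, tbsa_pct=40)  # 800 mL/h
    for uop in [20, 25, 45, 70]:  # simulated hourly urine outputs
        rate = titrate(rate, uop)
        print(f"UOP {uop} mL/h -> new rate {rate:.0f} mL/h")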

I like this study because they have a primary outcome – improved adherence to their UOP target – and then secondary outcome variables that matter: mortality, ICU days, ventilator-free days.  While secondary outcomes are hypothesis-generating tools, it would not be unreasonable to make the rational leap connecting their improved UOP adherence to the massive improvement in mortality demonstrated.

It is not a large study – and while the control group had the same %BSA burned, it had a significantly greater proportion of full-thickness burns.  The magnitude of the mortality difference could certainly be affected by baseline characteristics beyond those they report, so a follow-up is necessary.  However, the premise of a feedback loop offloading cognitive tasks from providers as part of the management of a complex system is almost certainly something we’re going to see more of in medicine.

http://www.ncbi.nlm.nih.gov/pubmed/21532472

News Flash – Better Electronic Medical Records Are Better

In this article, providers were asked to complete a simulated task in their standard EMR – Mayo’s LastWord supplemented by Chart+ – vs. a “novel” EMR redesigned specifically for the critical care environment, with reduced cognitive load and increased visibility for frequently utilized elements and data.  In their bleeding-patient scenario, the novel EMR was faster and resulted in fewer errors.  So, thusly, a better EMR design is better.

While it seems intuitively obvious, you still need studies to back up your justification for interface design in electronic medical records.  Their testing approach is one I’d like to see expanded – and perhaps even implemented as a regulatory standard: evaluation of cognitive load, plus task-based completion testing with error rates held below an acceptable threshold.  Electronic medical records should be treated like medical devices, medications, and equipment – rigorously failure-tested.  While EMRs are far more complicated instruments, studies such as this one illustrate that an EMR with interfaces designed for a specific work environment, to aid effective and efficient task completion, saves time and reduces errors.
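
The harness for that sort of testing doesn’t need to be complicated.  A sketch of the skeleton in Python – the stand-in task and its error model are entirely hypothetical; a real harness would drive the actual interface under test:

    import random
    import time
    from statistics import mean

    def run_task(perform_task):
        """Time one simulated task; the task returns how many errors occurred."""
        start = time.perf_counter()
        errors = perform_task()
        return time.perf_counter() - start, errors

    def summarize(label, results):
        times, errors = zip(*results)
        print(f"{label}: mean time {mean(times):.2f}s, total errors {sum(errors)}")

    def dummy_task():
        # Stand-in for a scripted scenario step (e.g., ordering blood products).
        time.sleep(random.uniform(0.01, 0.05))
        return random.choice([0, 0, 1])  # the occasional wrong click

    summarize("standard EMR", [run_task(dummy_task) for _ in range(10)])
    summarize("novel EMR", [run_task(dummy_task) for _ in range(10)])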

The main issue I see with EMRs these days is that the stakeholders and motivators behind this initial wave of implementation are financial – systems built to capture every last level of service provided to a patient in order to increase revenue.  The next generation and movement with EMRs is to look at how they can increase patient safety, particularly in light of threats of non-payment for preventable medical errors.  Again, a financial motivation – but at least this one is going to drive progress and the maturation of medical records as tools to protect patients, not simply to milk them for profits.

http://www.ncbi.nlm.nih.gov/pubmed/21478739