Shameless self-promotion, regretfully.
Category: Informatics
Algorithmic Approach To Detect Sepsis Fails
I was asked to blog about this little article – since it lies at the intersection of Emergency Medicine and informatics.
Delivering Clinical Evidence
These are a couple of interesting commentaries regarding the state of clinical evidence and the difficulty of applying it at the point of care. One, from the BMJ, worries about the sheer number of studies and trials being generated – that the data will never be appropriately digested, and we’ll all die slow deaths from information overload. And, to some extent, this is true – how many of us carry around “peripheral brains” in our pocket? Before smartphones, it was the Washington Manual or Tarascon’s; now we have MedCalc, Epocrates, etc. And we desperately try to simplify things so we can wrap our brains around them and integrate them into daily practice – distilling tens of thousands of heterogeneous patients into a single clinical decision instrument like NEXUS, CCT, CHADS2, etc. While this is better than flailing about in the dark, it’s still like repairing a watch with a hammer. These tools tell us about the average patient in that particular study, and have only limited external validity for the patient actually sitting in front of us.
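To illustrate just how much these instruments compress, consider CHADS2 – the entire heterogeneity of stroke risk in atrial fibrillation is reduced to five yes/no-ish inputs. A minimal sketch of the published scoring rule (CHF, hypertension, age ≥75, and diabetes each score 1; prior stroke/TIA scores 2):

```python
def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2 stroke-risk score for atrial fibrillation (range 0-6)."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if age >= 75 else 0          # age criterion is >= 75
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_or_tia else 0  # prior stroke/TIA is weighted double
    return score

# A 78-year-old with hypertension and diabetes, no CHF or prior stroke:
print(chads2(False, True, 78, True, False))  # → 3
```

Five inputs, one integer out – which is exactly the point: everything else about the patient in front of you is discarded.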
Dr. Smith’s BMJ article proposes the “machine” – a magical box that knows all and provides seamless patient-specific evidence. Dr. Davidoff isn’t sure that’s feasible and, as a stopgap measure, promotes the rise of the informatician or medical librarian, a new role built around the available electronic health databases. This librarian will be expert in reading medical literature, expert in data mining healthcare information systems, and will discover the most relevant ways to target quality and guideline improvement initiatives.
They’re both right, in a way. And we should definitely train and mature the growing discipline of this clinical informatician while we keep working on the magic box….
http://www.ncbi.nlm.nih.gov/pubmed/21558524
http://www.ncbi.nlm.nih.gov/pubmed/21159764
Computerized Resuscitation in Severe Burns
This is a critical care study that showcases an interesting tool developed for ICU resuscitation of severe burns. The authors make the case that adequate resuscitation for burns – i.e., the Parkland Formula – is necessary, but that patients are frequently over-resuscitated. Rather than simply settling for the rigid, formulaic crystalloid infusion over the first 24 hours, they developed a computer feedback loop that altered the infusion rates based on urine output. Think of it as an insulin drip protocol or heparin infusion protocol – but instead of glucose or PTT, you’re measuring UOP and adjusting the fluid rate dynamically on an hourly basis.
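The paper’s actual control algorithm isn’t reproduced here, but the idea of an hourly UOP-driven feedback loop can be sketched in a few lines. Everything below is hypothetical for illustration – the target window (30–50 mL/hr) and the 10% step size are my stand-in parameters, not the study’s:

```python
def adjust_infusion_rate(current_rate_ml_hr: float,
                         uop_ml_hr: float,
                         target_low: float = 30.0,
                         target_high: float = 50.0,
                         step: float = 0.10) -> float:
    """Hypothetical hourly adjustment: nudge the crystalloid rate
    up or down by `step` (10%) when the past hour's urine output
    falls outside the target window; hold steady when on target."""
    if uop_ml_hr < target_low:
        return current_rate_ml_hr * (1 + step)   # under-resuscitated: increase
    if uop_ml_hr > target_high:
        return current_rate_ml_hr * (1 - step)   # over-resuscitated: back off
    return current_rate_ml_hr                    # on target: no change

# Hour by hour, the loop replaces a fixed Parkland-style schedule:
rate = 500.0  # mL/hr starting point (illustrative)
for uop in [20, 25, 45, 60]:  # measured UOP each hour
    rate = adjust_infusion_rate(rate, uop)
```

The appeal is the same as an insulin or heparin nomogram: the provider sets the target and the loop handles the arithmetic every hour, instead of anchoring on a 24-hour formula computed at admission.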
I like this study because they have a primary outcome – improved adherence to their UOP target – and then secondary outcomes that matter: mortality, ICU days, ventilator-free days. While secondary outcomes are hypothesis-generating tools, it would not be unreasonable to connect their improved UOP adherence with the large mortality improvement demonstrated.
It is not a large study – and while the control group had the same % BSA burned, it had significantly more % full-thickness burns. The magnitude of the mortality benefit could certainly be affected by baseline differences beyond those they report, so a follow-up is necessary. However, the premise of a feedback loop offloading cognitive tasks from providers as part of the management of a complex system is almost certainly something we’re going to see more of in medicine.
News Flash – Better Electronic Medical Records Are Better
In this article, providers were asked to complete a simulated task in their standard EMR – Mayo’s LastWord supplemented by Chart+ – vs. a “novel” EMR redesigned specifically for a critical care environment, with reduced cognitive load and increased visibility for frequently utilized elements and data. In their bleeding-patient scenario, the novel EMR was faster and resulted in fewer errors. So, thusly, a better EMR design is better.
While it seems intuitively obvious, you still need studies to back up your justification for interface design in electronic medical records. Their testing approach is one I’d like to see expanded – and perhaps even implemented as a regulatory standard – evaluation of cognitive load, plus task-based completion testing with defined error-rate thresholds. Electronic medical records should be treated like medical devices, medications, and equipment – rigorously failure-tested. While EMRs are far more complicated instruments, studies such as this one illustrate that an EMR with interfaces designed for specific work environments, aiding effective and efficient task completion, saves time and reduces errors.
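The kind of task-based testing described above reduces to very simple measurements: time each provider on a standardized scenario, count errors, compare interfaces. A minimal sketch – the trial data below are made-up numbers for illustration, not figures from the study:

```python
from statistics import mean

def summarize_tasks(trials):
    """Each trial is (completion_seconds, error_count) for one provider
    attempting the standardized scenario on a given interface."""
    times = [t for t, _ in trials]
    errors = [e for _, e in trials]
    return {"mean_time_s": mean(times),
            "errors_per_task": mean(errors)}

# Hypothetical logs from a bleeding-patient scenario on two interfaces:
standard_emr = [(182, 3), (240, 2), (205, 4)]
novel_emr = [(95, 0), (120, 1), (110, 0)]
print(summarize_tasks(standard_emr))
print(summarize_tasks(novel_emr))
```

Even this crude harness gives a regulator something concrete to set thresholds against – mean completion time and errors per task on a standardized scenario – which is exactly the sort of failure testing other medical equipment already undergoes.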
The main issue I see with EMRs these days is that the stakeholders and motivators behind this initial wave of implementation are financial – systems put in place to capture every last level of service provided to a patient in order to increase revenues. The next movement with EMRs is to look at how they can increase patient safety, particularly in light of threats of non-payment for preventable medical errors. Again, a financial motivation – but at least this one will drive progress and the maturation of medical records as tools to protect patients, not simply to milk them for profit.
