It Feels Good To Use an iPad

Recently, there has been a great deal of coverage on internet news sites with headlines such as “Study: iPads Increase Residency Efficiency.”  These headlines are pulled from a “Research Letter” in Archives of Internal Medicine from the University of Chicago, describing the distribution of iPads capable of running Epic via Citrix.

Sounds good, but it’s untrue.

What is true is that residents reported that they used the iPads for work.  They additionally thought the iPads saved them time and improved their efficiency on the wards.  Which is to say, they liked using the iPad.

The part that isn’t true is where the authors claim an increase in “actual resident efficiency.”  By analyzing the hour of the day at which orders are placed, the authors extrapolate to a hypothetical reality in which the iPads are helping their residents place orders more quickly on admitted patients, and place additional orders while post-call, just before leaving the hospital.  There is, in fact, no specific data showing that using the iPad makes the residents more efficient – only data showing the hour of the day at which orders are placed has changed from one year to the next.  The iPad has, perhaps, changed their work habits – but without prospectively observing how these iPads are being used, it is impossible to conclude how or why.

But, at least they liked them!  And, considering how addictive Angry Birds is, I’m surprised their productivity isn’t decreased.

“Impact of Mobile Tablet Computers on Internal Medicine Resident Efficiency”

http://archinte.ama-assn.org/cgi/content/extract/172/5/436

Automagical Problem Lists

This is a nice informatics paper that deals mostly with problem lists.  These are meticulously maintained (in theory) by inpatient and ambulatory physicians to accurately reflect a patient’s current medical issues.  Then, when they arrive in the ED, you do your quick chart biopsy from the EMR, and you can rapidly learn about your patient.  However, these lists are invariably inaccurate – studies show they’ll appropriately be updated with breast cancer 78% of the time, but as low as 4% of the time for renal insufficiency.  This is bad because, supposedly, accurate problem lists lead to higher-quality care – more CHF patients receiving ACE inhibitors or ARBs if heart failure was on their diagnosis list, etc.

These authors created a natural language processing engine, as well as a set of inference rules based on medications, lab results, and billing codes for 17 diagnoses, and implemented an alert prompt to encourage clinicians to update the problem list as necessary.  Overall, 17,043 alerts were fired during the study period, and clinicians accepted the recommendations of 41% – which could be better, but is really quite good for an alert.  As you might expect, the study group with the alerts generated three times as many additions to patient problem lists.  These authors think this is a good thing – although I have seen some incredible problem list bloat.
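
For the curious, here is a minimal sketch of the sort of medication/lab/billing-code inference rule the authors describe – emphatically not their actual NLP engine, and with hypothetical field names, thresholds, and example diagnoses:

```python
# Illustrative sketch only -- not the authors' actual inference rules.
# Lab keys, thresholds, and drug/code choices are hypothetical.

def suggest_problem_list_additions(problem_list, medications, labs, billing_codes):
    """Return problem-list entries an alert might suggest adding."""
    suggestions = []

    # Example rule: lab evidence of renal insufficiency, not yet on the list
    has_renal_dx = any("renal insufficiency" in p.lower() for p in problem_list)
    creatinine = labs.get("creatinine")  # mg/dL, hypothetical lab key
    if not has_renal_dx and creatinine is not None and creatinine >= 2.0:
        suggestions.append("Chronic renal insufficiency")

    # Example rule: loop diuretic plus a CHF billing code, but no CHF on the list
    has_chf_dx = any("heart failure" in p.lower() for p in problem_list)
    on_loop_diuretic = any(m.lower() in ("furosemide", "bumetanide") for m in medications)
    if not has_chf_dx and on_loop_diuretic and "428.0" in billing_codes:
        suggestions.append("Congestive heart failure")

    return suggestions


if __name__ == "__main__":
    print(suggest_problem_list_additions(
        problem_list=["Hypertension"],
        medications=["furosemide", "lisinopril"],
        labs={"creatinine": 2.4},
        billing_codes={"428.0"},
    ))
```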

What’s interesting is that a follow-up audit of the alerts – evaluating their accuracy against a clinical reading of the patient’s chart – estimated the alerts were 91% accurate, which means most of those ignored alerts were actually correct.  So, there’s clearly still a lot of important work to be done finding better ways to integrate this sort of clinical feedback into the workflow.

So, in theory, better problem lists, better outcomes.  However, updating your wife’s problem list can probably wait until after Valentine’s Day.

“Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial.”
www.ncbi.nlm.nih.gov/pubmed/22215056

Heart Failure, Informatics, and The Future

Studies like these are a window into the future of medicine – electronic health records beget clinical decision-support tools that allow highly complex risk-stratification instruments to guide clinical practice.  Tools like NEXUS will wither on the vine as oversimplifications of complex clinical decisions – oversimplifications that were needed in a pre-EHR era, when decision instruments needed to be memorized.

This study is a prospective observational validation of the “Acute Heart Failure Index” rule – derived in Pittsburgh, applied at Columbia.  The AHFI branch points for risk stratification are…best described by the extraordinarily complex flow diagram in the original paper.

Essentially, research assistants in the ED applied an electronic version of this tool to all patients given a diagnosis of decompensated heart failure by the Emergency Physician – and then followed them for the primary outcome of death or readmission within 30 days.  In the end, in their small sample, 10% of the low-risk population met the combined endpoint, while 30.2% of the high-risk population did.  Neither group had a very high mortality – most of the difference between groups comes from readmissions within 30 days.

So, what makes this study important isn’t the AHFI, or that it is reasonable to suggest further research might validate this rule as an aid to clinical decision-making – it’s the forward progression of using CDS within the EHR to synthesize complex medical data into potentially meaningful clinical guidance.

“Validating the acute heart failure index for patients presenting to the emergency department with decompensated heart failure”
http://www.ncbi.nlm.nih.gov/pubmed/22158534

Your Residents Would Love a Wiki

It’s not a terribly profound paper – along the lines of a “we did this and we liked it” sort of thing – but it is a relevant educational application of wikis in medicine.

The BIDMC Internal Medicine department undertook an initiative to essentially convert all their little handbooks and service guides into an online reference.  They chose the wiki interface so anyone could update information or add pages, while allowing updates to be tracked and rolled back as necessary.  They promoted it during intern orientation and made a significant effort both to get people to update it and to use it.  And, for the most part, they were successful.  Most residents (92%) thought it was useful, it was mostly used to find phone numbers and rotation-specific clinical information, and, overall, about half of the PGY-2s and -3s updated the site during the 2009-10 year.

It probably takes a lot of effort and requires just the right collaborative environment, but there are a lot of residencies, departments, and other clinical organizations that could probably benefit from something similar – particularly if there are a lot of students and residents rotating between different services or sites.

“Adoption of a wiki within a large internal medicine residency program: a 3-year experience”
http://www.ncbi.nlm.nih.gov/pubmed/22140210

ED Geriatric CPOE Intervention – Win?

It does seem as though this intervention had a measure of success – based on their primary outcome – but there are more shades of grey throughout the article.

This is a prospective, controlled trial of contextual computer decision-support (CDS) incorporated into the computerized provider order entry (CPOE) system of their electronic health record (EHR).  They performed a four-phase on/off intervention in which the CPOE either suggests alternative medications or dose reductions in patients >65 years of age.  They look at whether the intervention changed the rate at which medication ordering complied with medication-safety recommendations in the elderly, and then, secondarily, at the rates of 10-fold errors, medication cancellations, and adverse drug event reports.
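
To make the mechanism concrete, here is a minimal sketch of the kind of age-triggered dose check such a CPOE intervention might run – the drug names, cutoffs, and suggested doses below are hypothetical illustrations, not the rules from this study:

```python
# Illustrative sketch only -- not this study's actual rule set.
# Drug names, dose cutoffs, and messages are hypothetical.

# drug -> maximum suggested single dose (mg) for patients over 65, plus a message
GERIATRIC_DOSE_SUGGESTIONS = {
    "lorazepam": {"max_mg": 1.0, "message": "Consider 0.5-1 mg in patients over 65."},
    "ketorolac": {"max_mg": 15.0, "message": "Consider 15 mg IV in patients over 65."},
    "morphine": {"max_mg": 4.0, "message": "Consider a reduced starting dose in patients over 65."},
}

def check_order(age_years, drug, dose_mg):
    """Return a CDS suggestion string, or None if no alert should fire."""
    if age_years <= 65:
        return None
    rule = GERIATRIC_DOSE_SUGGESTIONS.get(drug.lower())
    if rule and dose_mg > rule["max_mg"]:
        return f"{drug}: {rule['message']}"
    return None

print(check_order(78, "ketorolac", 30))   # fires a dose-reduction suggestion
print(check_order(45, "ketorolac", 30))   # no alert for younger patients
```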

The oddest part of this study is their choice of primary outcome measure.  Ideally, the most relevant outcome is the patient-oriented outcome – which, in this case, ought to be a specific decrease in adverse drug events in the elderly.  However, and I can understand where they’re coming from, they chose to specifically evaluate the usability/acceptability of the CDS intervention to verify the mechanism of intervention.  There are lots of studies out there documenting “alert fatigue”, resulting in either no change or even increasing error rates.

As far as the main outcome measure goes, they had grossly positive findings – 31% of orders were compliant during the intervention periods vs. 23% of orders during the control periods.  But, 92.5% of recommendations for alternative medications were ignored during the intervention periods – most commonly triggered by diazepam, clonazepam, and indomethacin.  The intervention was successful in reducing doses for NSAIDs and for opiates, but had no significant effect on benzodiazepine or sedative-hypnotic dosing.

However, bizarrely, even though there was just a small difference in guideline-concordant ordering, there was a 4-fold reduction in adverse drug events – most of which occurred during the initial “off” period.  As a secondary outcome, there’s not much to say about it other than “huh”.  None of their other secondary outcomes demonstrated any differences.

So, it’s an interesting study.  It is consistent with a lot of previous studies – most alerts are ignored, but occasionally small positive effect sizes are seen.  Their primary outcome measure is one of mostly academic interest – it would be better if they had chosen more clinically relevant outcomes.  But, no doubt, if you’re not already seeing a deluge of CDS alerts, just wait a few more years….

“Guided medication dosing for elderly emergency patients using real-time, computerized decision support”
http://www.ncbi.nlm.nih.gov/pubmed/22052899

Computer Reminders For Pain Scoring Improve Treatment

This is a paper on an important topic – considering the CMS quality measures coming up that will track time to pain medication for long bone fractures – that demonstrates a mandatory computer reminder improved pain treatment more than an educational campaign did.

This is a prospective study of 35,628 patients visiting an Australian emergency department, in which the department went through several phases of intervention – the most salient, in the authors’ minds, being required assessment of a pain score at triage.  They started by simply observing their performance, then altered their electronic medical record to mandate input of the pain score at triage.  After the mandated scoring, median time to analgesia went from 123 minutes to 95 minutes.  After the mandate phase, the ED staff underwent an education program regarding pain management in the ED – and the time to analgesia didn’t improve any further.

So, it is reasonable to infer that mandating the pain score at triage had the desired effect of decreasing time to analgesia.  However, 95 minutes until analgesia is still terrible.  It would be a far more interesting article if it truly broke down all the intervals – such as time to triage, time to room, time to physician, and time to analgesia order – because there are a lot more data points to gather.

Additionally, it seems it might simply be higher yield if – in addition to asking about pain at triage – they had a triage protocol to treat the pain immediately at that point, rather than later downstream.

“Mandatory Pain Scoring at Triage Reduces Time to Analgesia”
www.ncbi.nlm.nih.gov/pubmed/21908072

Health Information Exchanges Might Save Money

Though, from this data, it’s not clear through what mechanism.

This is a retrospective billing database evaluation of Memphis Emergency Department visits between 2007 and 2008.  In Memphis, 12 EDs participate in an online data repository, which may be accessed by secure web connection.  The authors compared patients presenting to the Emergency Department for whom this medical record was accessed to patients for whom it was not.

There were no baseline differences between the demographics of the study groups, although this retrospective evaluation cannot account for the factors contributing to why physicians chose to access the information exchange for individual patients.

The results are rather odd.  The authors cite cost savings as a result of an OR of 0.27 for inpatient hospitalization after accessing the information exchange.  However, the frequency of basically every other type of activity stayed flat or increased – in fact, the OR for head CT was 5.0 and for chest x-ray 4.3 if information exchange records were accessed.

More tests?  Fewer admissions?  I’m not sure it’s practical to generalize the effects of an information exchange on medical decision making in a retrospective fashion such as this.

“The financial impact of health information exchange on emergency department care.”
www.ncbi.nlm.nih.gov/pubmed/22058169

iPhone Medical Apps To The Rescue

In this study, the author and creator of “PICU Calculator” for iPhone details the superiority of a medical student with a smartphone over an attending using the pharmacy reference book.  A few entertaining tidbits from their main results:
 – Medical students don’t know how a book functions – failed to correctly complete any pediatric dosing task using the British National Formulary for Children.
 – Residents and attendings managed to make the book work for them about half the time.
 – Overall across all levels of training, 35 for 35 in correct dosage and volume using the iPhone app – with a mean time savings of over 5 minutes.

So, when the author of an iPhone app chooses a clinical task his app is designed to replace, it works great!  But the larger point – as we already knew – is that there is a role for well-designed point-of-care electronic tools, so we shouldn’t give up on our CPOE and EHR kludge so soon.

“Students prescribing emergency drug infusions utilising smartphones outperform consultants using BNFCs.”
www.ncbi.nlm.nih.gov/pubmed/21787737

Physicians Will Test For PE However They Damn Well Please

Another decision-support in the Emergency Department paper.

Basically, in this study, when an emergency physician considered the diagnosis of pulmonary embolism, a computerized intervention forced the calculation of a Wells score to help guide further evaluation.  Clinicians were not bound by the recommendations of the Wells calculator to guide their ordering.  And they sure weren’t.  There were 229 patients in their “post-intervention” group, and 26% of their clinicians decided that evidence-based medicine wasn’t for them, and were “non-compliant” with the testing strategy.
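
For reference, the standard Wells criteria for PE are simple enough to express as a tiny calculator – though the exact wording and risk thresholds embedded in this study’s tool may differ, so treat this as an illustration rather than their implementation:

```python
# Standard Wells criteria for PE; the study's embedded calculator may use
# different wording or thresholds, so this is an illustrative sketch only.

WELLS_PE_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_pe_score(findings):
    """Sum the points for each positive criterion; `findings` maps criterion -> bool."""
    return sum(points for name, points in WELLS_PE_CRITERIA.items() if findings.get(name))

def suggested_workup(score):
    """One common dichotomized strategy: d-Dimer first if 'PE unlikely', CTA if 'PE likely'."""
    return "d-Dimer first" if score <= 4 else "proceed to CTA"

score = wells_pe_score({"heart_rate_over_100": True, "previous_dvt_or_pe": True})
print(score, suggested_workup(score))  # 3.0 d-Dimer first
```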

So, did the intervention help increase the number of positive CTAs for PE?  Officially, no – their trend from 8.3% positive to 12.7% positive didn’t meet significance.  Testing-guideline compliant CTA positivity was 16.7% in the post-intervention group, which, to them, validated their intervention.

It is interesting that a low-risk Wells + positive d-Dimer or high-risk Wells cohort had only a 16% positive rate on a 64-slice CT scanner – which doesn’t really match up with the original data.  So, I’m not sure exactly what to make of their intervention, testing strategy, or ED cohort.  I think the take-home point is supposed to be that if you can get evidence in front of clinicians, and they do evidence-based things, outcomes will be better – but either this was just too complex a clinical problem to tackle to prove it, or their practice environment isn’t externally valid.

Does EHR Decision Support Make You More Liable?

That’s the question these JAMA article authors asked themselves, and they say – probably.  The way they present it, it’s probably true – using the specific example of drug-drug interactions.  If you put an anticoagulated elderly person on TMP-SMX and they come back a few days later bleeding with an INR of 7, you might be in trouble for clicking away the one important drug alert out of the hundred you’re inundated with on your shift.  The authors note how poorly designed the alerts are, how few are relevant, and the phenomenon of “alert fatigue” – but really, if you’re getting any kind of alerts or have any EHR tools available to you during your practice, each time you dismiss one, someone could turn it around against you.

The authors’ potential solutions are an “expert” drug-drug interaction list or legislative legal safe harbors.

“Clinical Decision Support and Malpractice Risk.”
www.ncbi.nlm.nih.gov/pubmed/21730245