The headlines have blared the news this week that a new study published in Archives of Internal Medicine showed that electronic health records (EHRs) in the ambulatory setting do not appear to lead to higher-quality patient care [1]. This in turn has led many leading news organizations to run stories with headlines such as, "Stanford researchers find EHRs don't boost care quality."
For those of us who work in informatics, this is a pretty serious finding. As responsible scientists and citizens, we cannot ignore negative results about the work we do. However, we also have an obligation to place this work in the larger context of all research on the relationship between health information technology (HIT) and quality of medical care.
Like almost all science that gets reported in the general media, there is more to this study than what is described in the headlines and news reports. The study was published in a prestigious medical journal by two Stanford researchers. The implementation of the research methods they used appears to be sound. There is no reason to believe that the results obtained do not derive from the methods used.
However, there are serious limitations to this type of study and to the data resources used to answer the researchers' question, which was whether ambulatory EHRs that include clinical decision support (CDS) lead to improved quality of medical care. While I do believe this study has a place in the evidence base of HIT, it suffers from limitations inherent in studies like this that are observational, correlational, and retrospective. The study used a data source collected for other purposes, the National Ambulatory Medical Care Survey (NAMCS), and compared physicians who were identified as users of CDS with those who were not, to see if there were differences in the quality of care they provided based on 20 process quality measures. It found no differences between the groups, i.e., those using EHRs and CDS did not deliver higher-quality care than those not using them.
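To make the design concrete, here is a minimal sketch in Python of the kind of comparison the researchers performed: group visits by whether the physician used an EHR with CDS, then compare adherence rates on a process quality measure. The visit records and field names below are entirely hypothetical, not drawn from the actual NAMCS data.

```python
# A minimal sketch of the study's core comparison. The field names and
# values are illustrative only and are not from the actual NAMCS data.
visits = [
    {"uses_cds": True,  "measure_eligible": True, "measure_met": True},
    {"uses_cds": True,  "measure_eligible": True, "measure_met": False},
    {"uses_cds": False, "measure_eligible": True, "measure_met": True},
    {"uses_cds": False, "measure_eligible": True, "measure_met": True},
]

def adherence_rate(visits, uses_cds):
    """Share of measure-eligible visits where the process measure was met."""
    eligible = [v for v in visits
                if v["uses_cds"] == uses_cds and v["measure_eligible"]]
    if not eligible:
        return None
    return sum(v["measure_met"] for v in eligible) / len(eligible)

print("CDS users:    ", adherence_rate(visits, uses_cds=True))   # 0.5
print("Non-CDS users:", adherence_rate(visits, uses_cds=False))  # 1.0
```

The published analysis is of course more sophisticated than this, but the underlying comparison is essentially this cross-tabulation of observed, unassigned groups, which is what makes the limitations below matter.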
Before delving into the details, it is also worth noting that this study is not the first to apply this methodology. Within the last couple of months, two other studies have used a similar approach to assess associations between quality measures and hospital EHR adoption [2] and computerized provider order entry [3], with mixed results, i.e., some measures showing benefit and others not. In addition, Archives of Internal Medicine published another study using this sort of approach in 2009, showing that hospital notes, test results, order entry, and decision support were variably associated with improved patient outcomes and reduced costs [4]. (It was surprising to see the latter not referenced in the article.) If we were to take all of these studies as definitive, we might conclude that EHR usage improves quality of care in hospitals, even if it does not in ambulatory settings.
But whether the results are favorable or not, it is important to understand some serious limitations in these types of studies and in this one in particular. A first limitation is that the study looks at correlation, which does not imply causation. This was an observational, not an experimental, study. The data used for the study was not collected for the purpose of assessing the effect of EHRs on quality of care. As with any correlational study, there may be confounders that cause the correlation, or the lack of it. As we know from evidence-based medicine, the best study design for assessing causality is an experimental randomized controlled trial. Indeed, such studies have been done, and many have found that EHRs do lead to improvements in quality of care. There have been several systematic reviews of such studies noting that while some of the studies suffer from methodologic limitations, others are well designed and do demonstrate positive value for various aspects of EHRs and CDS [5-7]. There is a continuing stream of such studies, and two have been published in the last couple of months. One showed that an EHR with targeted CDS led to improved glucose control in diabetics [8], while another found that a real-time alert cut inappropriate use of D-dimer testing by 70% [9]. Not all such studies demonstrate positive results, but enough do to show that there is value in the well-informed use of HIT.
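To see why confounding matters, here is a toy simulation, with probabilities invented purely for intuition, in which a single confounder, say a large, well-resourced practice, independently raises both the likelihood of EHR adoption and the likelihood of meeting a quality measure. EHR use has no causal effect on quality in this sketch, yet the two come out correlated.

```python
import random

random.seed(0)

# Toy simulation of confounding. A confounder (large practice) drives both
# EHR adoption and quality; EHR use has no causal effect on quality here.
# All probabilities are made up for illustration.
n = 10_000
pairs = []
for _ in range(n):
    large_practice = random.random() < 0.5
    p_ehr = 0.7 if large_practice else 0.3      # confounder drives adoption
    p_quality = 0.8 if large_practice else 0.4  # confounder drives quality
    ehr = random.random() < p_ehr
    quality = random.random() < p_quality       # independent of `ehr`
    pairs.append((ehr, quality))

def quality_rate(pairs, ehr_value):
    group = [q for e, q in pairs if e == ehr_value]
    return sum(group) / len(group)

print("Quality with EHR:   ", round(quality_rate(pairs, True), 3))   # ~0.68
print("Quality without EHR:", round(quality_rate(pairs, False), 3))  # ~0.52
```

Randomly assigning EHR use, as a randomized controlled trial does, breaks the link between confounder and exposure and makes the two rates converge. Note that the same mechanism can just as easily mask a real benefit as manufacture a spurious one, which is the mirror-image risk in this study.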
A second limitation of this study is the quality measures used. Quality measures are of two general types, process and outcome. Process measures look at what was done, such as ordering a certain test or prescribing a specific treatment. Outcome measures look at the actual clinical outcomes of the patient, e.g., whether there was a reduction of mortality, complications, or cost. It is a fair criticism of the current state of the healthcare quality movement that most measures used (including those in the meaningful use criteria) are process measures that may or may not result in improved patient outcomes.
A third limitation of this study is that we do not know whether the physicians using EHRs and CDS had decision support in place that targeted the specific quality measures the researchers studied. While the quality measures are important process measures for physicians to adhere to, they may not be amenable to CDS generally, or to the specific CDS used in these systems.
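For intuition about what "CDS targeted at a specific measure" means, here is a minimal, hypothetical reminder rule of the sort an EHR might fire. The condition, threshold, and patient fields are invented for illustration and do not describe any actual product or the systems in the study.

```python
from datetime import date, timedelta

# A minimal, hypothetical CDS rule targeting one process measure: remind
# the clinician when a patient with diabetes has no HbA1c result within
# the past six months. Field names and the 180-day threshold are invented.
def hba1c_reminder(patient, today):
    if "diabetes" not in patient["problem_list"]:
        return None
    last = patient.get("last_hba1c_date")
    if last is None or (today - last) > timedelta(days=180):
        return "Reminder: no HbA1c in the past 6 months; consider ordering one."
    return None

patient = {"problem_list": ["diabetes"], "last_hba1c_date": date(2010, 5, 1)}
print(hba1c_reminder(patient, today=date(2011, 1, 30)))
```

Whether the systems captured in the survey contained rules aligned with the 20 measures studied is exactly what these data cannot tell us.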
A fourth limitation is that the study assesses episodes of care and not longitudinal care over time. This may not portray an accurate picture of a physician's practice.
A fifth limitation is that the data analyzed was collected in 2005-2007, when EHRs and CDS, while available, were less widely used and less mature than they are now.
Finally, we have no idea how well trained these physicians were at using the CDS that they had. We know that success with HIT is based on many factors that go well beyond the technology itself, such as proper implementation and training. Well-designed research must address these factors too.
There are also some other limitations to the study that are discussed in an accompanying editorial that unfortunately few people will read, especially those who get news of the study from news reports rather than from the journal itself. The editorial writers appropriately point out that other studies, including experimental ones, have shown value for HIT interventions.
The results of this study are a legitimate addition to the evidence base of informatics and cannot be dismissed out of hand. However, these findings must take their place in the proper context of all research on HIT. If nothing else, this study highlights the need for more and better research to truly identify where HIT helps, has no impact, or outright harms patients.
References
1. Romano MJ and Stafford RS, Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Archives of Internal Medicine, 2011: Epub ahead of print.
2. Jones SS, Adams JL, Schneider EC, Ringel JS, and McGlynn EA, Electronic health record adoption and quality improvement in US hospitals. American Journal of Managed Care, 2010. 16: SP64-SP72.
3. Kazley AS and Diana ML, Hospital computerized provider order entry adoption and quality: an examination of the United States. Health Care Management Review, 2011. 36: 86-94.
4. Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, and Powe NR, Clinical information technologies and inpatient outcomes: a multiple hospital study. Archives of Internal Medicine, 2009. 169: 108-114.
5. Garg AX, Adhikari NKJ, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al., Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. Journal of the American Medical Association, 2005. 293: 1223-1238.
6. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al., Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine, 2006. 144: 742-752.
7. Goldzweig CL, Towfigh A, Maglione M, and Shekelle PG, Costs and benefits of health information technology: new trends from the literature. Health Affairs, 2009. 28: w282-w293.
8. O'Connor PJ, Sperl-Hillen JM, Rush WA, Johnson PE, Amundson GH, Asche SE, et al., Impact of electronic health record clinical decision support on diabetes care: a randomized trial. Annals of Family Medicine, 2011. 9: 12-21.
9. Palen TE, Price DW, Snyder AJ, and Shetterly SM, Computerized alert reduced D-dimer testing in the elderly. American Journal of Managed Care, 2010. 16: e267-e275.
Comments
Thank you for this thoughtful and timely analysis. You added a lot of weight to the discussion and certainly put things in perspective.

Great post Bill!

Right on, Bill. I was going to blog to a similar effect myself, but after reading Clem's accompanying editorial and your blog post, I see no need. You guys said all that needs to be said and said it well. --Adam Rothschild, M.D., M.A.

Excellent points that had to be put out there. It helps us all to be able to refer to well-stated points when we are confronted with these articles.

Thank you for posting a well-reasoned response, with citations, in a public forum. It would be a disservice if these sorts of things only appeared in academic journals and conferences.

Nice summary of the issues, Bill. Thanks for taking the time to put this in context. Peter J. Embi, MD, MS

Well responded, and extremely useful, as the debate is affecting us quite heavily these days. I'd like to quote you in my own blog discussing the matter, as I share every single point you discussed. Michael Dahlweid, MD, PhD

Excellent post, Bill. Some additional thoughts, including context and external validity: http://www.sharedhealthdata.com/2011/01/30 Sue Woods