Wednesday, April 20, 2011

What is the Evidence Base for Informatics, Health IT, and Related Areas? Some Recent Analyses

The first part of 2011 has brought a number of publications, and subsequent discussion, about the "evidence base" for the efficacy of biomedical and health informatics interventions, including electronic health records. These publications and conversations come against a backdrop of a very poisoned political environment in the United States, where everything about healthcare, including informatics, has become unfortunately very politicized. In this posting, however, I will stick to the science.

The first high-profile study of the year was the online posting of the Archives of Internal Medicine paper by Romano and Stafford [1], which I discussed in an earlier posting. The official publication of the paper, as well as letters about it, will appear in May 2011.

Probably the next most high-profile study was the publication of an update of a systematic review of studies of outcomes from health information technology interventions by Buntin and colleagues [2]. This was actually the second update of an original systematic review that was published in 2006 by Chaudhry and associates [3], the first update of which was published by Goldzweig and colleagues in 2009 [4].

Systematic reviews are comprehensive reviews of all research evidence on a given area or question [5]. When studies are homogeneous enough (e.g., all studies assessing the treatment of hypertension to reduce cardiovascular disease), a mathematical technique known as meta-analysis can be performed to combine results across studies to achieve a larger sample size and more statistical power. But most areas, certainly informatics, have research questions too heterogeneous to enable use of meta-analysis. Nonetheless, studies can be categorized to look at the general questions asked, such as the efficacy of decision support in reducing medical error, or of more timely access to data in reducing the cost of care.
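To illustrate how meta-analysis pools results, here is a minimal sketch of fixed-effect inverse-variance weighting, the most common pooling method. The function name and the example effect sizes (hypothetical blood-pressure reductions, in mmHg) are my own illustration, not drawn from any of the reviews discussed here.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Combine per-study effect estimates by inverse-variance weighting.

    Each study is weighted by 1/SE^2, so more precise studies
    contribute more to the pooled estimate.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # pooled SE shrinks as studies accumulate
    return pooled, pooled_se

# Three hypothetical trials of the same intervention
pooled, pooled_se = fixed_effect_pool([-5.0, -4.0, -6.0], [1.0, 2.0, 1.5])
```

The pooled standard error is smaller than that of any single study, which is the "more statistical power" referred to above; the catch, as noted, is that the studies must be similar enough for such pooling to be meaningful.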

The three successive systematic reviews [2-4] using relatively similar methodology have summarized outcomes of studies of health information technology (HIT) over particular time periods:
  • Chaudhry, 2006 – studies from 1995-2004 [3]
  • Goldzweig, 2009 – studies from 2004-2007 [4]
  • Buntin, 2011 – studies from 2007-2010 [2]
As with most systematic reviews, these cast a broad net over the literature and reviewed it for methodological quality and results.

Chaudhry et al. identified 257 studies, with the most benefit shown for:
  • Adherence to guideline-based care
  • Enhanced surveillance and monitoring
  • Decreased medical errors
An interesting caveat noted by the authors was that 25% of the identified studies came from just four institutions (Partners Healthcare, Veterans Administration, Indiana University/Regenstrief Institute, and Vanderbilt University), and there were few studies of commercial systems, raising concerns about generalizability.

In their update, Goldzweig et al. found 179 new studies. They noted results comparable to those of Chaudhry et al., but also found an increased number of studies of patient-focused applications that ran external to the EHR, e.g., Web-based care management. They noted a small increase in the number of studies of commercial, off-the-shelf systems, though 20% of studies still came from the four leading institutions. They also found there was still a paucity of cost-benefit analyses.

In the new systematic review, Buntin et al. identified 154 new studies with 278 individual outcome measures. While acknowledging wide divergence in study quality and methodologies, not to mention the outcomes studied, they noted that 96 (62%) of the studies showed positive improvement in one or more aspects of care, with 142 (92%) showing positive or mixed positive-negative outcomes. They found that the studies used both quantitative and qualitative approaches, with those using statistical hypothesis testing more likely to have positive outcomes. They slightly redefined the "health IT leader" institutions and noted that a large number of studies (28) still came from these institutions, though this decreased somewhat to 18% of the studies. Somewhat reassuring was that the "leader" studies did not differ in methods or results from the other studies.

Buntin et al. grouped the outcomes into seven categories, noting documented improvement in all of them:
  • Access to care
  • Preventive care
  • Care process
  • Patient satisfaction
  • Provider satisfaction
  • Effectiveness of care
  • Efficiency of care

Another bit of evidence from early 2011 was a review of all eHealth systematic reviews that took exception to the direction and quality of the evidence [6]. The authors noted that many studies of eHealth, including clinical applications (i.e., health IT), had poor methodology, raising concern over the validity of their results. The results echo those of a systematic review I led on telemedicine several years ago [7]. One concern about this new review is that its review-of-reviews methodology might magnify poor evidence. But someone needs to reconcile this review with that of Buntin et al. [2].

It should be noted that another line of thought has been critical of the experimental approach to evaluation of health IT. Two recent commentaries note that these approaches cannot capture the whole picture of a health IT intervention, especially ones that occur in real-world implementations in complex settings, like states or even whole countries [8, 9]. I acknowledge these criticisms, though I would argue back that we should not view these approaches as either-or. There is hopefully plenty of room for all types of disciplined evaluation of informatics, with clinical trials and similar experiments complemented by qualitative and other real-world approaches.

References

1. Romano, M. and Stafford, R. (2011). Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Archives of Internal Medicine, Epub ahead of print.
2. Buntin, M., Burke, M., et al. (2011). The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Affairs, 30: 464-471.
3. Chaudhry, B., Wang, J., et al. (2006). Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine, 144: 742-752.
4. Goldzweig, C., Towfigh, A., et al. (2009). Costs and benefits of health information technology: new trends from the literature. Health Affairs, 28: w282-w293.
5. Anonymous (2011). Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC, Institute of Medicine.
6. Black, A., Car, J., et al. (2011). The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Medicine, 8(1): e1000387.
7. Hersh, W., Hickam, D., et al. (2006). Diagnosis, access, and outcomes: update of a systematic review on telemedicine services. Journal of Telemedicine & Telecare, 12(Supp 2): 3-31.
8. Greenhalgh, T. and Russell, J. (2010). Why do evaluations of eHealth programs fail? An alternative set of guiding principles. PLoS Medicine, 7(11): e1000360.
9. Patrick, J. (2011). The validity of personal experiences in evaluating HIT. Applied Clinical Informatics, 1: 462-465.
