The first high-profile study of the year was the online posting of the Archives of Internal Medicine paper by Romano and Stafford [1], which I discussed in an earlier posting. The official publication of the paper, along with letters about it, will appear in May 2011.
Probably the next most high-profile study was the publication of an update of a systematic review of studies of outcomes from health information technology interventions by Buntin and colleagues [2]. This was actually the second update of an original systematic review published in 2006 by Chaudhry and associates [4], the first update of which was published by Goldzweig and colleagues in 2009 [3].
Systematic reviews are comprehensive reviews of all research evidence on a given area or question. When studies are homogeneous enough (e.g., all studies assessing the treatment of hypertension to reduce cardiovascular disease), a mathematical technique known as meta-analysis can be performed to combine results across studies, achieving a larger sample size and more statistical power. But most areas, certainly including informatics, have research questions too heterogeneous to enable use of meta-analysis. Nonetheless, studies can be categorized to look at general questions asked, such as the efficacy of decision support in reducing medical error, or whether more timely access to data reduces the cost of care.
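To illustrate the basic idea behind meta-analysis, here is a minimal sketch of fixed-effect (inverse-variance) pooling, the simplest common approach to combining results across studies. The effect sizes and standard errors below are entirely hypothetical, chosen only to show how pooling yields a more precise combined estimate than any single study:

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) pooling of study effect sizes.

    Each study's weight is the inverse of its variance, so larger,
    more precise studies contribute more to the combined estimate.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    combined = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    combined_se = math.sqrt(1.0 / sum(weights))
    return combined, combined_se

# Hypothetical effect sizes (e.g., risk differences) from three studies
effects = [0.10, 0.25, 0.15]
std_errors = [0.05, 0.10, 0.08]

est, se = pooled_effect(effects, std_errors)
print(f"pooled estimate = {est:.3f}, standard error = {se:.3f}")
```

Note that the pooled standard error comes out smaller than that of any individual study, which is the statistical-power benefit described above. This sketch assumes the studies measure the same effect with only sampling error; when that homogeneity assumption fails, as it usually does in informatics, this kind of pooling is not appropriate.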
The three successive systematic reviews [2-4] using relatively similar methodology have summarized outcomes of studies of health information technology (HIT) over particular time periods:
- Chaudhry, 2006 – studies from 1995-2004 
- Goldzweig, 2009 – studies from 2004-2007 
- Buntin, 2011 – studies from 2007-2010 
Chaudhry et al. identified 257 studies, with the most benefit shown for:
- Adherence to guideline-based care
- Enhanced surveillance and monitoring
- Decreased medical errors
In their update, Goldzweig et al. found 179 new studies. They noted results comparable to those of Chaudhry et al., but also found an increased number of studies of patient-focused applications that ran external to the EHR, e.g., Web-based care management. They noted a small increase in the number of studies of commercial, off-the-shelf systems, though 20% of studies still came from the four leading institutions. They also found there was still a paucity of cost-benefit analyses.
In the new systematic review, Buntin et al. identified 154 new studies with 278 individual outcome measures. While acknowledging wide divergence in study quality and methodology, not to mention the outcomes studied, they noted that 96 (62%) of the studies showed improvement in one or more aspects of care, with 142 (92%) showing positive or mixed positive-negative outcomes. The studies used both quantitative and qualitative approaches, with those using statistical hypothesis testing more likely to have positive outcomes. Buntin et al. slightly redefined the “health IT leader” institutions and noted that a substantial number of studies (28) still came from these institutions, though their share decreased somewhat, to 18% of all studies. Somewhat reassuring was that the “leader” studies did not differ in methods or results from the other studies.
Buntin et al. grouped the outcomes into seven categories, noting documented improvement in all of them:
- Access to care
- Preventive care
- Care process
- Patient satisfaction
- Provider satisfaction
- Effectiveness of care
- Efficiency of care
It should be noted that another line of thought has been critical of the experimental approach to evaluating health IT. Two recent commentaries note that these approaches cannot capture the whole picture of a health IT intervention, especially one that occurs in a real-world implementation in a complex setting, such as a state or even a whole country [8, 9]. I acknowledge these criticisms, though I would argue that we should not view these approaches as either-or. There is hopefully plenty of room for all types of disciplined evaluation in informatics, with clinical trials and similar experiments complemented by other approaches.
1. Romano, M. and Stafford, R. (2011). Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Archives of Internal Medicine, Epub ahead of print.
2. Buntin, M., Burke, M., et al. (2011). The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Affairs, 30: 464-471.
3. Goldzweig, C., Towfigh, A., et al. (2009). Costs and benefits of health information technology: new trends from the literature. Health Affairs, 28: w282-w293.
4. Chaudhry, B., Wang, J., et al. (2006). Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine, 144: 742-752.
5. Anonymous (2011). Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC, Institute of Medicine.
6. Black, A., Car, J., et al. (2011). The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Medicine, 8(1): e1000387.
7. Hersh, W., Hickam, D., et al. (2006). Diagnosis, access, and outcomes: update of a systematic review on telemedicine services. Journal of Telemedicine & Telecare, 12(Supp 2): 3-31.
8. Greenhalgh, T. and Russell, J. (2010). Why do evaluations of eHealth programs fail? An alternative set of guiding principles. PLoS Medicine, 7(11): e1000360.
9. Patrick, J. (2011). The validity of personal experiences in evaluating HIT. Applied Clinical Informatics, 1: 462-465.