Two local informatics-related happenings recently provided teachable moments demonstrating why a comprehensive approach to standards and interoperability is so critical for realizing the value of health IT. Fortunately, the
Office of the National Coordinator for Health IT (ONC) has prioritized interoperability among its activities moving forward, and other emerging work on standards provides hope that the problems I will describe, which occurred locally (and, I know, occur in many other places), might be avoided in the future.
One of the local happenings came from a cardiology-related project aiming to improve performance on quality-of-care measures. As a starting point, the cardiology group wanted to reliably identify the left ventricular ejection fraction (LVEF) from data in its organization's electronic health record (EHR) system. LVEF is an important number for stratifying patients with congestive heart failure (CHF), allowing better assessment of the appropriateness of their medical management. LVEF can be measured in multiple ways, most commonly via an echocardiogram, a test that uses sound waves to show contraction of the heart wall muscles.
One might think that recording LVEF in an EHR is a relatively straightforward task. Unfortunately, the number itself is not always reported as a single value, but sometimes as a range (e.g., 35-40%) or as a cut-point (e.g., < 25%). Furthermore, different physician groups in the organization (cardiologists, family physicians, internists, etc.) tend to report LVEF in different stylistic ways. An obvious solution to recording LVEF consistently and accurately might be to designate a specific field in the EHR, although getting all clinicians and technicians in an organization to use such a field properly is not always easy.
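To make the problem concrete, here is a minimal sketch, in Python, of what normalizing these free-text LVEF styles might look like. The regular expressions and the interval representation are my own illustrative assumptions, not taken from any actual EHR or from the cardiology project described above:

```python
import re
from typing import Optional, Tuple

# Illustrative patterns for the three reporting styles described above: a
# range ("35-40%"), a cut-point ("< 25%"), and a single number ("55%").
RANGE_RE = re.compile(r"(\d{1,3})\s*-\s*(\d{1,3})\s*%")
CUTPOINT_RE = re.compile(r"([<>]=?)\s*(\d{1,3})\s*%")
SINGLE_RE = re.compile(r"(\d{1,3})\s*%")

def normalize_lvef(text: str) -> Optional[Tuple[float, float]]:
    """Map a free-text LVEF report to a (low, high) percentage interval."""
    m = RANGE_RE.search(text)
    if m:
        return float(m.group(1)), float(m.group(2))
    m = CUTPOINT_RE.search(text)
    if m:
        value = float(m.group(2))
        # Represent "< 25%" as the interval 0-25 and "> 55%" as 55-100.
        return (0.0, value) if "<" in m.group(1) else (value, 100.0)
    m = SINGLE_RE.search(text)
    if m:
        value = float(m.group(1))
        return value, value
    return None  # unparseable; route to human review

print(normalize_lvef("EF 35-40% on echo"))  # (35.0, 40.0)
print(normalize_lvef("LVEF < 25%"))         # (0.0, 25.0)
print(normalize_lvef("LVEF 55%"))           # (55.0, 55.0)
```

Even a sketch like this shows why a designated structured field is preferable: every new stylistic variant requires another pattern, and anything the patterns miss falls back to manual review.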
The second happening came from a cancer-related project. This institution's cancer center treats both patients who receive all their care within the institution and those who are referred from external practices or centers. While the patients getting all their care in the institution have laboratory data in the institutional EHR, the latter arrive with records formatted in different ways on different types of media. Data come in a whole gamut of forms, from structured electronic records to semi-formatted electronic documents to scanned document images (PDFs). With the move to personalized medicine, the cancer center wants every data point in structured electronic form. Even when data arrive in somewhat structured electronic forms, there is inconsistent use of standards for formatting data and naming tests. Standards such as LOINC provide format and terminology standardization, but not all centers use them, resulting in inconsistent formatting and naming of structured data.
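As a small illustration of what LOINC-based naming buys, here is a sketch of a local-name-to-LOINC lookup. The LOINC codes shown are, to the best of my knowledge, published codes for common tests; the local test names are invented for the example:

```python
from typing import Optional

# Hypothetical site-specific test names mapped to standard LOINC codes.
LOCAL_TO_LOINC = {
    "GLU": "2345-7",             # Glucose [Mass/volume] in Serum or Plasma
    "Glucose, serum": "2345-7",  # same test, different local name
    "HGB": "718-7",              # Hemoglobin [Mass/volume] in Blood
    "Hemoglobin, whole blood": "718-7",
}

def to_loinc(local_name: str) -> Optional[str]:
    """Translate a site-specific test name to its standard LOINC code."""
    return LOCAL_TO_LOINC.get(local_name.strip())

# Two sites' different names for the same test resolve to one shared code,
# which is what makes results comparable across institutions:
assert to_loinc("GLU") == to_loinc("Glucose, serum") == "2345-7"
```

If every referring center coded its results this way at the source, the receiving cancer center would not need a mapping table at all.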
Seeking solutions for getting lab data into a more consistent format and structure, the center engaged an external developer, who demonstrated software tools, including some using natural language processing (NLP), that could decode the data and put it into standardized form. There is no question that the cancer center needs the data it requires here and now, but this work would be unnecessary, and its expense avoided, if the healthcare industry were to adopt and universally use standards for laboratory and other data. It is unfortunate that healthcare organizations have to spend money on a decoding process that can be likened to unscrambling an egg: trying to reconstitute data that was once structured in a laboratory information system or EHR but is now in free-text form, or even worse, in a scanned image.
This problem is unfortunately not unique to laboratory data. It applies to other types of data as well, such as pharmacy data, which not only has the same naming and formatting problems but also the added challenge of data provenance, i.e., knowing where a data point originated and thus what it actually means. We know that there is a drop-off between the proportion of patients who are given prescriptions and those who actually fill them, and another drop-off between those who fill prescriptions and those who actually take the medication [1]. Determining that a patient is actually taking a drug is not a simple matter of seeing whether it was mentioned in the physician's plan, generated as a prescription, or even filled at a pharmacy. This impacts all aspects of care, but especially downstream applications of the data removed from the care process, such as research or quality measurement.
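One way to make this provenance distinction explicit, sketched here in Python with names entirely of my own invention, is to record how far along the prescribing cascade the evidence for each medication actually reaches:

```python
from dataclasses import dataclass
from enum import IntEnum

class MedEvidence(IntEnum):
    """How far along the prescribing cascade the evidence reaches."""
    MENTIONED_IN_PLAN = 1  # the drug appears in the physician's note
    PRESCRIBED = 2         # an order or e-prescription was generated
    DISPENSED = 3          # the pharmacy filled the prescription
    TAKEN = 4              # intake verified (e.g., by patient report)

@dataclass
class MedicationRecord:
    drug: str
    evidence: MedEvidence

# A quality measure or research query should state which level it needs;
# a dispensed prescription still does not establish the drug was taken.
record = MedicationRecord("lisinopril", MedEvidence.DISPENSED)
assert record.evidence < MedEvidence.TAKEN
```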
Therefore, while NLP can certainly help in decoding some aspects of the medical record, I believe it is a waste of time and money to try to use it to unscramble eggs. This is another reason why it is becoming imperative for data to adhere to standards and to be interoperable.
Fortunately, interoperability has become a major priority for ONC, which has begun developing a
"shared, nationwide roadmap" for achieving it. This work began earlier in 2014 with the release of a 10-year vision to achieve an interoperable health IT infrastructure [2]. Subsequently, ONC launched a process to develop an explicit roadmap with milestones at three, six, and ten years [3].
Many factors spurred the ONC into action. One was a report last year noting that while adoption of EHRs has been very high, especially in hospitals, there has been much less uptake of health information exchange (HIE) [3]. In addition, earlier this year, a report commissioned by the
Agency for Healthcare Research and Quality (AHRQ) was produced by
JASON, an independent group of scientists that advises the US government on science and technology issues [4]. The JASON report noted many of the flaws in the current health IT environment, especially the factors impeding interoperability and, as a result, HIE. Part of the ONC action includes a task force to address the issues raised by the JASON report.
The JASON report laments the lack of an architecture supporting standardized application programming interfaces (APIs), which allow interoperating computer programs to call each other and access each other's data. The report also criticizes current EHR vendor technology and business practices, which it describes as impediments to achieving interoperability. The report recommends a new focus on creating a "unifying software architecture" that will allow migration of data from legacy systems to a new "centrally orchestrated architecture" that will better serve clinical care, research, and patient uses. It proposes that this architecture be based on a set of public APIs for access to clinical documents and discrete data from EHRs, combined with increased consumer control over how data are used.
In addition, the JASON report advocates a transition toward more finely granular data, which the task force views as akin to going from structured documents, such as the Consolidated Clinical Document Architecture (CCDA), to more discrete data elements. One new standards activity that may enable this move to more discrete, consistently formatted data is Fast Healthcare Interoperability Resources (FHIR) [5]. FHIR is viewed by some as an API into structured discrete elements that will presumably adhere to terminology standards, thus potentially playing a major role in efforts to achieve data interoperability [6]. The HL7 Web site has a very readable and informative overview of FHIR from a clinical perspective [7].
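To give a flavor of what such an API looks like in practice, here is a short Python sketch of a FHIR REST search for a patient's LVEF observations. The server URL is hypothetical, and I am assuming the LOINC code for left ventricular ejection fraction (10230-1); the search syntax itself follows the FHIR specification:

```python
import requests

# Hypothetical FHIR server; a real deployment would have its own base URL.
FHIR_BASE = "https://fhir.example.org"

def fetch_lvef_observations(patient_id: str) -> list:
    """Search for a patient's LVEF results via the FHIR REST API.

    If every LVEF result were coded with its LOINC code, the cardiology
    group's question from earlier in this posting would be a one-line
    query rather than a text-mining project.
    """
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,
            # FHIR token search: terminology system and code joined by "|"
            "code": "http://loinc.org|10230-1",
        },
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()  # a FHIR Bundle of Observation resources
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Hypothetical usage:
# for obs in fetch_lvef_observations("12345"):
#     print(obs.get("valueQuantity"))
```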
It is easy to see how the interoperability work described in the second half of this posting, if implemented properly and successfully, could go a long way toward solving the two problems described in the first half. Having a reliable way to define the format and naming of LVEF and laboratory results would enable cardiology groups to improve (among other things) quality measurement and oncology groups to march forward toward the vision of personalized medicine.
References
1. Tamblyn, R, Eguale, T, et al. (2014). The incidence and determinants of primary nonadherence with prescribed medication in primary care: a cohort study.
Annals of Internal Medicine. 160: 441-450.
2. DeSalvo, KB (2014). Developing a Shared, Nationwide Roadmap for Interoperability.
Health IT Buzz, August 6, 2014.
http://www.healthit.gov/buzz-blog/from-the-onc-desk/developing-shared-nationwide-roadmap-interoperability/.
3. Anonymous (2013).
Principles and Strategy for Accelerating Health Information Exchange (HIE). Washington, DC, Department of Health and Human Services.
http://www.healthit.gov/sites/default/files/acceleratinghieprinciples_strategy.pdf.
4. Anonymous (2014).
A Robust Health Data Infrastructure. McLean, VA, MITRE Corp.
http://healthit.gov/sites/default/files/ptp13-700hhs_white.pdf.
5. Slabodkin, G (2014). FHIR Catching On as Open Healthcare Data Standard.
Health Data Management, September 4, 2014.
http://www.healthdatamanagement.com/news/FHIR-Catching-On-as-Open-Healthcare-Data-Standard-48739-1.html.
6. Munro, D (2014). Setting Healthcare Interop On Fire.
Forbes, March 30, 2014.
http://www.forbes.com/sites/danmunro/2014/03/30/setting-healthcare-interop-on-fire/.
7. Anonymous (2014).
FHIR for Clinical Users. Ann Arbor, MI, Health Level 7.
http://wiki.hl7.org/index.php?title=FHIR_for_Clinical_Users.