Tuesday, September 30, 2014

Milestones and Greatest Hits for the Informatics Professor Blog

In recent months, this blog has hit several numerical milestones. Over the summer, the blog surpassed 200,000 page views since its inception in January 2009. The blog now has over 400 followers who are regularly updated about new postings, not to mention those who follow it via Twitter feeds (@OHSUInformatics and @williamhersh, and their numerous retweets), Facebook postings (my own and those of the various OHSU groups), and a number of sites that repost entries (HITECH Answers, Health Data Management, the American College of Physicians, and others). In addition, the blog recently surpassed 200 postings dating back to early 2009. I am not a "stream of consciousness" type of blogger; instead, I post only when I believe I have something interesting and coherent to say.

Perhaps this is a good time to reflect back and consider: what are this blog's "greatest hits"? Many of my postings have news pegs that lose relevance over time, but others I consider to be essays of more enduring value. Here is a list of those, which I might consider my all-time greatest hits (and not necessarily those with the most page views):

Thursday, September 25, 2014

Continued Good News for the Health IT Workforce

The job and career opportunities in health information technology (HIT) continue to grow, even though we are reaching the end of the "stimulus" of the Health Information Technology for Economic and Clinical Health (HITECH) Act. Two recent surveys from HIMSS Analytics and HealthITJobs.com show that the bullish attitude I maintain about jobs and careers in HIT and informatics is warranted.

The HIMSS Analytics Survey queried 200 senior executives from healthcare provider and vendor organizations. In last year's (2013) survey, about 79% reported plans to hire in the following year, and 84% reported that they actually did hire during that year. About 82% now report plans to hire in the coming year, with about half planning to hire 1-5 FTE and the remainder planning to hire more (10% plan to hire more than 20 FTE!).

The top hiring needs for provider organizations in the past year were in:
  • Clinical Application Support - 64%
  • Help Desk - 57%
  • IT Management - 45%
  • Project Management - 35%
  • IT Security - 34%
The top hiring areas for vendors and consultants were:
  • Sales/Marketing Team - 78%
  • Field Support Staff - 75%
  • Support Staff - 73%
  • Executive Team - 60%
Similar to other surveys in the past, this one continued to show consequences for organizations from a lack of adequate or qualified staff. About 35% of organizations reported projects being put on hold due to lack of staff, with 38% reporting scaling back IT projects for the same reason.

The HealthITJobs.com survey focused more on salaries. It found an average salary of nearly $90K, with 30% of respondents reporting a bonus averaging around $13K. Salaries were highest among the following types of positions:
  • Project managers - $111K
  • Healthcare informatics - $94K
  • Systems analyst - $82K
  • Implementation consultant - $81K
  • Clinical applications - $78K
  • Training - $74K
Not surprisingly, salary increased with experience and was also higher for those with healthcare IT experience ($89K) than for those without ($54K). Certification was also associated with higher earnings. Salary varied by geographic region (highest in the Mid-Atlantic and lowest in the Midwest and Southeast) and by EHR vendor experience (highest for Epic and lowest for Allscripts and Meditech). About 80% reported job satisfaction, with the most common reasons being the ability to learn new skills, the ability to advance their careers, and income potential.

These surveys show that informatics continues to be a rewarding career, with good pay and strong job satisfaction. Nothing is certain in healthcare, but the opportunities for careers in informatics will likely be strong in the foreseeable future.

Tuesday, September 23, 2014

Clinfowiki Returns to OHSU

Back in 2005, when he was still a faculty member at Oregon Health & Science University (OHSU), Dean Sittig, PhD, established the Clinical Informatics Wiki (Clinfowiki), a wiki devoted to topics in clinical informatics. The Clinfowiki site has been popular over the years, accumulating over 11 million page views. Clinfowiki has been built out in part through content added by OHSU students for their course project in Clinical Information Systems (BMI 512), a course in OHSU's biomedical informatics graduate program.

When Dr. Sittig moved on to become a Professor at the University of Texas School of Health Information Sciences at Houston, he maintained his role in Clinfowiki but also brought on help from Vishnu Mohan, MD, MBI, a new informatics faculty member at OHSU. Dr. Mohan took over teaching the Clinical Information Systems course and continued the Clinfowiki assignment in the class. Some of Dr. Sittig's students at his new university added content as well, as did people from other places who signed up for editing privileges.

Between the course and other contributors, the wiki currently contains 866 content topics, with over 3000 pages of information. Over 1000 registered users have contributed over 16,500 page edits since Clinfowiki was launched.

For those who wish to add or modify content, the Log In/Create Account link at the top right of the screen provides access to a form where individuals can request an account with editing privileges.

Clinfowiki, like many good wikis, represents a stellar example of collaborative knowledge resource development. We hope to see it continue to grow and serve as a useful resource in clinical informatics.

Sunday, September 14, 2014

Efficacy Is Not Leading to Effectiveness: The Dichotomy of Health Information Technology

I often get involved in debates about the value of health information technology (HIT) interventions in healthcare. While the optimist in me likes to point to the growing body of scientific evidence showing efficacy, the realist in me takes seriously the negative outcomes that some studies, as well as reported experiences, show. This leads to a question some may ask: why does this apparent dichotomy exist between scientific evidence supporting the use of HIT and widespread dissatisfaction with it in many settings?

A number of "negative" studies have appeared in recent months [1, 2], although these studies have significant methodologic limitations that I will describe further below. In addition, the scientific basis for use of HIT remains strong. Systematic reviews in recent years have affirmed its value, whether approached from the standpoint of clinical outcomes [3] or meaningful use criteria [4]. Nonetheless, there is widespread dissatisfaction among many users of HIT, especially physicians, as exemplified in a couple of surveys published by the magazine Medical Economics last year [5]. The advocacy of esteemed groups such as the Institute of Medicine for more study and regulation around HIT safety demonstrates that such problems are real [6].

While some in the informatics field point to more nefarious reasons for this apparent dichotomy, such as financial motivations by those who stand to benefit, i.e., EHR vendors, I believe that HIT has a fundamental difficulty in translating efficacy into effectiveness. The difference between efficacy and effectiveness is a well-known concept in clinical epidemiology, best demonstrated by clinical interventions (tests, treatments, etc.) that work well in highly controlled settings, such as well-resourced academic medical centers, or in patient populations that lack the co-morbid conditions most patients in the healthcare system typically have, yet perform less well in routine, real-world care [7].

It is also worthwhile to delve further into the methodology of some of these negative studies, especially in the current highly charged political environment around HIT, including its role in healthcare reform. Take the study of Samal et al. [1]. This investigation compared quality of care, as measured by performance on mostly process-based quality measures, between physicians in a single organization who achieved Stage 1 of meaningful use and those who did not. There are all sorts of questions about whether quality measures unrelated to an EHR intervention are a good gauge of an EHR system's value. There is also an inconsistent relationship between performance on quality measures and patient outcomes from care [8].

The study by McDonald et al. surveyed internal medicine physicians about various aspects of EHR use, such as whether it added or diminished free time [2]. Nearly 60% of respondents indicated EHR use reduced free time by an average of 77.5 minutes per day. Although many other variables were assessed, such as EHR vendor and practice size and setting, there was no analysis of which of these factors may have impacted free time. In particular, it would be interesting to compare the 60% who reported losing time with the 15% who said EHRs made them more efficient and the 26% who said the time change was neutral. What was it about the physicians who did not lose time with their EHRs that made them different from their colleagues who claimed lost time? Was it their vendor? Their practice situation or size? Or maybe even the availability of clinical informatics expertise guiding them?

Another concern about this study is that it was a recall-based survey. More useful would have been actual time-motion studies; these have been done in the past, and the added time was minimal [9]. It would also have been good to ask these physicians whether they wanted to return to the days of paper records, with their illegibility, inaccessibility, and other problems.

I am in no way arguing that negative studies of EHRs should be discounted. But as in all areas of scientific study, we must weigh all the evidence. It is clear that a major challenge for HIT is how to translate efficacy into effectiveness. This requires research looking at why its benefits are not readily generalizable to different settings. Such studies need to assess all possible factors, from healthcare setting type to physician characteristics to the availability of suitable informatics expertise. We must also not lose sight of what we are trying to improve with HIT, namely a healthcare system that is unsafe, wasteful, and achieves suboptimal outcomes [10].

References

1. Samal, L, Wright, A, et al. (2014). Meaningful use and quality of care. JAMA Internal Medicine. 174: 997-998.
2. McDonald, CJ, Callaghan, RM, et al. (2014). Use of internist's free time by ambulatory care electronic medical record systems. JAMA Internal Medicine: Epub ahead of print.
3. Buntin, MB, Burke, MF, et al. (2011). The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Affairs. 30: 464-471.
4. Jones, SS, Rudin, RS, et al. (2014). Health information technology: an updated systematic review with a focus on meaningful use. Annals of Internal Medicine. 160: 48-54.
5. Verdon, DR (2014). Physician outcry on EHR functionality, cost will shake the health information technology sector. Medical Economics, February 10, 2014. http://medicaleconomics.modernmedicine.com/medical-economics/news/physician-outcry-ehr-functionality-cost-will-shake-health-information-technol.
6. Anonymous (2012). Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC, National Academies Press.
7. Singal, AG, Higgins, PDR, et al. (2014). A primer on effectiveness and efficacy trials. Clinical and Translational Gastroenterology. 5: e45. http://www.nature.com/ctg/journal/v5/n1/full/ctg201313a.html.
8. Houle, SK, McAlister, FA, et al. (2012). Does performance-based remuneration for individual health care practitioners affect patient care?: a systematic review. Annals of Internal Medicine. 157: 889-899.
9. Overhage, JM, Perkins, S, et al. (2001). Controlled trial of direct physician order entry: effects on physicians' time utilization in ambulatory primary care internal medicine practices. Journal of the American Medical Informatics Association. 8: 361-371.
10. Smith, M, Saunders, R, et al. (2012). Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC, National Academies Press.

Saturday, September 6, 2014

Unscrambling Eggs and the Need for Comprehensive Data Standards and Interoperability

Two local informatics-related happenings recently provided teachable moments demonstrating why a comprehensive approach to standards and interoperability is so critical for realizing the value of health IT. Fortunately, the Office of the National Coordinator for Health IT (ONC) has prioritized interoperability among its activities moving forward, and other emerging work on standards provides hope that the problems I will describe, which occurred locally (and I know occur in many other places), might be avoided in the future.

One of the local happenings came from a cardiology-related project that has been trying to improve performance on quality of care measures. As a starting point, the cardiology group wanted to precisely identify the measure of left ventricular ejection fraction (LVEF) from data in its organization's electronic health record (EHR) system. LVEF is an important number for stratifying patients with congestive heart failure (CHF), thus allowing better assessment of the appropriateness of their medical management. The value for LVEF is a number that can be measured in multiple ways, most commonly via an echocardiogram test that uses sound waves to show contraction of the heart wall muscles.

One might think that recording LVEF in an EHR is a relatively straightforward task. Unfortunately, the number itself is not always reported as a single number, but sometimes as a range (e.g., 35-40%) or as a cut-point (e.g., < 25%). Furthermore, different physician groups in the organization (e.g., cardiologists, family physicians, internists, etc.) tend to report LVEF in different stylistic ways. An obvious solution to recording LVEF consistently and accurately might be to designate a specific field in the EHR, although getting all clinicians and technicians in an organization to use such a field properly is not always easy.
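To make this normalization problem concrete, here is a minimal sketch in Python of how the three reporting styles mentioned above (a single number, a range, a cut-point) might be reduced to a common structured form. The function name and regular expressions are my own illustrations, not taken from any actual project:

```python
import re
from typing import Optional, Tuple

def parse_lvef(text: str) -> Optional[Tuple[float, float]]:
    """Normalize a free-text LVEF value into a (low, high) percent range.

    Handles three styles seen in narrative reports:
      "55%"    -> (55.0, 55.0)  single number
      "35-40%" -> (35.0, 40.0)  range
      "< 25%"  -> (0.0, 25.0)   cut-point
    Returns None if the text does not look like an LVEF value.
    """
    text = text.strip()
    # Cut-point, e.g., "< 25%"
    m = re.match(r"^<\s*(\d+(?:\.\d+)?)\s*%?$", text)
    if m:
        return (0.0, float(m.group(1)))
    # Range, e.g., "35-40%"
    m = re.match(r"^(\d+(?:\.\d+)?)\s*-\s*(\d+(?:\.\d+)?)\s*%?$", text)
    if m:
        return (float(m.group(1)), float(m.group(2)))
    # Single number, e.g., "55%"
    m = re.match(r"^(\d+(?:\.\d+)?)\s*%?$", text)
    if m:
        value = float(m.group(1))
        return (value, value)
    return None

for raw in ["55%", "35-40%", "< 25%"]:
    print(raw, "->", parse_lvef(raw))
```

Even this tiny example shows why a designated structured field is preferable: every new stylistic variant in the narrative text requires another pattern, and silent failures are easy to miss.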

The second happening came from a cancer-related project. This institution's cancer center treats both patients who receive all their care within the institution and those who are referred from external practices or centers. While the patients getting all their care in the institution have laboratory data in the institutional EHR, the latter come with records that are formatted in different ways in different types of media. Data come in a whole gamut of forms, from structured electronic data to semi-formatted electronic documents to scanned document images (PDFs). With the move to personalized medicine, the cancer center desires every data point in electronic form. Even when data are in somewhat structured electronic forms, there is inconsistent use of standards for formatting of data and/or naming of tests. While standards such as LOINC provide format and terminology standardization, not all centers use them, which results in inconsistent formatting and naming of structured data.
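To sketch what the standards-based alternative looks like, consider a simple crosswalk from local laboratory test names to LOINC codes. The source systems and local codes below are invented for illustration, though the LOINC codes themselves are real, commonly used laboratory codes:

```python
from typing import Optional

# Hypothetical crosswalk from (source system, local test code) to LOINC.
# The source systems and local codes are invented; the LOINC codes are real.
LOCAL_TO_LOINC = {
    ("lab_a", "HGB"):        "718-7",   # Hemoglobin [Mass/volume] in Blood
    ("lab_a", "GLU"):        "2345-7",  # Glucose [Mass/volume] in Serum or Plasma
    ("lab_b", "Hemoglobin"): "718-7",   # same test, different local name
    ("lab_b", "Na"):         "2951-2",  # Sodium [Moles/volume] in Serum or Plasma
}

def to_loinc(source: str, local_code: str) -> Optional[str]:
    """Map a source-specific test name to its LOINC code, if known."""
    return LOCAL_TO_LOINC.get((source.lower(), local_code))

# Two differently named hemoglobin results resolve to the same LOINC code,
# which is what makes aggregating data across institutions possible.
print(to_loinc("lab_a", "HGB"))         # 718-7
print(to_loinc("lab_b", "Hemoglobin"))  # 718-7
```

When every sending system codes its results this way at the source, the receiving institution can aggregate data directly; when they do not, someone has to build and maintain crosswalks like this one after the fact.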

Seeking solutions for getting lab data into a more consistent format and structure, the center engaged an external developer, which demonstrated software tools, including some using natural language processing (NLP), that it could employ to decode the data and put it into standardized form. There is no question that the cancer center needs to get the data it requires here and now, but this step would not be necessary, and its expense avoided, if the healthcare industry were to adopt and universally use standards for laboratory and other data. It is unfortunate that healthcare organizations have to spend money on a decoding process that can be likened to unscrambling an egg. It is a waste of time and money to try to reconstitute data that was once structured in a laboratory information system or EHR and is now in free-text form, or even worse, in a scanned image.

This problem is unfortunately not unique to laboratory data. It applies equally to other types of data, such as pharmacy data, which not only have the same naming and formatting problems but also the added problem of data provenance, i.e., where the data came from and what they actually mean. We know that there is a drop-off between the proportion of patients who are given prescriptions and those who actually fill them, and then another drop-off between those who fill prescriptions and those who actually take the medication [1]. Determining that a patient is actually taking a drug is not a simple matter of seeing whether it was mentioned in the physician's plan, generated as a prescription, or even filled at a pharmacy. This impacts all aspects of care, but especially downstream applications of the data removed from the care process, such as research or quality measurement.

Therefore, while NLP can certainly help in decoding some aspects of the medical record, I believe it is a waste of time and money to try to use it to unscramble eggs. This is another reason why the need for data to adhere to standards and to be interoperable is becoming imperative.

Fortunately, interoperability has become a major priority for ONC, which has launched a process to develop a "shared, nationwide roadmap" to achieving it. This process began earlier in 2014 with the release of a 10-year vision to achieve an interoperable health infrastructure [2]. Subsequently, a process has been launched to develop an explicit roadmap with milestones for three, six, and ten years [3].

Many factors spurred the ONC into action. One was a report last year noting that while adoption of EHRs has been very high, especially in hospitals, there has been much less uptake of health information exchange (HIE) [3]. In addition, earlier this year, a report commissioned by the Agency for Healthcare Research and Quality (AHRQ) was produced by JASON, an independent group of scientists that advises the US government on science and technology issues [4]. The JASON report noted many of the flaws in the current health IT environment, especially the factors impeding interoperability and, as a result, HIE. Part of the ONC action includes a task force to address the issues raised by the JASON report.

The JASON report laments the lack of an architecture supporting standardized application programming interfaces (APIs), which allow interoperating computer programs to call each other and access each other's data. The report also criticizes current EHR vendor technology and business practices, which it calls impediments to achieving interoperability. The report recommends a new focus on creating a "unifying software architecture" that will allow migration of data from legacy systems to a new "centrally orchestrated architecture" that will better serve clinical care, research, and patient uses. It proposes that this architecture be based on a set of public APIs for access to clinical documents and discrete data from EHRs, combined with increased consumer control of how data are used.

In addition, the JASON report advocates a transition toward more finely granular data, which the task force views as akin to going from structured documents, such as the Consolidated Clinical Document Architecture (CCDA), to more discrete data elements. One new standards activity that may enable this move to more discrete, consistently formatted data is Fast Healthcare Interoperability Resources (FHIR) [5]. FHIR is viewed by some as an API into structured discrete elements that presumably will adhere to terminology standards, thus potentially playing a major role in efforts to achieve data interoperability [6]. The HL7 Web site has a very readable and informative overview of FHIR from a clinical perspective [7].
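To give a flavor of what discrete, standards-coded data looks like, here is a pared-down sketch of a FHIR Observation resource for an LVEF measurement, expressed as a Python dictionary. It is illustrative only; a complete resource conforming to the FHIR specification would include additional elements such as the subject and the effective time:

```python
import json

# A pared-down FHIR Observation for a left ventricular ejection fraction
# (LVEF) measurement. Illustrative only, not a complete, validated resource.
lvef_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "10230-1",  # LOINC: Left ventricular ejection fraction
            "display": "Left ventricular Ejection fraction",
        }]
    },
    # A discrete value with explicit units, rather than "EF 55%" buried
    # somewhere in a narrative echocardiogram report.
    "valueQuantity": {"value": 55, "unit": "%"},
}

print(json.dumps(lvef_observation, indent=2))
```

Because the code and the units are explicit, a receiving system, or a quality-measurement query like the cardiology group's, can consume the value directly instead of parsing narrative text.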

It is easy to see how the interoperability work described in the second half of this posting, if implemented properly and successfully, could go a long way to solving the two problems described in the first half. Having a reliable way to define the format and naming of LVEF and laboratory results would enable cardiology groups to improve (among other things) quality measurement and oncology groups to march forward toward the vision of personalized medicine.

References

1. Tamblyn, R, Eguale, T, et al. (2014). The incidence and determinants of primary nonadherence with prescribed medication in primary care: a cohort study. Annals of Internal Medicine. 160: 441-450.
2. DeSalvo, KB (2014). Developing a Shared, Nationwide Roadmap for Interoperability. Health IT Buzz, August 6, 2014. http://www.healthit.gov/buzz-blog/from-the-onc-desk/developing-shared-nationwide-roadmap-interoperability/.
3. Anonymous (2013). Principles and Strategy for Accelerating Health Information Exchange (HIE). Washington, DC, Department of Health and Human Services. http://www.healthit.gov/sites/default/files/acceleratinghieprinciples_strategy.pdf.
4. Anonymous (2014). A Robust Health Data Infrastructure. McLean, VA, MITRE Corp. http://healthit.gov/sites/default/files/ptp13-700hhs_white.pdf.
5. Slabodkin, G (2014). FHIR Catching On as Open Healthcare Data Standard. Health Data Management, September 4, 2014. http://www.healthdatamanagement.com/news/FHIR-Catching-On-as-Open-Healthcare-Data-Standard-48739-1.html.
6. Munro, D (2014). Setting Healthcare Interop On Fire. Forbes, March 30, 2014. http://www.forbes.com/sites/danmunro/2014/03/30/setting-healthcare-interop-on-fire/.
7. Anonymous (2014). FHIR for Clinical Users. Ann Arbor, MI, Health Level 7. http://wiki.hl7.org/index.php?title=FHIR_for_Clinical_Users.