Monday, December 31, 2018

I always use my last posting of the year to this blog to reflect on the year past. As I have noted each year, this blog started at the time of a major transformation for the informatics field, namely the Health Information Technology for Economic and Clinical Health (HITECH) Act. Now, almost 10 years later (10-year anniversary post coming in March!), the era of “meaningful use” is drawing to a close.
The year 2018 was a year of milestones and gratitude for me. I celebrated my 60th birthday, grateful for my personal health and well-being. My immediate family, my wife and two children, are also doing very well in their personal as well as professional lives. This year also marked the 15th year of the Department of Medical Informatics & Clinical Epidemiology (DMICE) in the School of Medicine at Oregon Health & Science University (OHSU), the department I have served as its one and only Chair. With DMICE, I am grateful not only for my own academic success but also for being able to provide an environment in which faculty, students, and staff can achieve their own accomplishments.
Another milestone for 2018 was my 28th year at OHSU. It is somewhat uncommon these days for a high-profile academic to spend a whole career at a single institution. I have certainly been asked to look at other jobs over the years, as most academics always are, but nothing has ever appealed to me enough to consider leaving not only OHSU, but also Portland, Oregon. Since the Biomedical Information Communication Center (BICC) Building opened in 1991, I have had only two offices, and have been in my current one for over 20 years.
I am happy to report that despite my relatively static work location, I have changed and grown in place. In academia, as in almost every other knowledge field, one must evolve one's knowledge and skills as the field itself evolves. I am grateful that my job has afforded me the ability to grow professionally and intellectually. In fact, there are few things more exciting than being immersed in the field as new ideas and technologies emerge. A decade ago it was the emerging value of the electronic health record (EHR); today it is the growth of data and how we can put it to good use, such as via machine learning. But just as we learned with EHR adoption during the HITECH Act, implementing technology, especially in healthcare, does not always go according to plan. While the emergence of machine learning is exciting, it will be interesting to see how it will impact day-to-day medical practice.
Life does not last forever, but as long as I continue to enjoy my work and do it competently, I certainly have no plans to stop. It will also be interesting to see what new advances come down the pike in informatics, some of which we might be able to predict but others that will emerge out of nowhere.
Wednesday, December 19, 2018
Preserving What We Hold Dear About the Internet?
Hardly a day goes by without some explosive report in the news about modern Internet platforms and their adverse effects on our personal lives or on our political or economic systems. But with our personal and professional lives so deeply intertwined with these platforms, going “off the grid” is hardly an answer. How do we preserve what is good about our networked lives while trying to identify and eliminate the bad? I do not have answers but hope to raise discussion on the question.
Even though I am way too old to be a “digital native,” computers and the Internet have played a large role in my personal and professional life for several decades. I received my first Internet email address in the late 1980s as a postdoctoral fellow. I often tell the story of my jaw dropping the first time I saw the graphical Web browser, NCSA Mosaic, in 1992. While I had read articles about this new World Wide Web, I was initially skeptical because I could not envision the Internet of the time being able to support interaction (e.g., downloading and rendering Web pages) in real time. But seeing Mosaic made me instantly realize how transformative the Web would be. Fast-forwarding a few years, with the emergence of Google, I sometimes joke that my life would be very different had I come up with the idea of ranking Web search output by links in my own information retrieval research. At the end of the decade, a seemingly minor decision to put my course online in 1999 led to a major transformation of my career into a passion for educational technology. Now in modern times, my personal life has fused with Facebook, through which I can easily share parts of my life with family, friends, and colleagues. In addition, most of my teaching is online, I enjoy sharing running routes with fellow runners, and the ubiquitous worldwide reach of cellular and Wi-Fi networks makes travel and just about everything else much easier.
But clearly there are downsides to the Internet and to the proliferation of computational devices, and all of the data they hold, that are connected to it. The biggest current news, of course, is the manipulation of social media and search engines by the Russian government. Right behind that are concerns about the business practices of Facebook and how it selectively shares our data, especially with certain business partners. There are also concerns about the ease with which hate groups disseminate content to their followers, for example on YouTube and Twitter. Another worry is the growing commerce monopoly of Amazon, despite the fact that many of us find it so convenient for many things we need. There is also growing concern about what is done with our detailed digital activities, which are tracked and used, sometimes for good but other times not.
The solutions to these problems are not easy. Sure, we can try to maintain a balance between our real and virtual lives. We can consider more regulation of these platforms, but I get nervous when we discuss regulating free speech. The question is how to protect freedom of expression while not allowing manipulation of news and elections by “bots” and other approaches. Education is certainly important, making sure the general population understands how these platforms work and how they can be used to manipulate public and political opinion. There is also the question of how to economically regulate those platforms that achieve monopoly status. There is no question that these issues will attract further attention from the news media, lawmakers, and others going forward.
Sunday, December 16, 2018
Kudos for the Informatics Professor - Fall 2018 Update
I had a busy summer and fall of 2018, with a number of talks and publications.
In September, I had the opportunity to be interviewed by The Jefferson Exchange, which is part of Jefferson Public Radio in southern Oregon (direct link to MP3 of interview).
I was also featured for the work I have contributed for over 10 years in partnership with the Emergency Medicine Informatics Section of the American College of Emergency Physicians (ACEP) to deliver a version of the 10x10 course. It was another successful year in general for the 10x10 course, with a total of 2517 people having completed the OHSU offering since 2005.
I was very busy at this year's American Medical Informatics Association (AMIA) Annual Symposium. I was among the 130 people inducted into the inaugural class of Fellows of AMIA (FAMIA). I also had a chance to describe our educational program at the Learning Showcase. In addition, I participated in a panel with three other academic colleagues entitled, Collaborative Science Within Academic Medical Centers: Opportunities and Challenges for Informatics.
I also had the opportunity to present OHSU Department of Medicine Grand Rounds on December 11, 2018, delivering the topic, Update in Clinical Informatics: Machine Learning, Interoperability, and Professional Opportunities (video and slides available).
Earlier in the year, I joined colleague Robert Hoyt, MD as a Co-Editor of the textbook, Health Informatics: Practical Guide (7th Edition), which is available both in print and eBook formats. The book is also available for the Amazon Kindle, as are other books of mine.
Also during this time period, I published a paper in the new AMIA journal, JAMIA Open.
Tuesday, December 11, 2018
Response to NIH RFI: Proposed Provisions for a Draft Data Management and Sharing Policy for NIH Funded or Supported Research
Earlier this year, I submitted a response (and posted it in this blog) to a National Institutes of Health (NIH) Request for Information (RFI) on a draft of their Strategic Plan for Data Science. My main concern was that while there was nothing in the report I did not agree with, I believed there needed to be more attention to the science of data science.
In October, the NIH released another RFI, this one entitled, Proposed Provisions for a Draft Data Management and Sharing Policy for NIH Funded or Supported Research. Similar to the Strategic Plan for Data Science, most of what is in this draft plan is reasonable in my opinion. But what concerns me more is, similar to the earlier RFI, what is left out.
My main concerns have to do with the definition and use of “scientific data.” Early on, the plan defines “scientific data” as “the recorded factual material commonly accepted in the scientific community as necessary to validate and replicate research findings including, but not limited to, data used to support scholarly publications.” The draft further notes that “scientific data do not include laboratory notebooks, preliminary analyses, completed case report forms, drafts of scientific papers, plans for future research, peer reviews, communications with colleagues, or physical objects, such as laboratory specimens. For the purposes of a possible Policy, scientific data may include certain individual-level and summary or aggregate data, as well as metadata. NIH expects that reasonable efforts should be made to digitize all scientific data.”
The draft report then runs through the various provisions. Among them are:
- Data Management and Sharing Plans - new requirements to make sure data is FAIR (findable, accessible, interoperable, and reusable)
- Related Tools, Software and/or Code - documentation of all the tools used to analyze the data, with a preference toward open-source software (or documentation of reasons why open-source software is not used)
- Standards - what standards, including data formats, data identifiers, definitions, and other data documentation, are employed
- Data Preservation and Access - processes and descriptions for how data is preserved and made available for access
- Timelines - for access, including whether any is held back to allow publication(s) by those who collect it
- Data Sharing Agreements, Licensing, and Intellectual Property - which of these are used and how so
The definition of scientific data implies that such data is only that which is collected in active experimentation or observation. This ignores the increasing amount of scientific research that does not come from experiments, but rather is derived from real-world measurements of health and disease. This includes everything from data routinely collected by mobile or wearable devices to social media to the electronic health record (EHR). A growing amount of research analyzes and makes inferences using such data.
It could be argued that this sort of data derived “from the wild” should adhere to the provisions above. However, this data is also highly personal and usually highly private. Would you or I want our raw EHR in a data repository? Perhaps connected to our genome data? But if such data are not accessible at all, then the chances for reproducibility are slim.
There is also another twist on this, which concerns data used for informatics research. In a good deal of informatics research, such as the patient cohort retrieval work I do in my own research [1], we use raw, identifiable EHR data. We then proceed to evaluate the performance of our systems and algorithms with this data. Obviously we want this research to be reproducible as well.
There are solutions to these problems, such as Evaluation as a Service [2] approaches that protect such data and allow researchers to send their systems to the data in walled-off containers and receive aggregate results. Maybe the approach in this instance would be to maintain encrypted snapshots of the data that could be decrypted under highly controlled circumstances.
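To make the Evaluation as a Service idea a bit more concrete, here is a minimal sketch of my own (the container image name, directory paths, and file layout are all hypothetical assumptions, not part of any actual NIH or EaaS implementation) showing how an evaluation service running inside a data enclave might execute a researcher's containerized system against protected EHR data and return only aggregate metrics:

```python
# Hypothetical sketch of an Evaluation-as-a-Service workflow: the researcher's
# system runs inside the data enclave, and only aggregate metrics ever leave it.
# Names such as PROTECTED_EHR_DIR and the container image are illustrative only.
import json
import subprocess
from pathlib import Path

PROTECTED_EHR_DIR = Path("/enclave/ehr_snapshot")   # identifiable data, never exported
GOLD_LABELS = json.loads((PROTECTED_EHR_DIR / "cohort_gold.json").read_text())

def run_submitted_system(container_image: str) -> dict:
    """Run the researcher's container against the protected data and capture
    its patient-level predictions, which stay inside the enclave."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--network=none",            # no outbound network
         "-v", f"{PROTECTED_EHR_DIR}:/data:ro", container_image],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)                           # {patient_id: predicted_label}

def aggregate_metrics(predictions: dict) -> dict:
    """Reduce patient-level predictions to aggregate metrics only."""
    tp = sum(1 for pid, label in GOLD_LABELS.items() if label == 1 and predictions.get(pid) == 1)
    fp = sum(1 for pid, label in GOLD_LABELS.items() if label == 0 and predictions.get(pid) == 1)
    fn = sum(1 for pid, label in GOLD_LABELS.items() if label == 1 and predictions.get(pid) != 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall, "n_patients": len(GOLD_LABELS)}

if __name__ == "__main__":
    preds = run_submitted_system("researcher/cohort-retrieval:latest")
    print(json.dumps(aggregate_metrics(preds)))                # only this summary is released
```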
In any case, the NIH Data Management and Sharing Policy for NIH Funded or Supported Research is a great starting point, but it should take a broader view of scientific data and develop policies to ensure research is reproducible. Research done with data that does not originate as scientific data should be accounted for, including when that data is used for informatics research.
References
1. Wu, S, Liu, S, et al. (2017). Intra-institutional EHR collections for patient-level information retrieval. Journal of the American Society for Information Science & Technology. 68: 2636-2648.
2. Hanbury, A, Müller, H, et al. (2015). Evaluation-as-a-service: overview and outlook. arXiv.org: arXiv:1512.07454. https://arxiv.org/abs/1512.07454.
Tuesday, October 30, 2018
A Great Time to be an Academic Informatician
My recent posting describing my updated study of the health IT workforce shows that this is a great time to work in operational health IT and informatics settings. Many of us, however, work as faculty or in other professional roles in academic health science centers, a smaller but critically important part of the informatics workforce. What are the prospects for those in academic informatics?
I would argue they are excellent. There are great opportunities now both for those who follow the traditional academic researcher/educator pathway as well as for those who focus their involvement in the more operational activities in academic health science centers.
For those following the more conventional faculty pathway, the grant funding situation is currently pretty good. While the main supporter of basic informatics research, the National Library of Medicine (NLM), has a small research budget, it has grown 14% with the increased federal funding to the National Institutes of Health (NIH) over the last couple of years. Fortunately, informatics researchers have more options. Despite attempts in some political quarters to de-fund the Agency for Healthcare Research & Quality (AHRQ), the agency continues to pursue and fund its research objectives, a decent portion of which involves informatics innovation. Likewise, the other institutes of the NIH, including those that are disease-oriented, offer opportunities for research that includes informatics activities. This includes not only the big initiatives, such as the All of Us Research Program, but also day-to-day work with others, such as the National Sleep Research Resource. There are also research funding opportunities from foundations, industry, and others.
Of course, one fortunate aspect of being academic informatics faculty is that activities are not limited to research. There are other opportunities in teaching (including teaching those beyond informatics students, such as healthcare professional students) and operational work (supporting and innovating in all of the missions of academic medical centers, which include clinical care, research, and education). Academic informaticians are often involved in the implementation of operational systems, especially those supporting healthcare delivery and research. Given the growth of informatics and data science, there are likely to be teaching opportunities for those of us who enjoy teaching our area of expertise to clinicians and others who work in healthcare.
For all of these reasons, I am pretty bullish on careers in academic informatics. While no career pathway in any field is a guarantee of success these days, there are plenty of opportunities for those seeking academic careers in informatics.
Friday, October 12, 2018
What are the Optimal Data Science and Machine Learning Competencies for Informatics Professionals?
Exactly 20 years ago, I organized a panel at the American Medical Informatics Association (AMIA) Annual Symposium that attracted so large an audience that the crowd spilled out of the room into the hallway. Entitled, What are the Optimal Computer Science Competencies for Medical Informatics Professionals?, the panel asked how much knowledge and skills in computer science were required to work professionally in informatics. In the early days of informatics, most informaticians had some programming skills and often contributed to the development of home-grown systems. Some educational programs, such as the one at Stanford University, had required courses in assembly language. (I took an assembler course myself during my informatics fellowship in the late 1980s.)
But as academic informatics systems grew in scope and complexity, they needed more engineering and hardening as they became mission-critical to organizations. At the same time, there was a recognized need for attention to people and organizational issues, especially in complex adaptive settings such as hospitals. Over time, most professional work in informatics has shifted from system building to implementing commercial systems.
With these changes, my evolving view has been that although few informatics professionals perform major computer programming, there is still value to understanding the concepts and thought process of computer science. While plenty of students enter our graduate program at Oregon Health & Science University with programming skills, our program will not turn those without programming skills into seasoned programmers. But I still believe it is important for all informatics professionals to understand the science of computing, even at the present time. This includes some programming to see computing concepts in action.
A couple decades later, I find myself asking a related question: how much data science and machine learning is required of modern informatics professionals? Clearly data science, machine learning, artificial intelligence, and related areas are very prominent now in the evolution of healthcare and biomedical science. But not everyone needs to be a "deep diver" into data science and machine learning. I often point this out by referring to the data analytics workforce reports from a few years ago that note the need for a five- to ten-fold larger ring of people who identify the needs for, put into practice, and communicate the results of the deep divers' work [1, 2]. I also note the observation of data analytics thought leader Tom Davenport, who has written about the importance of the roles of "light quants" or "analytical translators" in data-driven organizations (such as healthcare) [3].
Thus, to answer the question in the title of this post, competence in data science and machine learning may be analogous to the answer to the computer science question of a couple decades ago. Clearly, every informatician must have basic data science skills. These include knowing how to gather, wrangle, and carry out basic analysis of data. They should understand the different approaches to machine learning, even if they do not necessarily understand all of the deep mathematics behind them. And, critically, they must know how to apply data science and machine learning in their everyday professional practice of informatics.
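To give a sense of the level of skill I have in mind, the sketch below shows the kind of basic gather-wrangle-analyze exercise that any informatician should be able to read and reason about; the data file, column names, and prediction target are hypothetical, and the model is deliberately simple:

```python
# A minimal gather-wrangle-analyze example; the CSV file, columns, and target
# (predicting a hospital stay longer than 7 days) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Gather: load a de-identified extract of encounter data
df = pd.read_csv("encounters.csv")

# Wrangle: handle missing values and derive a simple outcome variable
df["age"] = df["age"].fillna(df["age"].median())
df["long_stay"] = (df["length_of_stay"] > 7).astype(int)

# Analyze: fit and evaluate a simple baseline model
X = df[["age", "num_prior_admissions", "charlson_index"]]
y = df["long_stay"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point is not this particular model but being able to follow, question, and communicate each of these steps.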
References
1. Manyika, J, Chui, M, et al. (2011). Big data: The next frontier for innovation, competition, and productivity, McKinsey Global Institute. http://www.mckinsey.com/insights/business_technology/big_data_the_next_frontier_for_innovation.
2. Anonymous (2014). IDC Reveals Worldwide Big Data and Analytics Predictions for 2015. Framingham, MA, International Data Corporation. http://bit.ly/IDCBigDataFutureScape2015.
3. Davenport, T (2015). In praise of “light quants” and “analytical translators”. Deloitte Insights. https://www2.deloitte.com/insights/us/en/topics/analytics/new-big-data-analytics-skills.html.
Tuesday, October 9, 2018
A Meaningful End to “Meaningful Use?”
The era of meaningful use came to a relatively quiet end this summer with the release of the Final Inpatient Prospective Payment System rule by the Centers for Medicare & Medicaid Services (CMS) this past August. The rule put into place most of what had been in the proposed rule earlier in the year. Although the rule has much detail on what healthcare organizations must achieve to receive incentive payments and/or avoid penalties, a large symbolic change is the renaming of the Medicare and Medicaid Electronic Health Record (EHR) Incentive Programs to the Promoting Interoperability Programs. The "meaningful use" moniker goes away, although under the new program, eligible professionals and hospitals still must demonstrate they are "meaningful users" of health information technology.
As someone who had a front-row seat for meaningful use and how it impacted the informatics world (in my case more teaching about it than being in the trenches implementing it), this is the end of an era that brought our field to national visibility. There is some success to be celebrated in the fact that 96% of hospitals and 85% of office-based clinicians have adopted some form of EHR. Overall, the new rules seem logical and fair, although some would argue that incentive payments should be based more on outcomes than process measures. In any case, there is still important work ahead as we step up to the challenge of making EHR systems better and leveraging the data in them to truly benefit health and healthcare.
Unlike in the past, when summaries of the updates were released with great fanfare by multiple sources, there are few summaries of the new rule that provide enough content to understand the details without having to read the hundreds of pages in the government publication. Two good sources I have found are:
- AMIA - https://www.amia.org/sites/default/files/FY19-IPPS-Final-Rule-Detailed-Summary.pdf
- Healthcelerate - https://www.healthcelerate.com/s/Healthcelerate-Promoting-Interoperability-Guide_August-2018.pdf
The new CMS rule now applies a similar approach to eligible hospitals. The new rule groups Promoting Interoperability into four overall objectives, each of which has one or more measures and a maximum number of points for achieving them. The new rule also streamlines some of the quality reporting measures required by the program as well as limits the reporting period to one quarter of the year.
A final change in the new rule is the requirement that systems use the 2015 Edition Certified EHR Technology (CEHRT) criteria to be eligible for the program. One key requirement of the 2015 CEHRT edition is the implementation of an application programming interface (API) that can (with appropriate authentication and security) access data directly in the EHR. Most vendors are implementing this capability using the emerging Fast Healthcare Interoperability Resources (FHIR) standard. Probably the best-known (but certainly not the only) application of this is the Apple Health app that allows patients to download the so-called Argonaut data set of 21 data elements.
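For readers who have not seen what such an API looks like in practice, here is a minimal sketch of retrieving a patient's laboratory observations from a FHIR server. The base URL and patient ID are placeholders, and a real application would also negotiate SMART on FHIR (OAuth2) authorization rather than making an unauthenticated call:

```python
# Minimal sketch of reading EHR data through a FHIR API; the endpoint and
# patient ID are placeholders, and real use requires proper authorization.
import requests

FHIR_BASE = "https://example-ehr.org/fhir"        # hypothetical FHIR endpoint
PATIENT_ID = "12345"                              # hypothetical patient identifier
headers = {"Accept": "application/fhir+json"}     # plus an Authorization header in practice

# Search for laboratory Observation resources belonging to the patient
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "laboratory", "_count": 10},
    headers=headers,
)
resp.raise_for_status()
bundle = resp.json()                              # a FHIR Bundle resource

for entry in bundle.get("entry", []):
    obs = entry["resource"]
    code = obs["code"]["coding"][0].get("display", "unknown test")
    value = obs.get("valueQuantity", {})
    print(code, value.get("value"), value.get("unit"))
```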
The new Promoting Interoperability measures include:
1. e-Prescribing (1 required, 2 optional measures in 2019 that will be required in 2020)
- e-Prescribing
- Query of Prescription Drug Monitoring Program (PDMP)
- Verify Opioid Treatment Agreement
- Support Electronic Referral Loops by Sending Health Information
- Support Electronic Referral Loops by Receiving and Incorporating Health Information
- Provide Patients Electronic Access to Their Health Information
- Syndromic Surveillance Reporting
- Immunization Registry Reporting
- Electronic Case Reporting
- Public Health Registry Reporting
- Clinical Data Registry Reporting
- Electronic Reportable Laboratory Result Reporting
Further details are available from CMS: https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/
It is hard not to wax somewhat nostalgic about these changes, especially in this blog that started around the time of the introduction of the Health Information Technology for Economic and Clinical Health (HITECH) Act, which seems like eons ago. Although the goal was never just to put computers into hospitals and clinicians’ offices, that in itself is an accomplishment, and it hopefully lays the foundation for improving healthcare and leveraging data going forward.
Wednesday, September 12, 2018
Artificial Intelligence in Medicine: 21st Century Resurgence
I first entered the informatics field in the late 1980s, at the tail end of the first era of artificial intelligence (AI) in medicine. Initial systems focused on making medical diagnoses using symbolic processing, which was appropriate for a time of relatively little digital data, both for individual patients and for healthcare as a whole, and underpowered hardware. Systems like MYCIN [2], INTERNIST-1/QMR [1], and DXplain [3] provided relatively accurate diagnostic performance, but were slow and difficult to use. They also provided a single likely diagnosis, which was not really what clinicians needed. Because of these shortcomings, they never achieved significant real-world adoption, and their "Greek Oracle" style of approach was abandoned [4]. There was also some early enthusiasm for neural networks around that time [5], although in retrospect those systems were hampered by lack of data and computing power.
Into the 1990s, informatics moved on to other areas, such as information retrieval (search) from the newly evolving World Wide Web and more focused (rule-based) decision support. At the start of the new century, I started to wonder whether I should still even cover those early AI systems in my well-known introductory informatics course. I kept them included, mainly out of a sense of historical perspective, since those systems were a major focus of work in the field in its early days. However, the term "AI" almost seemed to disappear from informatics jargon.
In recent years, however, AI in medicine (and beyond) has re-emerged. Driven by much larger quantities of data (through electronic health records, curated data sets - mainly images, and personal tracking devices) and much more powerful hardware (mainly networked clusters of low-cost computers and hard disks as well as mobile devices), there has been a resurgence of AI, although with a somewhat different focus from the original era. There has also been a maturing of machine learning techniques, most prominently neural networks applied in complex formats known as deep learning [6, 7].
The greatest success for deep learning so far has come in image processing. The well-known researcher and author Dr. Eric Topol keeps an ever-growing list of systems for diagnosis and their comparison with humans (to which I have contributed a few, and to which I add studies that have so far been published only as preprints on arXiv.org):
- Radiology - diagnosis comparable to radiologists for pneumonia [8], tuberculosis [9], and intracranial hemorrhage [10]
- Dermatology - detecting skin cancer from images [11-13]
- Ophthalmology - detecting diabetic retinopathy from fundal images [14-15], predicting cardiovascular risk factors from retinal fundus photographs [16]; diagnosis of congenital cataract [17], age-related macular degeneration [18], and plus disease [19]; and diagnosis of retinal diseases [20] and macular diseases [21]
- Pathology - classifying various forms of cancer from histopathology images [22-25], detecting lymph node metastases [26]
- Cardiology - cardiac arrhythmia detection comparable to cardiologists [27] and classification of views in echocardiography [28]
- Gastroenterology - endocytoscope images for a diagnose-and-leave strategy for diminutive, nonneoplastic, rectosigmoid polyps [29]
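Most of the imaging studies above follow a broadly similar recipe: take a convolutional neural network pretrained on general-purpose images and fine-tune it on labeled medical images. The sketch below is my own minimal illustration of that recipe, not a reproduction of any of the cited systems; the image directories, class labels, and hyperparameters are hypothetical:

```python
# Minimal transfer-learning sketch for a binary image classifier (e.g., labeling
# chest x-rays as "abnormal" vs. "normal"); the image directories are hypothetical.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Load labeled images from folders named by class (e.g., chest_xrays/train/abnormal)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Start from a network pretrained on ImageNet and freeze its weights
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),             # scale pixel values to [0, 1]
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of "abnormal"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

The published systems differ in architecture, preprocessing, and, above all, in the size and curation of their training data, but the overall pattern is similar.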
The success of these systems and the technology underlying them is exciting, but I would also tell any thoughtful radiologist (or pathologist, dermatologist, or ophthalmologist) not to fear for his or her livelihood. Yes, these tools will change practice, maybe sooner than we realize. However, I always imagine that the high-tech medicine of the future will look like how it is used by the doctors of Star Trek. Yes, those physicians have immense technology at their disposal, not only for diagnosis but also for treatment. But those tools do not remove the human element of caring for people. Explaining to patients their disease process, describing the prognosis as we know it, and shared decision-making among the diagnostic and treatment options are all important in applying advanced technology in medicine.
I also recognize we have a ways to go before this technology truly changes medicine. For several years running, I have expressed my intellectual excitement about predictive data science while also noting that prediction is not enough: we must demonstrate that what is predicted can actually be applied to improve the delivery of care and patient health.
This notion is best elaborated by some discussion of another deep learning paper focused on a non-image domain, namely the prediction of in-hospital mortality, 30-day unplanned readmission, prolonged length of stay, and the entirety of a patient’s final diagnoses [33]. The paper demonstrates the value of deep learning, the application of Fast Healthcare Interoperability Resources (FHIR) for data points, and efforts for the neural network to explain itself along its processing path. I do not doubt the veracity of what the authors have accomplished. Clearly, deep learning techniques will play a significant role as described above. These methods scale with large quantities of data and will likely improve over time with even better algorithms and better data.
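For readers curious what this kind of model looks like in code, below is a toy sketch of the general approach: a small feed-forward network predicting in-hospital mortality from a handful of hand-picked features. It is emphatically not the cited paper's method, which modeled the entire longitudinal FHIR-formatted record, and the data file and feature names are hypothetical:

```python
# Toy sketch of predicting in-hospital mortality from a few EHR-derived features;
# not the cited paper's architecture, which used the whole FHIR-formatted record.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split

df = pd.read_csv("admission_features.csv")            # hypothetical de-identified extract
features = ["age", "heart_rate", "systolic_bp", "creatinine", "num_prior_admissions"]
X = df[features].values.astype("float32")
y = df["died_in_hospital"].values.astype("float32")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(len(features),)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # predicted probability of death
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X_train, y_train, epochs=20, batch_size=64, validation_split=0.1)
print(model.evaluate(X_test, y_test))                  # [loss, AUC] on held-out admissions
```

A model like this may discriminate well on a held-out test set, but that is exactly where my concerns below begin: good discrimination alone says nothing about whether acting on the prediction improves care.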
But taking off my computer science hat and replacing it with my informatics one, I have a couple of concerns. My first and major concern is whether this prediction can be turned into information that can improve patient outcomes. Just because we can predict mortality or prolonged length of stay, does that mean we can do anything about it? Second, while there is value to predicting across the entire population of patients, it would be interesting to focus in on patients we know are more likely to need closer attention. Can we focus in and intervene for those patients who matter?
Dr. Topol recently co-authored an accompanying editorial describing a study that adheres to the kind of methods that are truly needed to evaluate modern AI in clinical settings [34]. The study itself is to be commended; it actually tests an application of an AI system for detection of diabetic retinopathy in primary care settings [35]. The system worked effectively, though it was not flawless, and other issues common to real-world medicine emerged, such as some patients being non-imageable and others having different eye diseases. Nonetheless, I agree with Dr. Topol that this study sets the bar for how AI needs to be evaluated before its widespread adoption in routine clinical practice.
All of this AI in medicine research is impressive. But its advocates will need to continue the perhaps more mundane research of how we make this data actionable and actually act on it in ways that improve patient outcomes. I personally find that kind of research more interesting and exciting anyway.
References
1. Miller, RA (2010). A history of the INTERNIST-1 and Quick Medical Reference (QMR) computer-assisted diagnosis projects, with lessons learned. Yearbook of Medical Informatics. Stuttgart, Germany: 121-136.
2. Shortliffe, EH, Davis, R, et al. (1975). Computer-based consultations in clinical therapeutics: explanation and rule acquisition capabilities of the MYCIN system. Computers and Biomedical Research. 8: 303-320.
3. Barnett, GO, Cimino, JJ, et al. (1987). DXplain: an evolving diagnostic decision-support system. Journal of the American Medical Association. 258: 67-74.
4. Miller, RA and Masarie, FE (1990). The demise of the "Greek Oracle" model for medical diagnostic systems. Methods of Information in Medicine. 29: 1-2.
5. Rumelhart, DE and McClelland, JL (1986). Parallel Distributed Processing: Foundations. Cambridge, MA, MIT Press.
6. Alpaydin, E (2016). Machine Learning: The New AI. Cambridge, MA, MIT Press.
7. Kelleher, JD and Tierney, B (2018). Data Science. Cambridge, MA, MIT Press.
8. Rajpurkar, P, Irvin, J, et al. (2017). CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv.org: arXiv:1711.05225. https://arxiv.org/abs/1711.05225.
9. Lakhani, P and Sundaram, B (2017). Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 284: 574-582.
10. Arbabshirani, MR, Fornwalt, BK, et al. (2018). Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. npj Digital Medicine. 1: 9. https://www.nature.com/articles/s41746-017-0015-z.
11. Esteva, A, Kuprel, B, et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature. 542: 115-118.
12. Haenssle, HA, Fink, C, et al. (2018). Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Annals of Oncology. 29: 1836-1842.
13. Han, SS, Kim, MS, et al. (2018). Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. Journal of Investigative Dermatology. 138: 1529-1538.
14. Gulshan, V, Peng, L, et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Journal of the American Medical Association. 316: 2402-2410.
15. Ting, DSW, Cheung, CYL, et al. (2017). Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. Journal of the American Medical Association. 318: 2211-2223.
16. Poplin, R, Varadarajan, AV, et al. (2017). Predicting cardiovascular risk factors from retinal fundus photographs using deep learning. arXiv.org: arXiv:1708.09843. https://arxiv.org/abs/1708.09843.
17. Long, E, Lin, H, et al. (2017). An artificial intelligence platform for the multihospital collaborative management of congenital cataracts. Nature Biomedical Engineering. 1: 0024. https://www.nature.com/articles/s41551-016-0024.
18. Burlina, PM, Joshi, N, et al. (2017). Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmology. 135: 1170-1176.
19. Brown, JM, Campbell, JP, et al. (2018). Automated diagnosis of plus disease in retinopathy of prematurity using deep convolutional neural networks. JAMA Ophthalmology. 136: 803-810.
20. DeFauw, J, Ledsam, JR, et al. (2018). Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine: Epub ahead of print. https://www.nature.com/articles/s41591-018-0107-6.
21. Kermany, DS, Goldbaum, M, et al. (2018). Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 172: 1122-1131.E1129.
22. Bejnordi, BE, Zuidhof, G, et al. (2017). Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images. Journal of Medical Imaging. 4(4): 044504. https://www.spiedigitallibrary.org/journals/journal-of-medical-imaging/volume-4/issue-04/044504/Context-aware-stacked-convolutional-neural-networks-for-classification-of-breast/10.1117/1.JMI.4.4.044504.full?SSO=1.
23. Liu, Y, Gadepalli, K, et al. (2017). Detecting cancer metastases on gigapixel pathology images. arXiv.org: arXiv:1703.02442. https://arxiv.org/abs/1703.02442.
24. Yu, KH, Zhang, C, et al. (2017). Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nature Communications. 7: 12474. https://www.nature.com/articles/ncomms12474.
25. Capper, D, Jones, DTW, et al. (2018). DNA methylation-based classification of central nervous system tumours. Nature. 555: 469–474.
26. Bejnordi, BE, Veta, M, et al. (2017). Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. Journal of the American Medical Association. 318: 2199-2210.
27. Rajpurkar, P, Hannun, AY, et al. (2017). Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv.org: arXiv:1707.01836. https://arxiv.org/abs/1707.01836.
28. Madani, A, Arnaout, R, et al. (2018). Fast and accurate view classification of echocardiograms using deep learning. npj Digital Medicine. 1: 6. https://www.nature.com/articles/s41746-017-0013-1.
29. Mori, Y, Kudo, SE, et al. (2018). Real-time use of artificial intelligence in identification of diminutive polyps during colonoscopy: a prospective study. Annals of Internal Medicine: Epub ahead of print.
30. Hinton, G (2018). Deep learning—a technology with the potential to transform health care. Journal of the American Medical Association: Epub ahead of print.
31. Naylor, CD (2018). On the prospects for a (deep) learning health care system. Journal of the American Medical Association: Epub ahead of print.
32. Stead, WW (2018). Clinical implications and challenges of artificial intelligence and deep learning. Journal of the American Medical Association: Epub ahead of print.
33. Rajkomar, A, Oren, E, et al. (2018). Scalable and accurate deep learning for electronic health records. npj Digital Medicine. 1: 18. https://www.nature.com/articles/s41746-018-0029-1.
34. Keane, PA and Topol, EJ (2018). With an eye to AI and autonomous diagnosis. npj Digital Medicine. 1: 40. https://www.nature.com/articles/s41746-018-0048-y.
35. Abràmoff, MD, Lavin, PT, et al. (2018). Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. npj Digital Medicine. 1: 39. https://www.nature.com/articles/s41746-018-0040-6.
Monday, July 30, 2018
Healthcare Information Technology Workforce: Updated Analysis Shows Continued Growth and Opportunity
A new analysis of the healthcare information technology (IT) workforce indicates that as hospitals and health systems continue to adopt electronic health records (EHRs) and other forms of IT, an additional 19,852 to 153,114 full-time equivalent (FTE) personnel may be required [1]. The new study was published by me and my colleagues Keith Boone and Annette Totten in the new journal, JAMIA Open. It updates an original analysis [2] from before the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act, which has led to substantial growth in the adoption of EHRs [3, 4] and thus to expansion of the healthcare IT workforce.
The data used in the analysis actually focus only on hospitals and health systems, so informatics/IT workforce growth will also likely occur in other health-related areas. The results remind us that there remain and will likely be growing opportunities for those who train and work in biomedical and health informatics.
The new paper represents an update of a research interest of mine that emerged over a decade ago. As my activities in informatics education were growing at that time, I became interested in the characteristics of the healthcare IT workforce and its professional development. This led me to search for studies of that workforce, which essentially came up empty. The single resource I was able to find that provided some data about healthcare IT staffing was the HIMSS Analytics Database, but no one had ever analyzed it. The HIMSS Analytics Database mostly focuses on the IT systems that hospitals and health systems implement but also contains some data on IT staffing FTE. The result of the analysis was a paper that garnered a great deal of attention when it was published in 2008 [2], including an invitation to present the results in Washington, DC to the Capitol Hill Steering Committee on Telehealth and Healthcare Informatics.
Based on 2007 data, our initial paper looked at FTE staffing, especially as it related to level of adoption, using the well-known HIMSS Analytics Electronic Medical Record Adoption Model (EMRAM), a 0-7 scale that measures milestones of EHR adoption. This was, of course, before the HITECH Act, when far fewer hospitals and health systems had adopted EHRs. Around that time, the first systematic review of evidence supporting the benefit of healthcare IT had been published, showing that the value came mainly from use of clinical decision support (CDS) and computerized provider order entry (CPOE) [5]. As such, we looked at the level of healthcare IT staffing by EMRAM stage, with a particular focus on what increase might be required to achieve the level of IT use associated with those evidence-based benefits. We assessed the ratio of IT FTE staff to hospital beds by EMRAM stage.
Because the self-reported FTE staffing data in the database were incomplete, we had to extrapolate from the data present to the entire country (recognizing a potential bias between those who responded and those who did not). We also noted other limitations: the data represented only hospitals and health systems, not the entire healthcare system, nor the use of IT outside of the healthcare system. Our analysis estimated the national health IT workforce in 2007 at 108,390. But the real sound bite from the study was that if EHR adoption were to increase to the level supported by the evidence, namely EMRAM Stage 4 (use of CDS and CPOE), and FTE/bed ratios remained the same for those hospitals, the workforce would need to grow to 149,174. In other words, the healthcare IT workforce would need to grow by 40,784 people.
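For illustration, here is a minimal sketch of that extrapolation logic: multiply an IT-FTE-per-bed ratio by the number of beds at each EMRAM stage to estimate the current workforce, then re-estimate it assuming every hospital reaches a target stage. The ratios and bed counts below are made-up placeholders, not the figures reported in the papers; pointing the same function at Stage 6 or 7 mirrors the projections discussed later in this post.

```python
# Minimal sketch of the workforce extrapolation: estimate total health IT FTE
# from IT-FTE-per-bed ratios by EMRAM stage, then re-estimate assuming all
# hospitals reach a target stage. All numbers are illustrative placeholders.

fte_per_bed = {0: 0.02, 1: 0.03, 2: 0.04, 3: 0.05, 4: 0.08, 5: 0.10, 6: 0.14, 7: 0.20}
beds_by_stage = {0: 90_000, 1: 60_000, 2: 150_000, 3: 200_000,
                 4: 120_000, 5: 80_000, 6: 60_000, 7: 20_000}

def workforce(target_stage=None):
    """Total FTE; hospitals below target_stage are assumed to adopt its ratio."""
    total = 0.0
    for stage, beds in beds_by_stage.items():
        effective = max(stage, target_stage) if target_stage is not None else stage
        total += fte_per_bed[effective] * beds
    return total

current = workforce()
at_stage4 = workforce(target_stage=4)
print(f"Estimated current workforce: {current:,.0f} FTE")
print(f"If all hospitals reach Stage 4: {at_stage4:,.0f} FTE (+{at_stage4 - current:,.0f})")
```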
Within a year of the study’s publication, the US economy was entering the Great Recession, and the new Obama Administration had taken office. The recession led to Congress passing the HITECH Act (as part of the American Recovery and Reinvestment Act), which allocated about $30 billion in economic stimulus funding to EHR adoption. Recognizing that a larger and better-trained workforce would be necessary to facilitate this EHR adoption, the HITECH Act included $118 million for workforce development. The rationale for this included the data from our study showing the need for expanding the workforce, especially as the meaningful use of EHRs required of HITECH would necessitate the use of CDS and CPOE.
Since that time, EHR adoption has grown substantially, to 96% of hospitals [3] and 87% of office-based physicians and other clinicians [4]. A few years ago, I started to wonder how the widespread adoption impacted the workforce, especially at the higher stages of EMRAM, which very few hospitals had achieved in 2007. By 2014, one-quarter of US hospitals had reached Stages 6 and 7.
The new study reports some interesting findings. First, the FTE/bed ratios in 2014 for the different EMRAM levels are remarkably similar to those in 2007 (with the exception of Stage 7, which no hospitals had reached in 2007). However, because hospitals have advanced to EMRAM stages beyond Stage 4, the total workforce ended up being larger than we had estimated from the 2007 data. Probably most important, as more hospitals reach Stages 6 and 7, the workforce will continue to grow. Our new study estimates that if all hospitals were to achieve Stage 6, an additional 19,852 healthcare IT FTE would be needed. Our analysis also shows almost explosive growth of 153,114 more FTE if all hospitals moved to Stage 7, although we have less confidence in that result because relatively few hospitals have achieved this stage so far, and it is also unclear whether the leaders reaching Stage 7 early are representative of hospitals and health systems generally.
Nonetheless, the US healthcare industry is moving toward increased EHR adoption. At the time of our 2014 data snapshot, 22.2% and 3.7% of hospitals were at Stages 6 and 7, respectively. The latest EMRAM data from the end of 2017 show those figures have increased to 33.8% and 6.4%, respectively. In other words, the healthcare industry is moving toward higher levels of adoption that, if our findings hold, will lead to increased healthcare IT hiring.
The new paper also reiterates the caveats of the HIMSS Analytics data. It is a valuable database, but not really designed to measure the workforce or its characteristics in great detail. Another limitation is that only about a third of organizations respond to the staffing FTE questions. In addition, while the hospital setting comprises a large proportion of those who work in the healthcare industry, there are other places where IT and informatics personnel work, including for vendors, research institutions, government, and other health-related entities. As healthcare changes, these latter settings may account for an even larger fraction of the healthcare IT workforce.
Because of these limitations of the data and the changing healthcare environment, the paper calls for additional research and other actions. We note that better data, both more complete and with more detail, is critical to learn more about the workforce. We also lament the decision of the US Bureau of Labor Statistics (BLS) to not add a Standard Occupational Classification (SOC) code for health informatics, which would have added informatics to US labor statistics. Fortunately the American Medical Informatics Association (AMIA) is undertaking a practice analysis of informatics work, so additional information about the workforce will be coming by the end of this year.
It should be noted that some may view the employment growth in healthcare IT as a negative, especially due to its added cost. However, the overall size of this workforce needs to be put in perspective, as it represents just a small fraction of the estimated 12 million Americans who work in the healthcare industry. As the need for data and information to improve operations and innovation in health-related industries grows, a large and well-trained workforce will continue to be critical to achieving the triple aim of improved health, improved care, and reduced cost [6]. In addition, many career opportunities will continue to be available to those who want to join the informatics workforce.
References
1. Hersh, WR, Boone, KW, et al. (2018). Characteristics of the healthcare information technology workforce in the HITECH era: underestimated in size, still growing, and adapting to advanced uses. JAMIA Open. Epub ahead of print. https://doi.org/10.1093/jamiaopen/ooy029. (The data used in the analysis is also available for access at https://doi.org/10.5061/dryad.mv00464.)
2. Hersh, WR and Wright, A (2008). What workforce is needed to implement the health information technology agenda? An analysis from the HIMSS Analytics™ Database. AMIA Annual Symposium Proceedings, Washington, DC. American Medical Informatics Association. 303-307. https://dmice.ohsu.edu/hersh/amia-08-workforce.pdf.
3. Henry, J, Pylypchuk, Y, et al. (2016). Adoption of Electronic Health Record Systems among U.S. Non-Federal Acute Care Hospitals: 2008-2015. Washington, DC, Department of Health and Human Services. http://dashboard.healthit.gov/evaluations/data-briefs/non-federal-acute-care-hospital-ehr-adoption-2008-2015.php.
4. Office of the National Coordinator for Health Information Technology. 'Office-based Physician Electronic Health Record Adoption,' Health IT Quick-Stat #50. http://dashboard.healthit.gov/quickstats/pages/physician-ehr-adoption-trends.php.
5. Chaudhry, B, Wang, J, et al. (2006). Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine. 144: 742-752.
6. Berwick, DM, Nolan, TW, et al. (2008). The triple aim: care, health, and cost. Health Affairs. 27: 759-769.
Saturday, June 23, 2018
Predatory Journals and Conferences Preying on a New Researcher
I had an interesting interaction between two parts of my life recently. One part emanates from my work as an informatics and information retrieval researcher, with interests that include the potential for the Internet to increasingly facilitate "open science," in which the methods, results, data, and generated knowledge are all more widely transparent and disseminated. An important part of open science is open-access publishing, an approach that shifts the business model to one where the researcher pays (usually through the grants that support their work) for the cost of publishing, and the resulting paper is made freely available on the Internet.
Unfortunately, there is a downside to open-access publishing: the proliferation of so-called predatory journals and conferences. These publications and events typically claim to be prestigious and to offer peer review, but in reality they exist mainly to make money by trading on researchers’ need to publish and present their work [1, 2]. Such venues have little if any peer review, as exemplified by cases of scientists submitting obviously bogus research that is nonetheless accepted for publication [3]. One Web site maintains a list of such journals. Another covers the topic exhaustively in the context of scientific fraud and misconduct.
This does not mean that all open-access journals have poor peer review (cases in point are the Public Library of Science - PLoS and BioMed Central - BMC journals). Nor does it mean that poor science never makes it through peer review in journals from traditional publishers. However, additional vigilance is probably required when it comes to open-access journals. One cut point I advise for students and others in the biomedical sciences whom I mentor is whether a journal is included in the MEDLINE database, which requires journals to meet a threshold of peer review and other attributes to be listed (PLoS and BMC journals are in MEDLINE).
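As a rough illustration of that MEDLINE cut point, the sketch below asks the NLM Catalog, via the NCBI E-utilities esearch endpoint, how many records matching a journal title are currently indexed for MEDLINE. The 'currentlyindexed[All]' filter reflects my reading of the NLM Catalog documentation and, like the second (obviously fictional) journal title, should be treated as an assumption to verify rather than a definitive recipe.

```python
# Rough sketch: count NLM Catalog records for a journal title that are
# currently indexed for MEDLINE, using the NCBI E-utilities esearch endpoint.
# The 'currentlyindexed[All]' filter is an assumption to verify against the
# NLM Catalog documentation before relying on this check.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def medline_indexed_count(journal_title: str) -> int:
    params = {
        "db": "nlmcatalog",
        "term": f'"{journal_title}"[Title] AND currentlyindexed[All]',
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

for title in ["PLOS ONE", "Global Journal of Everything"]:  # second title is fictional
    print(title, "->", medline_indexed_count(title), "currently indexed record(s)")
```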
The part of my life that this story interacted with is the early research career of my daughter, an MD/MPH student at Oregon Health & Science University (OHSU), who recently had her first journal paper publication, on the heels of a number of poster abstract presentations [4]. Yes, I am a proud father!
But no sooner had her journal paper been accepted and the ahead-of-print version posted than she started receiving the kinds of emails most of us in research receive on a daily basis, inviting submissions to predatory journals and presentations at similar conferences. She was excited to be invited to an international conference; I had to disappoint her by noting the nature of such conferences (and that she would need to pay her own way).
This episode shone a new light for me on the daily stream of nuisance emails from predatory journals and conferences that I receive. (One characteristic of these emails that helps me identify them as predatory is that they do not offer, as required under the US CAN-SPAM Act, an option to unsubscribe.) It annoys me that young researchers get exposed to this sort of thing at a more impressionable stage of their careers.
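Here is an illustrative-only sketch of that heuristic, assuming messages are available as raw text: flag solicitations that carry neither a List-Unsubscribe header nor any mention of unsubscribing in the body. A real filter would obviously need many more signals than this.

```python
# Illustrative heuristic only: does an email offer any visible way to
# unsubscribe (List-Unsubscribe header or "unsubscribe" in a text part)?
from email import message_from_string

def offers_unsubscribe(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    if msg.get("List-Unsubscribe"):
        return True
    text = []
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            text.append(part.get_payload(decode=True) or b"")
    return "unsubscribe" in b"\n".join(text).decode("utf-8", errors="ignore").lower()

sample = "From: editor@example.org\nSubject: Esteemed colleague, submit now!\n\nGreetings!!"
print(offers_unsubscribe(sample))  # False for this (hypothetical) solicitation
```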
On a related note, now that she is published, my daughter’s next milestone will be her first citation, which will of course give her an h-index of 1. I had a tongue-in-cheek discussion with some of my geeky research colleagues as to whether citing her paper myself would be a form of scientific nepotism (a teachable moment about the h-index and citations!). However, I am certain she will receive many legitimate citations as her career develops, so I will let her career grow without any intervention on my part, other than being a supportive parent.
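For readers new to the metric, here is a minimal sketch of how an h-index is computed from per-paper citation counts; as noted above, a single cited paper yields h = 1.

```python
# h-index: the largest h such that the author has h papers with >= h citations.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([1]))               # one paper, one citation -> 1
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```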
References
1. Moher, D and Moher, E (2016). Stop predatory publishers now: act collaboratively. Annals of Internal Medicine. 164: 616-617.
2. Beall, J (2018). Predatory journals exploit structural weaknesses in scholarly publishing. 4Open. 1.
3. McCool, JH (2017). Opinion: Why I Published in a Predatory Journal. The Scientist.
4. Hersh, AR, Muñoz, LF, et al. (2018). Video compared to conversational contraceptive counseling during labor and maternity hospitalization in Colombia: a randomized trial. Contraception. Epub ahead of print.
Sunday, June 10, 2018
The EHR Strikes Back!
The last few years have been challenging for the electronic health record (EHR). While the Health Information Technology for Economic and Clinical Health (HITECH) Act succeeded in transitioning the US healthcare system mostly away from paper [1], the resulting electronic systems created a number of new problems [2]. They include diverting attention from patient care, adding to clinician time burdens, and causing outright burnout. Although the underlying problems of quality, safety, and cost of healthcare motivating the use of EHRs still exist, the large-scale adoption of EHRs has yet to solve them in any meaningful way.
I cannot imagine that many would advocate actually returning to paper medical records and fax-based communications. But clearly the new problems introduced by EHRs must be addressed while not losing sight of the original motivations for them. Fortunately, a more nuanced view of the EHR is emerging, and based on some recent happenings I will describe next, it may be said that the EHR is striking back.
The first strike back was an "Ideas and Opinions" piece in the medical journal Annals of Internal Medicine. Presenting data on note length gathered from use of the Epic EHR in different countries, Downing et al. found that notes in the US were substantially longer than those in other countries [3]. The authors contend that this is due to the priority given in the US to using the EHR for billing and other aspects not directly related to clinical care. They suggest that these uses beyond the direct clinical encounter, and not the EHR itself, are the cause of physician dissatisfaction and burnout.
A second strike back was the release of a Harris poll at a Stanford symposium devoted to re-imagining the EHR and making it more useful for physicians. The poll of over 500 primary-care physicians (PCPs) showed that these physicians saw value in the EHR but also desired substantial improvements.
About two-thirds of these physicians agreed with the statement that EHRs have led to improvement in care (63%) and were somewhat or more satisfied with their current systems (66%). But significant numbers of these PCPs also acknowledged problems:
- 40% said there are more challenges than benefits with the EHR
- 49% believed that using an EHR detracted from their clinical effectiveness
- 71% stated that EHRs greatly contribute to physician burnout
- 59% agreed that EHRs need a complete overhaul
Their survey also found substantial agreement on what should be fixed immediately versus in the longer term:
- 72% believed that improving the EHR user interfaces could best address EHR challenges in the immediate future
- 67% agreed that solving interoperability deficiencies should be the top priority for EHRs in the next decade
- 43% desired improved predictive analytics to support disease diagnosis, prevention, and population health management
The EHR has certainly taken it on the chin of late, deservedly so. But with the foundation laid by HITECH, recognition that the problems relate more to the healthcare system than to the EHR per se, and new approaches such as those from Apple and others devising methods to do interesting things with the data, we will hopefully see innovations that address problems in healthcare and enable new applications that improve personal and public health.
References
1. Washington, V, DeSalvo, K, et al. (2017). The HITECH era and the path forward. New England Journal of Medicine. 377: 904-906.
2. Halamka, JD and Tripathi, M (2017). The HITECH era in retrospect. New England Journal of Medicine. 377: 907-909.
3. Downing, NL, Bates, DW, et al. (2018). Physician burnout in the electronic health record era: are we ignoring the real cause? Annals of Internal Medicine. Epub ahead of print.
Thursday, June 7, 2018
New Edition of Textbook, Health Informatics: Practical Guide
I am pleased to announce that I am Co-Editor of the newly published Health Informatics: Practical Guide, Seventh Edition. The original editor, Robert Hoyt, MD, asked me to come on as Co-Editor for this edition, and I will assume sole editorship starting with the Eighth Edition. Although Bob and his wife Ann Yoshihashi deserve credit for the lion’s share of the painstaking details that books like this require, I am pleased to note that I was also involved in the authorship of eight of the book’s 22 chapters.
Bob and Ann have always used an interesting approach to publishing that has arisen in the Internet era, which is so-called self-publishing. They have used the site Lulu.com, which features print-on-demand as well as electronic versions. Although I mostly prefer electronic books these days, the first picture below shows the smiling Co-Editor with his first paper copy. The second picture below shows the back cover that lists the table of contents of the book.
The book is available for purchase on the Lulu.com Web site in both print and eBook PDF formats. The book will also be made available from the more “traditional” online booksellers, such as Amazon.com. Bob also maintains a Web site for the book that includes a special area for those who use the book as instructors (and can register for a free evaluation copy).
The content of the new book is well-aligned with the well-known introductory biomedical and health informatics course that I teach, which is variably called 10x10 (“ten by ten,” the standalone version) and BMI 510 (one of the initial courses in our graduate program).
The chapters I authored or co-authored include:
- 1) Hoyt, RE, Bernstam, EV, Hersh, WR, Overview of Health Informatics
- 3) Hersh, WR, Hoyt, RE, Computer and Network Architectures
- 5) Hersh, WR, Standards and Interoperability
- 6) Hoyt, RE, Hersh, WR, Health Information Exchange
- 7) Hersh, WR, Healthcare Data Analytics
- 12) Hersh, WR, Gibbons, MC, Shaikh, Y, Hoyt, RE, Consumer Health Informatics
- 14) Hoyt, RE, Hersh, WR, Evidence-Based Medicine and Clinical Practice Guidelines
- 15) Hersh, WR, Information Retrieval from Medical Knowledge Resources
I look forward to getting feedback on the book and suggestions for improvement, especially for the next edition.
Tuesday, May 15, 2018
Kudos for the Informatics Professor - 2018 Update
It has been a while since I have posted one of my periodic kudos for the Informatics Professor, so let me take the opportunity to do so for late 2017 and early 2018.
A blog posting of mine received some unexpected attention. As I always do when responding to a government Request for Information (RFI), I posted comments in my blog that I submitted to the RFI for the NIH draft Data Science plan. My main point was that while the plan was a good start, it needed to have more to achieve the optimal value for data science related to health and research. First, the blog posting was picked up by Politico (about a third of the way down the page). I was then asked by National Library of Medicine (NLM) Director Patricia Brennan to re-write it as a guest posting to the NLM Director’s Blog.
Last month, I took part in the inaugural meeting of the International Academy for Health Sciences Informatics (IAHSI), a new Academy of 121 elected members who are leaders in informatics from around the world. With about 50 others from the Academy, I took part in a day-long meeting that was co-located with Medical Informatics Europe 2018 in Gothenburg, Sweden.
I am also honored to be invited to serve on the Scientific Advisory Board (SAB) of the Pan African Bioinformatics Network for H3Africa (H3ABionet), which provides bioinformatics support for the Human Heredity and Health in Africa Project (H3Africa). I will be attending the next meeting of the SAB in Cape Town, South Africa in July. I have been asked to contribute based on my expertise in clinical informatics.
I also gave some invited international talks, including:
- IR Meets EHR: Retrieving Patient Cohorts for Clinical Research Studies - Centre International de Mathématiques et d’Informatique (CIMI), University of Toulouse, Toulouse, FR, February 9, 2018
- Caveats and Recommendations for Re-Use of Large-Scale Operational Electronic Health Record Data - Association for Medical and BioInformatics Singapore (AMBIS) and National Supercomputing Centre (NSCC) Singapore, Singapore, February 23, 2018
Finally, I have authored a chapter in a newly published book: Rydell RL and Landa HM (eds.), The CMIO Survival Guide: A Handbook for Chief Medical Information Officers and Those Who Hire Them, 2nd Edition, CRC Press, 2018. My chapter is entitled, Education and Professional Development for the CMIO. (Surprise!)
Monday, May 7, 2018
Access to Health IT and Data Science Curricular Resources
Over the last decade, I have had the fortunate opportunity to be involved in two efforts to develop widely available curricular resources in health information technology and data science. While these resources are a great foundation for others to use to develop courses and other content in this area, the fact that they were developed with federal grants whose funding has now ended means that they will no longer be updated at their source. They will fortunately continue to be freely available on Web sites, but further development, at least from the source, will not occur for now.
Some might ask, why can’t you update the materials? Updating would be feasible if the materials were just simple textual resources or slides. But these materials contain much more, including narrated lectures, transcripts of those lectures, and packaging that makes them flexible to use. And even if we did just aim to simply update the content, I know from other teaching I do that it takes time and effort, not only the time of content authors, but also of instructional designers, technical support staff, and others who create useful products and packaging.
Nonetheless, the materials themselves will continue to be available, and I will use the rest of this posting to describe what material is available, some history on its development, and where the most recent versions can be found.
The Office of the National Coordinator for Health IT (ONC) Curriculum was developed initially under funding from the Health Information Technology for Economic and Clinical Health (HITECH) Act. Recognizing that adoption and meaningful use of electronic health records (EHRs) would require training a workforce to implement them, the ONC funded a workforce development program that included not only this curriculum development, but also funding for training in both community colleges and universities. The final version of the initial curriculum, completed in 2013, was posted on the Web site of the American Medical Informatics Association (AMIA).
In 2015, the ONC found additional funding to update the curriculum and add some new content around health IT and value-based care. The funding also included the development of short training courses, such as the Healthcare Data Analytics course that we have since developed into a standalone course that offers continuing education credit for physicians and nurses. The final curriculum itself is now available for downloading from the ONC Web site.
It should be noted that while these materials are freely available to anyone, their primary audience is educators. The curriculum consists, in ONC jargon, of components, each comparable in quantity to a college-level course. In other words, the curriculum is an extensive resource that can be enhanced by those who develop and maintain courses. Self-directed learners can certainly make use of the materials, and are not discouraged from doing so, but the volume and breadth would make it challenging for them to design an appropriate course of study. An experienced educator, however, should be able to adapt the materials appropriately.
The second resource that was developed, but for which funding has ended as well, is the OHSU Big Data to Knowledge (BD2K) Open Educational Resources (OERs) Project. The development of these materials was funded under a grant from the National Institutes of Health (NIH) BD2K Program. Like the ONC curriculum, these materials are freely available for others to use and enhance. They can be viewed on the project Web site or downloaded as source materials from a GitHub repository. While they are not quite as exhaustive as the ONC components, these modules are more manageable for self-directed learners. The Web site for these materials provides a number of examples of their use, including their being mapped to the biomedical informatics competencies of the NIH Clinical And Translational Science Awards (CTSA) Program.
One limitation to both sets of these materials is that they are not able to incorporate any copyrighted material from any other source. While those of us who teach in universities that subscribe to journals and other resources are able to use portions of such content, password-protected in learning management systems, under fair use rules, putting copyrighted material out in the public domain is not allowed. This is another role of the educator or other content expert, to make appropriate use of copyrighted matter.
The components of the ONC Health IT Curriculum consist of the following:
- Introduction to Health Care and Public Health in the U.S.
- The Culture of Health Care
- Terminology in Health Care and Public Health Settings
- Introduction to Information and Computer Science
- History of Health Information Technology in the U.S.
- Health Management Information System
- Working with Health IT Systems
- Installation and Maintenance of Health IT Systems
- Networking and Health Information Exchange
- Health Care Workflow Process Improvement
- Configuring EHRs
- Quality Improvement
- Public Health IT
- Special Topics Course on Vendor-Specific Systems
- Usability and Human Factors
- Professionalism/Customer Service in the Health Environment
- Working in Teams
- Planning, Management and Leadership for Health IT
- Introduction to Project Management
- Training and Instructional Design
- Population Health
- Care Coordination and Interoperable Health IT Systems
- Value-Based Care
- Health Care Data Analytics
- Patient-Centered Care
The modules of the OHSU BD2K Open Educational Resources Project consist of the following:
- BDK01 Biomedical Big Data Science
- BDK02 Introduction To Big Data In Biology And Medicine
- BDK03 Ethical Issues In Use Of Big Data
- BDK04 Clinical Data Standards Related To Big Data
- BDK05 Basic Research Data Standards
- BDK06 Public Health And Big Data
- BDK07 Team Science
- BDK08 Secondary Use (Reuse) Of Clinical Data
- BDK09 Publication And Peer Review
- BDK10 Information Retrieval
- BDK11 Identifiers
- BDK12 Data Annotation And Curation
- BDK13 Learn FHIR
- BDK14 Ontologies 101
- BDK15 Data Metadata And Provenance
- BDK16 Semantic Data Interoperability
- BDK17 Choice Of Algorithms And Algorithm Dynamics
- BDK18 Visualization And Interpretation
- BDK19 Replication, Validation And The Spectrum Of Reproducibility
- BDK20 Regulatory Issues In Big Data For Genomics And Health
- BDK21 Hosting Data Dissemination And Data Stewardship Workshops
- BDK22 Guidelines For Reporting, Publications, And Data Sharing
Wednesday, April 11, 2018
Marching Again in the March for Science
Last year I gave some thought before deciding to participate in the Portland March for Science. It was not that I am afraid to express my political views, but rather that I had some hesitation about politicizing science. In the end, however, I felt compelled to take a stand against what I view as attacks on science by those with whose political views I also happen to disagree. I was also afraid for science last year, as the new Administration was threatening huge cuts to its funding.
I have no hesitation in deciding to participate again this year. I actually find myself less alarmed about the impact of the current political environment on science than I was a year ago. While some areas of science (e.g., climate change) are a good deal more impacted than those of us in the biomedical and informational sciences, the federal budget for science this year reflects the usual bipartisan support, at least for the latter areas. Even though I do have concerns about those who want to slash the budget of the Agency for Healthcare Research and Quality (AHRQ), the National Institutes of Health (NIH) has fared quite well. There is great value in federally funded science research: not only does it produce the basic discoveries that lead to improved health and delivery of healthcare, but it also boosts the local economies of organizations that successfully compete for grants and other funding. It even has a multiplier effect, as scientific research leads to local hiring, and those who are hired then spend money at local grocery stores, eating establishments, and other businesses.
Both last year and this year, I have been impressed that a number of Republicans, with whose policy views I generally disagree, have been outspoken on the importance of funding biomedical research. It was fascinating for me last year when Senator Roy Blunt (R-MO) said, “A cut to NIH is not a cut to Washington bureaucracy — it is a cut to life-saving treatments and cures, affecting research performed all across the country.”
I also enjoyed the camaraderie as well as the funny signs last year and presume I will this year. I appreciate the organizers’ call for the march to be pro-science and not anti-anything. I hope the turnout will be strong and positive.
Sunday, April 1, 2018
Response to NIH RFI for Input on Draft Strategic Plan for Data Science
The National Institutes of Health (NIH), the premier biomedical research organization in the US (and the world), has issued a Request for Information (RFI) that solicits input for their draft Strategic Plan for Data Science. As I did with the request for public input to the now-published Strategic Plan for the National Library of Medicine (NLM), I am posting my comments in this blog as well as submitting them through the formal collection process. I also made a similar posting with my comments on the NLM's RFI for promising directions and opportunities for next-generation data science challenges in health and biomedicine.
The draft NIH data science plan is a well-motivated and well-written overview of what NIH should be doing to ensure that the value of data science is leveraged to maximize its benefit to biomedical research and human health. The goals of connecting all NIH and other relevant data, modernizing the ecosystem, developing tools and the workforce skills to use it, and making it sustainable are all important and articulated well in the draft plan.
However, there are three additional aspects that are critical to achieving the value of data science in biomedical research that are inadequately addressed in the draft. The first of these is the establishment of a research agenda around data science itself. We still do not understand all the best practices and other nuances around the optimal use of data science in biomedical research and human health. There are questions of how we best standardize data for use and re-use. What are the standards needed for best use of data? What are the gaps in current standards, and how can closing them improve the use of data in biomedical research, especially data not originally collected for research purposes, such as clinical data from the electronic health record and patient data from wearables, sensors, or direct entry?
There also must be further research into the human factors around data use. How do we best organize workflows for optimal input, extraction, and utilization of data? What are the best human-computer interfaces for such work? How do we balance personal privacy and security versus the public good of learning from such data? What are the ethical issues that must be addressed?
The second inadequately addressed aspect concerns the workforce for data science. While the draft properly notes the critical need to train specialists in data science, there is no explicit mention of the discipline that was at the forefront of “data science” before the term came into widespread use. This is the field of biomedical informatics, whose education and training programs have been preparing a wide spectrum of those who work in data science, from the specialists who carry out the direct work to the applied professionals who work with researchers, the public, and others to implement the work of the specialists. NIH should acknowledge and leverage the wide spectrum of the workforce that will analyze and apply the results of data science work. The large number of biomedical (and related flavors of) informatics programs should expand their established role in translating data science from research to practice.
The final underspecified aspect concerns the organizational home for data science within NIH. The most logical home would be the National Library of Medicine (NLM), which is the new home of the Big Data to Knowledge (BD2K) program that was launched by NIH several years ago. The newly released NLM strategic plan is a logical complement to this plan. (Ideally, the NLM would be more appropriately named the National Institute for Biomedical Informatics and Data Science (NIBIDS), with the library being one of its critical functions.)
With these concerns addressed, the NIH data science plan can make an important contribution to realizing the potential for data science in improving human health as well as preventing and treating disease.
Monday, March 19, 2018
Physician Training in Clinical Informatics: One Size Does Not Fit All
Readers of this blog know that although I believe that formal recognition of physicians through board certification is great for our field and those who work in it, its implementation as a subspecialty and the requirement of formal ACGME-accredited fellowships as the only pathway to certification are detriments.
Two recent events bear this out. One is the increasing number of OHSU medical students who seek informatics training during medical school, such as through a combined MD/MS program similar to the joint MD/MPH degree that many medical schools offer. The other is the publication of a supplement on the value of competency-based, time-variable education in the premier journal of medical education, Academic Medicine.
In essence, is a two-year, on-the-ground fellowship the only way to prepare physicians for practice in clinical informatics? As one who has been involved in training physicians for careers in informatics by diverse pathways, I take exception. Now that we are halfway through the third year of our ACGME-accredited fellowship at OHSU, I certainly believe it is probably the gold standard for clinical informatics training. Yet it is not clear to me that it is the only way, especially for the substantial number of physicians who come to informatics long after completing their primary medical training and who complete one of our graduate degree programs. Or even those who obtain such education during their primary training, such as the students in our MD/PhD program or those who may choose to pursue an MD/MS pathway.
Some question whether I am opposed to rigor in informatics training. Indeed I am not, but I believe there are many approaches to rigor in informatics training. A two-year, time-based fellowship is not the only path to rigorous training in the field.
My preference would be for there to be many pathways to formal clinical informatics training, all with appropriate rigor. All of them should include both substantial coursework to gain the requisite knowledge of the field, and the appropriate hands-on in-the-trenches training to experience the “real world.” Medical training is increasingly abandoning the “time on the ground” model; should not informatics too? I can easily envision a multifaceted path to informatics training where there is an appropriate amount of knowledge-based education (e.g., master’s degree in medical school or mid-career) followed by an appropriate amount of project work (either within the master’s or external to it).
Thursday, February 22, 2018
Next Frontier for Informatics Education: College Undergraduates
In the upcoming spring academic quarter (April-June, 2018), some faculty and I from our Department of Medical Informatics & Clinical Epidemiology (DMICE) will be pursuing a new frontier of informatics teaching, launching an introductory health informatics course for college undergraduates. The course will be offered in the new joint Oregon Health & Science University (OHSU)-Portland State University (PSU) School of Public Health (SPH). The new school merged previous academic units in public health from OHSU and health studies programs at PSU.
Our goals for the course are to introduce informatics skills and knowledge to undergraduate health-related majors as well as raise awareness about careers and graduate study in biomedical and health informatics.
As noted in the course syllabus, the learning objectives for the course include:
- Introduce students to problems and challenges that health informatics addresses
- Introduce students to the research and practice of health informatics
- Provide all students with basic skills and knowledge in health informatics to apply in their future health-related careers
- Lead students in discussion around ethical and diversity issues in health informatics
- Provide additional direction to those interested in further (i.e., graduate) study in the field
The topics to be covered in the course include:
- Overview of Field and Problems That Motivate It
- Health Data, Information, and Knowledge
- Electronic Health Records
- Personal Health Records and Decision Aids
- Information Retrieval (Search)
- Bioinformatics
- Informatics Applications in Public Health
- Data Science, Analytics, and Visualization
- Ethical Issues in Health Informatics
- Careers in Health Informatics
Wednesday, February 7, 2018
The Three Parts of My Job: What I Love, Like, and Dislike
I am very thankful in life to have a career that is both enjoyable and rewarding. Years ago, a head hunter recruiting me for a different position asked what my ideal job would be. I paused for only a second or two, and then stated that my current job was my ideal job. It still is. I do not necessarily enjoy every minute of every day, but as I often tell people, I enjoy going to work most days, which is a pretty good indicator of how much one likes their job.
At other times, I tell people that I can break the activities of my job into three categories, which are (a) activities I enjoy and find deeply satisfying, (b) activities that I like that also enable things in the first category, and (c) things I truly dislike.
Most of the parts of my job I truly enjoy involve either my intellectual work in the biomedical and health informatics field or interactions with students and colleagues. Certainly many of the things I love revolve around teaching. I believe I am particularly skilled at taking the complexity of the informatics field; distilling out the big picture, including why it is important; and conveying it through writing, speaking, and other venues. I also enjoy teaching because it requires me to keep up to date with the field. I enjoy constantly learning myself, especially as new areas of the field emerge.
I also enjoy my interactions with people, especially students. I sometimes half-joke that my interactions with learners provide me a kind of satisfaction similar to what I have missed since I gave up practicing medicine a decade and a half ago. One really nice aspect of mentoring learners is that they come with all ages and levels of experience. I am no longer very young, but some of the people I teach are older than me. I also enjoy mentoring others, including those who have completed their education and are advancing in the field. This especially includes young informatics faculty, both at my university and at others.
Another enjoyable aspect of my job is disseminating knowledge in diverse ways. I have found the Internet to be a platform, and educational technology a vehicle, for sharing my knowledge. As noted in a previous post, I also enjoy the opportunity to travel around the world and see informatics play out in other cultures and economies.
The second category of my work consists of activities that I like, or at least do not find onerous. Many of these activities enable my being able to do those in the first category. These include many of my administrative duties as Chair of my department. Fortunately my leadership role in my department is nowhere near a full-time job, which means that I am still able to spend plenty of time on the activities in the first category above.
Finally, there are some aspects of my job that I dislike. Most of these revolve around less-than-pleasant interactions with people with whom I work. One thing I particularly do not enjoy is managing conflicts among those who report to me. I also do not enjoy managing those who do not meet reasonable expectations for their work. And of course there is no fun when budgetary problems arise.
I sometimes think back to a conversation I had a couple of years ago with the now-retired President of our university, who was previously the Dean of the School of Medicine. He lamented that one downside to reaching his level was that he no longer got to work in his field (ophthalmology). This really struck me, and made me realize that informatics is what makes my work life interesting, and I could never see completely giving up the intellectual side of the field.