Tuesday, June 6, 2023

Reports of My Retirement Are Greatly Exaggerated

As most of my world knows, I stepped down as Chair of the Department of Medical Informatics & Clinical Epidemiology (DMICE) at Oregon Health & Science University (OHSU) at the end of last year. In addition to announcements made by OHSU, I let my world know with postings in this blog, on social media, and via some email lists.

I was gratified by the many kind accolades that I received, but one thing was unsettling: a number of people wrote to congratulate me on my retirement. It was unsettling because I wanted to make explicit that I was not retiring; rather, I was passing leadership on to others, initially to interim leaders from the department and eventually culminating in a search for a new permanent Chair. In the meantime, I would continue the teaching, research, writing, and mentoring activities that I enjoy so much.

Nearly a half-year into my new status, I am indeed enjoying it. I still wake up each work day and get to do the intellectual work of informatics that I love most. My circumstance brings to mind the famous quote attributed to Samuel Clemens (Mark Twain), though he probably never said it in exactly those words, that reports of his death were greatly exaggerated.

I am therefore pleased to state that reports of my retirement are indeed exaggerated. I hope to continue working and being productive in the field for many more years. In fact, my plate is as full as ever, and my main work going forward will continue to be the teaching, research, and funded grant projects that currently keep me busy.

My "plate" includes, among other things, the following major activities:

As can be seen, I have plenty to keep me busy for some time to come. I plan to continue my work on all of above while supporting the new leadership of the department as it evolves in novel and interesting directions.

Monday, May 22, 2023

It's All Informatics to Me: Data Analytics, Data Science, Digital Health, AI, and Whatever is Next

In the last decade or so, a number of new names have emerged for the kinds of activities historically studied, disseminated, and taught by those who work in biomedical and health informatics. Each of these areas has emerged as a "hot topic" in biomedicine and healthcare, with resulting academic funding opportunities, new courses or degree programs, and even academic centers or institutes bearing their names.

I suppose I have some skin in this game because I have spent my life's work among those developing the field that I consider to go by the over-arching term for this all, biomedical and health informatics. My concern with the new areas and their names has been that they often ignore the historical and current work of informatics, and with it the decades of research, development, implementation, and evaluation of methods and tools that aim to improve health and healthcare through the use of data and information.

Another ramification of this proliferation of terms is that those not directly working in the field, who may include the leaders of academic and other institutions, may not appreciate the historic role of informatics. Thus I write this post to elaborate on the problem.

One early new term that started this trend about a decade ago was data analytics. As large amounts of digital data accumulated with the growth of the Internet, the large-scale implementation of electronic health records, and the emergence of wearable and mobile devices, there were new sources of data that could be analyzed for interesting discoveries. From this started the proliferation of academic courses, certificate/degree programs, and centers/institutes devoted to healthcare (and related areas) data analytics.

With the proliferation of machine learning, deep learning, and other methods emerged the new discipline of data science. Again, this was followed by the proliferation of academic courses, certificate/degree programs, and centers/institutes focused on data science. One thoughtful perspective on the relationship between informatics and data science was a paper by Payne et al. in JAMIA Open in 2018, which showed there was not only substantial overlap but also aspects of informatics that go beyond data science. Informatics includes implementation, evaluation, human-computer interaction, and workflow, to name a few. To use the language of the paper, these are the "efferent" processes of applying knowledge discovered from the "afferent" processes of data science (and informatics) to the real world.

In the meantime, especially with the growth of telehealth during the COVID-19 pandemic, there emerged a new term, digital health. Similar to data science, there is substantial overlap between digital health and informatics, with that overlap focused for many on the efferent processes of Payne et al. However, for many, the work of digital health is really no different from what informatics has historically done and continues to do.

And now there is the new term on the block, which is actually not a new term at all but a re-emergence of an area that was the focus of informatics a half-century ago. This term is of course artificial intelligence (AI). Clearly the advances of modern AI are very real and important. The new AI will likely have profound impacts on biomedicine and health, although we also need to see real-world implementation and evaluation to know what truly works.

My goal here is not to discount any of the important advances that have emerged under the banners of data analytics, data science, digital health, or AI. Rather, it is to express my concern that many tout these new disciplines with little awareness of the historic role that informatics has played in them all. I understand that funders, institutional leaders, and others sometimes want to put their resources into things that are "new." But I am also disappointed when these new areas re-invent the methods and tools of informatics.

The best of informatics imposes a rigor of thinking that these newer disciplines do not always apply. Informatics is driven by problems in the domains of health, healthcare, public health, and clinical research. Informatics imposes health and clinical realism on systems as they are developed, implemented, and evaluated. I hope that these new areas, and whatever emerges next, will not ignore the lessons learned by informatics but will incorporate them into their work.

Tuesday, April 11, 2023

Learning Data Science and Machine Learning with Orange

I have written over the years that understanding data science (DS), machine learning (ML), artificial intelligence (AI), and related topics is essential for anyone who works in a profession where tools using them might be employed. This is certainly the case in biomedicine, both in clinical practice and research. In the same sense that all physicians should have basic knowledge of pharmacology, radiology, and other topics, even if they are not experts in the science and methods of those diverse disciplines, all in healthcare should be ready for ML and AI tools coming to their work.

Of course, learning to apply ML and AI has historically required a background in math and computer programming that most in healthcare do not have. Fortunately, a number of "no-code" DS and ML tools have emerged in recent years. In some ways they are analogous to statistical packages, which allow anyone who knows how to use them to load data and run statistical tests. Of course, that does not mean that the operator of statistical software will apply the tests or interpret the results correctly, which is especially important for large or complex data sets where appropriate statistical expertise is essential.
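To make the analogy concrete, here is a minimal sketch of the kind of test a statistical package runs at the click of a button: a two-sample permutation test on hypothetical blood pressure readings (the data and function are my own invention for illustration, using only the Python standard library). Getting a p-value out is easy; knowing whether the comparison was appropriate is the hard part.

```python
import random
import statistics

def permutation_test(a, b, n_iter=5000, seed=0):
    """Two-sided permutation test: how often does a random relabeling of the
    pooled data produce a difference in means at least as large as observed?"""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical systolic blood pressure readings for two groups of patients
treated = [118, 122, 115, 127, 120, 124, 119, 121]
control = [128, 135, 121, 142, 130, 138, 125, 133]
p = permutation_test(treated, control)
print(f"p = {p:.4f}")  # a small p-value; the group means clearly differ
```

The mechanics are a few lines of code, but whether a difference in means is the right question for a given data set is exactly the judgment that no package supplies.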

One no-code ML tool that I find particularly easy to understand and use is Orange Data Mining. Orange was developed at the University of Ljubljana in Slovenia and is an open-source project with a large community supporting it. It runs on Mac and Windows machines, for which binary packages can easily be downloaded and installed. Orange is programmed in Python and, as such, can be extended with Python code. It comes with sample data sets and allows easy loading of new ones. The original focus of Orange was on structured data and the machine learning models that operate on it, including the main ML tasks of classification (e.g., logistic regression, kNN, and neural networks) and regression (e.g., linear regression, random forest, and neural networks). There are also methods for visualizing data and for evaluating models; the latter include techniques such as splitting data into training and test sets and cross-validation, both easy to configure. There are also various methods for imputation of missing data. In addition, modules have been developed for image analysis and text mining.

Orange is used by dragging widgets that represent data sets, visualization methods, models, and more onto a canvas called a workflow. Data flows between widgets by simply dragging connections between them. Once one becomes facile with dragging and connecting widgets, workflows can be built up quickly and easily. They can also be saved and reloaded later.

One can start with one of the built-in data sets or use the File widget to load one's own data. Most common rectangular data formats are accepted, such as CSV, tab-delimited, and Excel files. Included among the sample data sets is one for heart disease. The data itself is old (Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J., Sandhu, S., Guppy, K., Lee, S., & Froelicher, V. International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology, 1989, 64, 304-310.) and small (303 records with 13 variables), but the data set offers a good way to get started with Orange using clinical data. The outcome variable (called the target variable in Orange) is coronary artery diameter narrowing and the predictor variables include gender, chest pain, serum cholesterol, systolic blood pressure, and more.

The screenshot below shows a workflow I built using the heart disease data set. As in most data science analyses, one can begin with methods to explore and visualize the data. I connected a number of widgets that allow the data to be viewed in a data table and visualized in the form of feature statistics (also shown below), distributions, and box plots. I also created some models using the Test and Score widget connected to a number of different models appropriate for classification, in this case logistic regression, naive Bayes, and a neural network. One can view the classification accuracy, F1 score, and other evaluation metrics (also shown below) and connect the results to widgets that compare ROC curves and generate confusion matrices for each model. (Perhaps not surprisingly, logistic regression works best!) Likewise, I created some models for regression with ST-segment elevation as the outcome variable, using linear regression, random forest, and a neural network, which can also be seen on the workflow.
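For readers curious about what the Test and Score widget is doing behind the scenes, here is a rough code analogue, a sketch using scikit-learn rather than Orange itself, with a synthetic stand-in for the heart disease data (the same shape of 303 records and 13 variables, but not the actual Detrano et al. data): cross-validated accuracy for the same three classifier types.

```python
# Rough analogue of Orange's Test and Score widget, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 303 rows, 13 features, binary target
# (hypothetical data, not the actual heart disease set).
X, y = make_classification(n_samples=303, n_features=13, n_informative=8,
                           random_state=42)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Neural network": MLPClassifier(max_iter=2000, random_state=42),
}

# 10-fold cross-validation, reporting mean classification accuracy (CA)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean CA = {scores.mean():.3f}")
```

The point of Orange, of course, is that the widgets configure all of this visually; the code version mainly shows how little is hidden behind the canvas.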

It is fairly easy to get started using Orange. A set of video tutorials on the Orange YouTube channel covers the basics as well as a number of advanced topics. My colleague Dr. Robert Hoyt and his collaborator, Dr. David Pashtikoff, have been developing workshops and other learning materials using Orange, available at a Web site they have developed called NoCode Data Science. They plan to publish a textbook, and setting up a login on the site allows access to a number of additional data sets that can be loaded into Orange.

As with statistical packages, the ability to load data into Orange and apply ML models does not mean that one is doing so correctly. On the other hand, my simple workflow just scratches the surface of what can be done with Orange, even for this tiny data set.

There are limitations of Orange as a data mining tool. I do not know how scalable it is as data sets get large (all data is kept in computer RAM), and of course any serious ML analysis requires more than the ability to just plug data into models. Nonetheless, Orange can be a great tool for learning machine learning without getting bogged down in coding, debugging, and the like. I see a great future for it in teaching ML as the reach of AI advances.

Friday, March 17, 2023

COVID-19, Three Years On

This month marks three years since the start of lockdowns in the United States for the COVID-19 pandemic. As I imagine is true for most people, few upheavals in my life have had as profound and sustained an impact as three years of the pandemic. The good news is that the pandemic seems to be receding to endemic status, and while SARS-CoV-2 is still a threat to many, life is mostly getting back to some semblance of normal.

Looking back, the year 2020 began like most others. In January, I traveled to Singapore, as I usually did in that month each year, to culminate an offering of the 10x10 course. Little did I realize that would be my last international trip for a year and a half. My final trip prior to the pandemic was a short jaunt to Salt Lake City to give Biomedical Informatics Grand Rounds at the University of Utah. By then, the people on flights and at airports were thinning out. I had decided right before leaving for Salt Lake City to cancel a planned trip to the annual HIMSS Conference in Orlando the following week. Within a few days, that conference was cancelled. I remember the flight back from Salt Lake City, wondering to myself how long it would be before I got on an airplane again. I never would have believed it would be about a year.

I still remember the dark, early days of the pandemic. Such a lockdown of society was unlike anything I had ever experienced, and our ignorance of the novel SARS-CoV-2 virus was unsettling. All of a sudden, work profoundly transformed from being based in the office to being at home. Fortunately the presence of video tools such as Zoom and WebEx, along with social media, enabled most of us to stay in touch with friends, family, and colleagues. I initially thought the lockdown would be much like the original SARS epidemic of 2003, lasting a few months. I never would have believed that mostly-remote work would continue to now, three years later.

After three years, the world is returning to some semblance of normal. In a few months, the COVID-19 emergency in the US will officially end. I am comfortable with this decision, although the end of the emergency does not mean that the threat of this virus, or of new ones that may emerge, is not real. Just as we have ongoing risk from major killers such as cancer and heart disease, which we can mitigate through public health measures, the risk of COVID-19 will continue, and the end of the public health emergency does not mean "COVID is over."

I prefer to get my information from medical journals and reputable experts. Recent perspective pieces in New England Journal of Medicine and Nature summarize the situation well. I have been a follower of Dr. Eric Topol's tweeting and now Substack during the pandemic, and a recent perspective from him is enlightening. And the Washington Post had a good op-ed asking a number of COVID-19 experts how they are adapting to the current situation.

My view is that now that COVID-19 is receding to endemic status, we can revisit our approaches to the virus. The virus will be with us going forward and will continue to be a major cause of death and disability. But I am comfortable with moving to an individual risk assessment approach. I still take precautions. While I no longer routinely mask in public, including indoor settings, I always carry a mask and use it sometimes in crowded indoor settings. I always mask when someone else asks me to do so. I am up to date on vaccines, having had five, including the bivalent booster (as well as one mild natural infection in 2022).

Probably the saddest aspect of the COVID-19 pandemic is the politicization of public health. I get that there are trade-offs between personal liberty and the public good. And I believe an honest re-assessment of what we did right or wrong early in the pandemic is not a bad idea. Clearly we were flying blind in those dark early days. While I do not believe that the early lockdowns and mask requirements were unreasonable at the time, we can have an honest re-assessment now of what the best approach is. At the same time, we can never forget that over a million US lives have been lost and many more disrupted by the virus. Those who pushed unproven treatments and approaches like "let 'er rip" herd immunity strategies likewise need to be called out for their errors.

The threat of COVID-19 is still very real. It remains the third-leading cause of death in the US. It creates a substantial risk for those who are vulnerable, such as those who are immunocompromised or elderly. Data from the CDC and around the world show that while the vaccine does not eradicate the virus or completely prevent its transmission, it does reduce the risk of death and hospitalization, especially for those at most risk.

By the same token, I feel comfortable going mostly maskless, including on airplanes, going to the gym, and at conferences and other public venues. I know that I could again get infected by SARS-CoV-2, but I believe my risk of serious illness is small. I like to think of myself as a COVID "tweener," taking the virus seriously but comfortable returning to relatively normal living. I am ready to return to more aggressive protection should pandemic status recur.

Monday, February 6, 2023

Who Should Program?

Like many who work with computers and other information technology, I enjoy programming. I got my start in programming when I was first exposed to computers in high school in the 1970s, when almost anything you could do with computers required programming. Like many at the time, the first programming language I learned was BASIC. I subsequently cycled through many of the common programming languages of the 1970s and 1980s, including PL/1, Pascal, and C (although, for whatever reason, I never learned FORTRAN or COBOL). I even dabbled some in assembly language during my medical informatics postdoc fellowship in the late 1980s. Once I started my biomedical informatics academic faculty position in 1990, I began to follow the lead on programming languages from students and other trainees, through Perl, Python, R, and others.

But as I ascended the ranks of academic leadership, my time for programming diminished. I still have periodic bursts of learning and programming now, but do not have the sustained time available to be productive with them. In some ways, programming is akin to the practice of medicine for clinicians who work in academic informatics - one may be able to keep up with the knowledge of clinical medicine, or even master access to knowledge through modern tools - but still the practice of medicine requires a critical mass of time commitment for one to remain an effective clinician. I learned this in the early 2000s when my informatics work pretty much crowded out my ability to effectively provide patient care, especially trying to practice as a primary care internist.

Interestingly, I sometimes get into discussions with physicians who are training in informatics and want to commit a great deal of time and effort to programming. Some of them can pull it off, but as I note to many of them, they are probably too expensive to be programmers (even as salaries for high-end programmers reach the lower strata of physician salaries). Furthermore, physicians have knowledge and perspectives that may be most useful for tasks other than programming, such as conceptual thinking and clinical leadership.

It is actually questionable whether programming is best left to academics, including biomedical researchers. One recent analysis examined replication data sets and code published in the Harvard Dataverse Repository between 2010 and 2020 (1). It found that about three-quarters of R programs failed to execute correctly, although there was some improvement when code-cleaning was applied.

Another challenge is that code is sometimes maintained by only a small number of people, or even just one person, as in the case of a 1980s particle physics program whose sole maintainer has now retired (2); some have expressed concern for continued progress in that field. Previous work found similar problems in computational chemistry (3) and functional MRI (4) analyses. And many geneticists have had to deal with idiosyncrasies such as the propensity of Microsoft Excel to change gene names to dates (e.g., MARCH1 to 1-Mar) (5).
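As a small illustration of the Excel problem, here is a hypothetical guard one might add to a data-cleaning script: flag entries in a gene-symbol column that look like they have already been auto-converted to Excel's day-month date format. The function and sample column are my own invention for illustration, not from any cited work.

```python
import re

# Excel renders auto-converted dates like "1-Mar" (MARCH1) or "2-Sep" (SEPT2)
DATE_LIKE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$")

def flag_date_mangled(symbols):
    """Return entries in a gene-symbol column matching Excel's d-Mon format."""
    return [s for s in symbols if DATE_LIKE.match(s)]

column = ["TP53", "1-Mar", "BRCA1", "2-Sep", "SEPT2"]
print(flag_date_mangled(column))  # -> ['1-Mar', '2-Sep']
```

A check like this catches the damage after the fact; the more robust fix, as the geneticists ultimately chose, is to rename the genes or avoid loading gene symbols through Excel at all.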

By the way, I am in no way advocating that modern-day higher-education students, especially in science fields, not learn computer programming as part of their education. Programming is an important skill and discipline, facilitating critical and logical thinking. As computers play an important role in so many knowledge-based disciplines in modern times, it is critical to de-mystify them and allow them to be used most effectively in all scientific disciplines.

Another important development over the years has been the emergence of tools that allow management and manipulation of data with minimal amounts of programming. Probably the earliest example of this was the spreadsheet. Also of note are statistical software packages that allow one to run a myriad of statistical tests with little or no programming. Of course, in both of these examples, one must still understand their data and appropriate inferences drawn from it. Another more recent example of this is packages such as Orange, which provide a visual programming interface to many data science and machine learning methods. I will have more to say about Orange in a future post.

We can also gain insight from the recent development of ChatGPT. One prominent early use has been asking it to write code for specific functions in different languages. While ChatGPT does produce code that looks like what was asked for, programming support sites like Stack Overflow have banned postings of such code because, although it looks correct, it often contains subtle errors or other problems (6).

It is clear that there will always be a role for learning and exploring with computer programming. But given the complexity of modern software, and the need for programs to be robust, correct, and secure, those who enjoy programming should be realistic about the commitment required to carry it out at "industrial-strength" levels.


1. Trisovic, A., Lau, M.K., Pasquier, T., Crosas, M., 2022. A large-scale study on research code quality and execution. Sci Data 9, 60. https://doi.org/10.1038/s41597-022-01143-6

2. von Hippel, M., 2022. Crucial Computer Program for Particle Physics at Risk of Obsolescence. Quanta Magazine. https://www.quantamagazine.org/crucial-computer-program-for-particle-physics-at-risk-of-obsolescence-20221201/

3. Hirschler, B., 2014. Exclusive: Software issue casts doubt over data used to approve some drugs. Reuters. http://finance.yahoo.com/news/exclusive-software-issue-casts-doubt-over-data-used-121215317--finance.html

4. Eklund, A., Nichols, T.E., Knutsson, H., 2016. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proc Natl Acad Sci U S A 113, 7900–7905. https://doi.org/10.1073/pnas.1602413113

5. Vincent, J., 2020. Scientists rename human genes to stop Microsoft Excel from misreading them as dates [WWW Document]. The Verge. https://www.theverge.com/2020/8/6/21355674/human-genes-rename-microsoft-excel-misreading-dates

6. Vincent, J., 2022. AI-generated answers temporarily banned on coding Q&A site Stack Overflow. The Verge. https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers

Wednesday, January 25, 2023

ChatGPT Confabulates and is Unprofessional

Since my own explorations of ChatGPT's informatics knowledge last month, others using the system have racked up some impressive results. Most pertinent to biomedical informatics is ChatGPT's passing grades on medical student licensing exams. It has also performed well on Wharton MBA and multistate bar exams.

Does this mean that machine intelligence has equaled human professionals? My subsequent encounters with ChatGPT lead me to maintain my skepticism. Not that ChatGPT is not a tremendous feat, but it is not (yet, anyway) capable of performing as a physician, lawyer, or MBA.

This conclusion is evidenced by another recent interaction I had with ChatGPT, the transcript of which is at the bottom of this post. The interaction mirrors characteristics that many others have noted, in particular ChatGPT's tendency to confabulate. As seen in my latest interaction, it states things that are simply false. Not that humans never do this, but no one would rely on a physician, lawyer, or MBA known to flagrantly lie. As such, ChatGPT fails at one core task of being a competent professional in a given field, where there is an ethical responsibility to recognize and state when you do not know something. The best physician is one who knows the limits of his or her knowledge and skills. (Of course, ChatGPT lacks many other capabilities of a physician, from technical skill to empathy.)

This leads me to a thought experiment, perhaps somewhat biased by the fact that I am an information retrieval (IR) researcher. If you were confronted with a highly challenging intellectual task, which would you use first, Google (or another search system, maybe PubMed for medicine) or ChatGPT? In my mind, an IR system is much more appealing, given what IR systems provide by definition: pointers to information that may answer questions. Google is far from a perfect search engine, but when addressing a complicated intellectual task, I would much rather take my chances with IR system output with primary sources listed and accessible than the narrative without citations that comes out of ChatGPT.

I have no doubt that systems like ChatGPT will continue to advance, and that human knowledge workers will find ways to use them beneficially in their work. But I cannot imagine there not being a human in the loop somewhere, especially in medicine when you might need the skills of a surgeon, not to mention a physician with empathy and grace. I am also reminded of the quote from my colleague, radiologist Curtis Langlotz, probably speaking for all of medicine, who said, "Will AI replace radiologists? is the wrong question. The right answer is: Radiologists who use AI will replace radiologists who don’t."

Here is the transcript from my most recent interaction with ChatGPT, with my annotations in italics. I would also note, perhaps in a blow to my own ego, that ChatGPT's answer does not include any of my own IR writings, such as the recent edition of my biomedical IR textbook, which arguably are important to this field.

I entered into ChatGPT, What are some good readings for biomedical information retrieval?

There are many resources available for learning about biomedical information retrieval, including books, articles, and online courses. Here are a few suggestions:

"Introduction to Biomedical Information Retrieval" by C. J. van Rijsbergen is a classic textbook that provides a comprehensive introduction to the field.

Keith van Rijsbergen is a retired computer science professor and IR researcher who has written several books but not one called "Introduction to Biomedical Information Retrieval."

"Information Retrieval in Biomedicine" by Peter Willett is another comprehensive textbook that covers the basics of biomedical information retrieval as well as advanced topics.

Peter Willett is a retired computer science professor and IR researcher who has written several books but not one called "Information Retrieval in Biomedicine."

"Medical Information Retrieval" by J. V. McInerney is a book that provides an overview of the field and covers various topics such as search engines, natural language processing, and data mining.

I am not able to find any book titled "Medical Information Retrieval" or anyone in the IR or informatics field named J. V. McInerney.

"Biomedical Information Retrieval" by J. S. Ogasawara and M. R. Al-Rawi is a book that covers various aspects of biomedical information retrieval, including text mining, ontologies, and evaluation.

I am not able to find any book titled "Biomedical Information Retrieval" or anyone in the IR or informatics fields named J. S. Ogasawara or M. R. Al-Rawi.

"Biomedical Text Mining" by Haifeng Li and Linyun Fu is a book that covers the use of text mining techniques for extracting and analyzing information from biomedical literature.

There is a book named "Biomedical Text Mining" but it does not have authors Haifeng Li or Linyun Fu, neither of whom works in IR or informatics.

The Journal of Biomedical Informatics is a peer-reviewed journal that publishes research on various aspects of biomedical information retrieval.

This is true!

The International Conference on Medical Information Retrieval is an annual conference that brings together researchers and practitioners in the field to present and discuss their work.

I am not aware of any such conference by this name.

I hope these suggestions are helpful! Let me know if you have any other questions.

I will!

Saturday, December 31, 2022

Annual Reflections at the End of 2022

I am keeping my tradition in this blog this year of ending the calendar year with a retrospective on the past year and some musings looking forward.

Clearly our lives were still dominated in 2022 by the COVID-19 pandemic. While a fair amount of living has returned to relative normal, there are some changes that will likely never fully reverse. Certainly a major one for myself and my immediate co-workers is the likely continuation of mostly remote work.

Looking back, my last annual year-end post from the "before times" was in 2019. It was an ordinary post, discussing the history of the blog starting from the time of the passage of the HITECH Act, along with the continued growth of my teaching and use of Twitter. I noted over 2600 people having completed the 10x10 course and my surpassing 2600 followers on Twitter. Now at the end of 2022, 10x10 has exceeded 3100 people completing the course and my Twitter followers number over 3500. Of course, with the recent acquisition of Twitter by Elon Musk, the future of the platform is uncertain. That is unfortunate, since I find activities like #ScienceTwitter and #MedTwitter very helpful for keeping up with new advances, some of which inform my teaching.

The day of that 2019 year-end post was the day that the first case of SARS-CoV-2 infection was documented in China. Within a month cases started appearing in the US, and by March, the general US lockdown began, with the transition to remote work. While we have made great progress in treatments and vaccinations for COVID-19, my work is still mostly remote. In reality, this is an extension of pre-pandemic times, when I noted even then that my work was mostly asynchronous. And it is not hard to see how it will likely continue mostly that way, as there are plenty of advantages to my office commute being the walk down the stairs to my home office in my basement. I also enjoy being able to do my daily exercise (running or gym) in the late morning, followed by a healthy lunch, and then some quiet meditation after lunch before getting back to work. In the early days of the pandemic, the Internet was highly overtaxed in my neighborhood of many professionals working from home, but the infrastructure has since been built up such that my home Internet is well up to the task.

Back to this year, it was a good year for my professional and personal life. I made the decision to hand off the leadership of the department for which I have served as the one and only Chair. The Dean of the OHSU School of Medicine will announce the appointment of a new Interim Chair next week. As noted in my kudos posting from last week, my research and teaching work was highly productive in 2022, and the activities and grant funding will keep it going for at least several more years. I aim to keep my work focused on research, teaching, mentoring, and writing, and have no plans to retire any time soon. I will also maintain this blog, which with this posting now has 375 postings.

As for the future, COVID-19 will continue to impact all of what we do. But there is less to fear now, as we understand the virus and the disease it causes much better, and we have means to blunt the worst of its effects. Many quip that I have an optimistic view of the world and the future, and I am certain that will continue in 2023, both with COVID-19 and beyond it.

Monday, December 19, 2022

Kudos for the Informatics Professor - 2022 Update

The year 2022 was a highly productive one for the Informatics Professor, and I am pleased to provide my periodic update of my recent accomplishments. This year brought another big event: my decision to step down as the one and only Chair of the Oregon Health & Science University (OHSU) Department of Medical Informatics & Clinical Epidemiology. More will be posted about that when my successor is named. Meanwhile, here are my accomplishments for 2022.

I received two prominent awards this year. One was the William Stead Award for Thought Leadership in Informatics from the American Medical Informatics Association (AMIA), my main professional association. This award "acknowledges people who have influenced our thinking about informatics, especially improving health and health care in ways that are visionary and transformative." This year I was also elected to the second class of the ACM SIGIR Academy, which was established to "honor and recognize individuals who have made significant, cumulative contributions to the development of the field of information retrieval (IR)." I am thrilled to be honored by the two fields at whose intersection I work.

This year I completed my term as the Second President of the International Academy of Health Sciences Informatics (IAHSI). I will continue for another two years as Past President.

In 2022, I was awarded several new grants, which will help sustain the research and teaching that I plan to continue as I reduce my administrative and leadership roles. Probably the most important of these was the OHSU Training Grant in Biomedical Informatics and Data Science, which was renewed for another five-year cycle through 2027. This training grant from the National Library of Medicine (NLM) of the National Institutes of Health (NIH) funds predoctoral (PhD) and postdoctoral trainees in the OHSU Biomedical Informatics Graduate Program. This grant has been funded continuously at OHSU since 1992. This renewal marks the seventh consecutive five-year cycle of funding for OHSU. Its first 30 years make it the second-longest-running training grant at OHSU. The grant's total funding over that time, $21,912,538, makes it the largest training grant ever at OHSU. I myself am a product of this training grant, completing my postdoctoral fellowship at Harvard University from 1987-1990.

Another grant from the NLM for which I am PI is an R25 grant to fund the OHSU Biomedical Informatics and Data Science College Undergraduate Summer Internship Program for five years. The grant is among 12 awards issued by the NLM to prepare historically underrepresented students for biomedical informatics and data science graduate studies and research careers. The goal of our program is to provide students a rich, full-time experience that introduces them to graduate study and careers in biomedical informatics and data science. In addition, the program aims to motivate students to pursue doctoral training in biomedical informatics and data science that leads to research careers in academia, government, healthcare, and industry. The funding from the R25 grant will be used to support summer experiences for students with backgrounds that have been historically underrepresented in biomedical informatics and data science.

A final new grant for 2022 was from the new NIH Bridge2AI Initiative. I am among a group of faculty from OHSU and Washington University who will be leading the Skills and Workforce Development Module for one of the four funded data-acquisition projects. The project we are part of will focus on creating data sets that aim to use voice as a biomarker of health.

I also had a good year of publications in 2022, not the least of which was the release of the 8th edition of Health Informatics: Practical Guide, in which I serve as Editor and have authored about a third of the chapters. My journal publications for 2022 included the following:

In 2022, I was also a guest on a couple of podcasts. One was to provide an introduction to an episode of the For Your Informatics podcast on the gender pay gap among physicians, including those in informatics. The other was to be interviewed for the IAHSI podcast Exploring Healthcare Interoperability about my work in teaching about healthcare data standards and interoperability.

I also gave a number of invited talks in 2022:

  • Biomedical and Health Informatics: Impact, Challenges, and Opportunities, Nevada Institute of Personalized Medicine, University of Nevada Las Vegas (virtual), January 27, 2022 (slides and references)
  • Competencies and Curricula Across the Spectrum of Learners for Health Informatics, Keynote Talk, IEEE International Conference on Healthcare Informatics, First International Workshop on Health Informatics Education, Rochester, MN, June 11, 2022 (slides and references)
  • Translational Artificial Intelligence (AI): The Need to Translate from Basic Science to Clinical Value, University of Alabama Birmingham Informatics Institute (virtual), September 2, 2022 (slides and references and video)
  • Past, Present, and Future: A Discussion About DMICE Going Forward, OHSU Department of Medical Informatics & Clinical Epidemiology, October 6, 2022 (slides and video)
  • Applying Information Retrieval to the Electronic Health Record for Cohort Discovery and Rare Disease Detection, Brown University Center for Biomedical Informatics, Providence, RI, October 13, 2022 (slides and references)
  • Competencies and Curricula Across the Spectrum of Learners for Biomedical and Health Informatics, International Symposium on Achievements, Milestones and Challenges in Biomedical and Health Informatics, Athens, Greece, October 29, 2022 (slides)

In 2022 I also delivered my annual lecture to first-year OHSU medical students, Information is Different Now That You're a Doctor, on September 2, 2022. I also gave this talk to first-year medical students at Nova Southeastern University (virtual) on June 16, 2022.

I additionally attended my 37th consecutive AMIA Annual Symposium, where I participated in three activities:

  • Panel: Beyond Wrangling and Modeling: Data Science and Machine Learning Competencies and Curricula for The Rest of Us
  • Job Talk Panel for the AMIA Clinical Informatics Fellows (ACIF)
  • Debate: Extension of the Practice Pathway for ABMS Clinical Informatics Board Certification for Physicians in the United States

This year I also served as Co-Chair of the Scientific Program Committee of the conference hosted by Brown University, The Patient and The Practitioner in the Age of Technology: Promoting Healing Relationships.

In 2022, I also maintained my various scientific citation rankings. In the list of Best Computer Science Scientists, I now rank 694th in the US and 1183rd in the world based on a discipline-specific H-index they calculate. I also maintain a presence in the latest iteration of a database of the top 100,000 cited scientists in the world for 22 scientific fields and 176 subfields, where I rank 22,553rd based on a composite formula of citation and publishing variables. In the medical informatics subfield, I rank 13th. Of the 290 individuals affiliated with OHSU, I rank 50th. I also maintain my presence on ranking lists based on the Google Scholar h-index for biomedical informatics and information retrieval.
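The rankings above rely on site-specific composite formulas whose details are proprietary, but the underlying h-index that several of them build on is simple to compute: a scientist has index h if h of their publications have at least h citations each. A minimal sketch in Python (the citation counts below are made up purely for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break  # remaining papers have too few citations to raise h
    return h

# A hypothetical record of 5 papers with these citation counts
print(h_index([48, 20, 11, 4, 3]))  # h-index is 4
```

Sorting in descending order makes the definition directly checkable: the h-index is the last rank at which the citation count still meets or exceeds the rank.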

Finally, this blog is mentioned in various lists of best blogs, including the following:

Saturday, December 17, 2022

ChatGPT Postscript

It has been interesting to read about all of the ways that colleagues of mine and others have put ChatGPT to the test, in addition to my own tests of learning objectives, a biography, and multiple-choice questions. In one email thread, I joked that perhaps ChatGPT is a Rorschach Test of what interests people concerning the use of artificial intelligence.

I also was pointed to an interesting site that bills itself as a ChatGPT (actually trained on an earlier version of the OpenAI model, GPT-2) Output Detector Demo. I pasted in my biography from my first post, and the system declared the text had a 99.98% chance of being "fake," i.e., from GPT-2. When I pasted in the biographical paragraph from my own Web page, it declared the text to have a 99.97% chance of being real.

Another interesting reflection is to compare ChatGPT with information retrieval (IR, aka search). Perhaps I am biased as an academic, or someone greatly interested in IR since it is my primary focus of research, but usually when I look for information, I not only want to know the information, but also where it comes from and how trustworthy it is. A big limitation for me of ChatGPT is that it cites no references to back up what it says.

This gets to another academic concern about ChatGPT, which is how it will impact assessment of learning. Although ChatGPT seems to work best for relatively short passages of text that do not require references, there are fortunately many other ways to assess learning.

There have also been some good overviews in the news media about ChatGPT, including an interview of ChatGPT itself. There is also a nice description from the New York Times.

Tuesday, December 13, 2022

ChatGPT, Round Two

After sending some comments about my blog post on my initial excursions with ChatGPT earlier this week, I received a great deal of interesting follow-up. One colleague lamented that we may no longer be able to assess the knowledge of students, especially when teaching in online classes, because of their ability to easily generate answers to questions using systems like ChatGPT, especially if they could add to the text by Googling a few references to add to their answers.

I am not convinced of this, at least not with the use of ChatGPT, whose answers I noted were superficial, not completely correct, and not backed up with citations. Furthermore, asking for short answers to questions is hardly the only way to assess knowledge.

So to put ChatGPT to the next test, I pasted in, one by one, the 10 multiple-choice questions I give for the same unit of my introductory informatics course whose learning objectives I used in my first assessment.
While I have to give ChatGPT some credit for figuring out that these were multiple-choice questions, it performed pretty poorly compared to most students in the course. It answered only 5 of the 10 questions correctly, and for two of the ones it got wrong, it did not even try to answer, giving the excuse that it could not browse the Internet (despite having been trained on it!).
Some of the incorrect answers involved facts that even a beginning informatics student would be embarrassed not to know, such as which US government agency funds informatics training, that "meaningful use" was not part of the Affordable Care Act, and that a machine learning algorithm not involving clinicians does not adhere to Friedman's Fundamental Theorem.
The transcript of my interactions is pasted below. The text entered into ChatGPT included neither the question numbers nor the correct answers, which are indicated here by asterisks. The paragraph following each question is how ChatGPT replied. My conclusion is that, for now, those of us who teach informatics in virtual settings do not (yet) have to worry about ChatGPT being used to cheat in our courses.

ChatGPT Transcript - December 10, 2022

1. Which part of biomedical and health informatics is most likely to address the critical informatics issues for the practice of a family physician?
a. Bioinformatics
*b. Clinical informatics
c. Clinical research informatics
d. Imaging informatics
e. Public health informatics

Clinical informatics is most likely to address the critical informatics issues for the practice of a family physician. This subfield of biomedical and health informatics focuses on the use of information technology to support the delivery of healthcare and improve patient outcomes. Clinical informatics encompasses a wide range of topics, including electronic health records, health information exchange, clinical decision support, and population health management. It is concerned with the design, development, and implementation of systems and tools to support the clinical work of healthcare professionals, such as family physicians.

2. Before the era of widespread computer use, the typical hospital physician spent what proportion of his or her time in activities away from the patient during office practice?
*a. 23-39%
b. 50%
c. 61-76%
d. 100%

I'm sorry, but I am not able to browse the internet to answer your question as I am a large language model trained by OpenAI. My knowledge is based solely on the text that I have been trained on, which has a cutoff date of 2021. I do not have any additional information beyond what I have been trained on.

3. Which entity of the US government funds research and training in biomedical and health informatics?
a. Agency for Healthcare Research & Quality
b. Center for Medicare and Medicaid Services
c. National Committee for Vital & Health Statistics
*d. National Library of Medicine

The National Institutes of Health (NIH) is the primary entity of the US government that funds research and training in biomedical and health informatics. The NIH is a federal agency that is part of the Department of Health and Human Services. It is the largest biomedical research agency in the world and provides funding for a wide range of research and training programs in the field of biomedical and health informatics. Some examples of NIH-funded programs in this area include the National Library of Medicine's Training and Education in Biomedical Informatics program and the Clinical and Translational Science Awards program, which support research and training in clinical informatics.

4. Before the emergence of the Affordable Care Act, a number of proposed healthcare reform plans for the US would provide for the establishment of a small number of highly regulated private insurance companies in a given region or state. This type of plan would be described by JR Reid as:
a. Beveridge
*b. Bismarck
c. National health insurance
d. Out-of-pocket

The type of healthcare reform plan described in the question, which involves the establishment of a small number of highly regulated private insurance companies in a given region or state, would be described by JR Reid as a Bismarck-type plan. Bismarck-type plans are named after Otto von Bismarck, the 19th century Prussian statesman who introduced the first national health insurance program in the world. Bismarck-type plans are characterized by a high degree of government control over the healthcare system, with the state playing a central role in regulating the provision of healthcare services and financing. In these plans, private insurance companies are typically required to operate within a regulatory framework set by the government, and may be subject to price controls and other forms of regulation. This type of plan is common in many European countries.

5. What proportion of US healthcare spending is devoted to prescription drugs?
*a. 10%
b. 20%
c. 33%
d. 50%

I'm sorry, but I am not able to browse the internet to answer your question as I am a large language model trained by OpenAI. My knowledge is based solely on the text that I have been trained on, which has a cutoff date of 2021. I do not have any additional information beyond what I have been trained on.

6. Compared to most other wealthy countries, the US has which of the following?
a. More hospital days
*b. More money spent
c. More nurses
d. More physicians

Compared to most other wealthy countries, the US has more money spent on healthcare. The US spends more on healthcare as a percentage of its gross domestic product (GDP) than any other country in the world. According to data from the World Health Organization, the US spent 17.2% of its GDP on healthcare in 2018, compared to an average of 9.6% for other high-income countries. This high level of spending is due in part to the relatively high cost of healthcare services in the US, as well as the large number of uninsured and underinsured individuals in the country.

7. Which of the following is not part of the original Affordable Care Act (ACA)?
*a. Incentives for meaningful use of electronic health records
b. Mandate that individuals buy health insurance
c. Outlawing of insurance denial due to pre-existing medical conditions
d. Programs to establish Accountable Care Organizations (ACOs)

Programs to establish Accountable Care Organizations (ACOs) are not part of the original Affordable Care Act (ACA). ACOs are groups of healthcare providers who come together to coordinate care for a specific group of patients, with the goal of improving the quality of care and reducing costs. ACOs were introduced as part of the Medicare program under the Patient Protection and Affordable Care Act of 2010, but were not part of the original ACA legislation. The ACA included a number of other provisions, such as incentives for the meaningful use of electronic health records, a mandate that individuals buy health insurance, and the outlawing of insurance denial due to pre-existing medical conditions.

8. About 50% of US healthcare spending is accounted for by which proportion of the population?
a. 1%
*b. 5%
c. 25%
d. 50%

Approximately 50% of US healthcare spending is accounted for by 5% of the population. This phenomenon, known as the "5/50 rule," refers to the fact that a small proportion of the population accounts for a disproportionate amount of healthcare spending. The 5/50 rule is often used to illustrate the unequal distribution of healthcare costs in the US, with a small group of individuals with chronic conditions or complex healthcare needs accounting for a significant portion of total spending. This pattern is driven in part by the high cost of treating certain conditions, such as cancer and heart disease, as well as the high cost of specialty drugs and other medical technologies.

9. Which of the following applications does not adhere to Friedman’s fundamental theorem of informatics?
*a. A machine learning algorithm that processes radiology images and determines the diagnoses more accurately than a physician
b. A telemedicine system that allows remote interaction via videoconference
c. A reminder system to administer cancer screening exams on an appropriate schedule
d. An information retrieval (search) system that helps the user select terms to improve search results

A telemedicine system that allows remote interaction via videoconference does not adhere to Friedman's fundamental theorem of informatics. The fundamental theorem of informatics, proposed by Edward H. Friedman, states that "the value of medical information is inversely proportional to the time and place of its creation and directly proportional to the time and place of its use." This theorem suggests that the value of medical information decreases as it becomes older and less relevant, and increases when it is used at the time and place where it is needed. A telemedicine system that allows remote interaction via videoconference does not necessarily adhere to this theorem, as it allows healthcare providers to access and use medical information from different times and places. An example of an application that adheres to the theorem would be a reminder system that administers cancer screening exams on an appropriate schedule, as it uses medical information at the time and place where it is needed to improve patient care.

10. A patient undergoing an appendectomy at a hospital near his suburban home is most likely receiving what type of healthcare?
a. Primary care
*b. Secondary care
c. Tertiary care
d. Quaternary care

A patient undergoing an appendectomy at a hospital near his suburban home is most likely receiving secondary care. Secondary care refers to healthcare services that are provided by specialized medical professionals, such as surgeons, specialists, and other healthcare providers. These services are typically provided in hospitals or other specialized medical facilities, and often require referral from a primary care provider. An appendectomy, which is a surgical procedure to remove the appendix, is an example of a medical procedure that is typically performed in a hospital or other specialized medical facility, and would be classified as secondary care. Primary care, on the other hand, refers to the first level of contact between patients and the healthcare system, and typically involves services such as preventive care, routine check-ups, and treatment of minor illnesses. Tertiary care and quaternary care are higher levels of specialized care that are provided for patients with complex or chronic conditions.

Sunday, December 11, 2022

Initial Excursions with ChatGPT

Like many people, I found myself caught up this past week testing out ChatGPT, the new automated chat agent released by OpenAI. It seems like many people across the Internet are using it to test out its functioning in the kind of work they do, whether they are writers, programmers, or my fellow informatics researchers.

Naturally, I tested ChatGPT doing what I enjoy most, namely teaching. My first test was to enter the learning objectives for my well-known introductory biomedical and health informatics course, aka, 10x10 ("ten by ten"). Being the academic that I am, I was also curious what ChatGPT might report for my contributions to the informatics field. The transcript of my interaction is provided below, but first some commentary.

ChatGPT did a much better job with my learning objectives than with its report of my accomplishments in informatics. In fact, it got quite a bit wrong about my own work. For example, it reported me as Editor of the Journal of the American Medical Informatics Association (JAMIA) and the Journal of Medical Internet Research (JMIR), something my colleagues Sue Bakken and Gunther Eysenbach might dispute. It also gave erroneous information about books that I "authored." While I have edited and authored my textbook Health Informatics: Practical Guide, authored my textbook on information retrieval, and contributed chapters to many other books, I have no involvement whatsoever in the books listed by ChatGPT, Introduction to Health Informatics or Essentials of Health Information Systems and Technology. ChatGPT also listed first among my accomplishments in the field, "work on the development and implementation of EHRs." While I teach a great deal about electronic health records (EHRs), it would be a stretch to say I am involved in their development and implementation. Clearly ChatGPT is an inferior alternative to Googling oneself.

For the learning objectives, ChatGPT did a little better, giving reasonable, if incomplete, discussions of the topics in each. It still got some facts wrong, and also omitted what I might consider important details one would want to see in a succinct summary. Perhaps more importantly, ChatGPT provided no citations for what it said, which would have allowed a learner to check its interpretation or pursue more detail about the topic. I suppose the good news is that one would still do better to get the more exhaustive text, figures, and citations covered in the 10x10 course (samples of which can be viewed at my site, informatics.health).

These are obviously early days for tools such as ChatGPT, and no doubt they will improve over time. But just as machine learning algorithms are not quite ready to replace human clinicians, they also are not quite ready to overtake other human tasks, such as teaching, at least not yet.

ChatGPT Transcript - December 10, 2022 (my text entry in italics)

Define biomedical and health informatics, the terms related to it, and its role in health, healthcare, public health, and biomedical research

Biomedical and health informatics is a field that focuses on the use of technology and information to improve the delivery of healthcare, public health, and biomedical research. It is an interdisciplinary field that combines computer science, data science, and healthcare to develop and use information systems and technology to manage, analyze, and interpret health-related data.

Some of the terms related to this field include electronic health records (EHRs), health information technology (HIT), health information management (HIM), and health informatics. These terms all refer to different aspects of using information and technology in the healthcare and public health sectors.

The role of biomedical and health informatics in healthcare is to improve the efficiency and effectiveness of care delivery by using technology and information systems to manage and analyze health-related data. This can include the use of EHRs to manage patient records, the use of HIT to monitor and manage the health of patients remotely, and the use of HIM to ensure the quality and security of health information.

In public health, biomedical and health informatics can be used to monitor and analyze population-level health data to identify trends and patterns, and to develop interventions and policies to improve the health of the population.

In biomedical research, this field can be used to manage and analyze large amounts of data from clinical trials, genetic studies, and other research to advance our understanding of health and disease.

Overall, the goal of biomedical and health informatics is to use technology and information to improve the quality, efficiency, and accessibility of healthcare, public health, and biomedical research.

Describe the major historical events of biomedical and health informatics

The field of biomedical and health informatics has a long and rich history, with many important events that have shaped its development. Some of the major historical events in this field include:

  1. The development of electronic medical records (EMRs) in the 1950s, which marked the beginning of the use of computers in healthcare to store and manage patient information.
  2. The establishment of the National Library of Medicine (NLM) in the United States in 1956, which became a major repository for health-related information and a driving force in the development of health informatics.
  3. The creation of the first Health Information Management (HIM) program at the University of Minnesota in 1967, which marked the beginning of formal education and training in health informatics.
  4. The development of the first clinical decision support system (CDSS) in the 1970s, which used computers to assist doctors in making diagnostic and treatment decisions.
  5. The introduction of the first personal health record (PHR) in the 1980s, which allowed patients to manage their own health information and access it from any location.
  6. The widespread adoption of the internet in the 1990s, which facilitated the sharing and dissemination of health-related information and the development of online health communities.
  7. The passage of the Health Insurance Portability and Accountability Act (HIPAA) in 1996, which established national standards for the privacy and security of health information.
  8. The growth of mobile health (mHealth) technologies in the 2000s, which enabled the use of smartphones and other devices to monitor and manage health-related information.

Overall, these and other historical events have played a significant role in the development and evolution of biomedical and health informatics as a field, and have had a profound impact on the way healthcare is delivered and research is conducted.

Discuss the major problems in healthcare motivating use of biomedical and health informatics

There are many challenges and problems facing the healthcare industry today that are motivating the use of biomedical and health informatics. Some of the major problems in healthcare that are driving the use of this field include:

  1. The increasing cost of healthcare: Healthcare costs are rising rapidly, and many people are struggling to afford the care they need. Biomedical and health informatics can help to reduce these costs by using technology and data analysis to improve the efficiency and effectiveness of care delivery.
  2. The growing burden of chronic diseases: Chronic diseases, such as diabetes, heart disease, and cancer, are becoming more prevalent and are a major cause of morbidity and mortality. Biomedical and health informatics can be used to monitor and manage these conditions, and to identify and intervene early to prevent their progression.
  3. The shortage of healthcare workers: There is a shortage of healthcare workers in many parts of the world, which is putting a strain on the healthcare system and making it difficult for people to access care. Biomedical and health informatics can help to alleviate this shortage by enabling healthcare workers to work more efficiently and by supporting remote care delivery.
  4. The need for better population health management: Many public health challenges, such as obesity, substance abuse, and infectious diseases, require a population-level approach to prevention and management. Biomedical and health informatics can be used to monitor and analyze population-level health data, and to develop and implement interventions to improve the health of the population.
  5. The need for better data management and analysis: The healthcare industry generates vast amounts of data, but much of it is unstructured and difficult to manage and analyze. Biomedical and health informatics can help to organize and analyze this data to extract valuable insights and inform decision-making.

Overall, the major problems in healthcare are driving the use of biomedical and health informatics to improve the quality, efficiency, and accessibility of healthcare and to address some of the most pressing challenges facing the healthcare industry.

Compare and contrast the roles of various individuals in the health information technology workforce

The health information technology (HIT) workforce includes a wide range of individuals with different roles and responsibilities. Some of the major roles in the HIT workforce include:

  1. Health information managers (HIMs): HIMs are responsible for the management and analysis of health-related data, including the design and implementation of health information systems and the development and maintenance of electronic health records (EHRs).
  2. Health information technologists (HITs): HITs are responsible for the technical aspects of health information systems, including the design, development, and maintenance of software and hardware systems used in healthcare.
  3. Clinical informaticists: Clinical informaticists are healthcare professionals with expertise in both medicine and health informatics. They use their knowledge of both fields to improve the quality and safety of care delivery, and to support the use of health information technology in clinical settings.
  4. Health information exchange (HIE) coordinators: HIE coordinators are responsible for facilitating the exchange of health-related information between different healthcare organizations and systems. This includes ensuring the interoperability of different systems, and the security and privacy of health information.
  5. Chief information officers (CIOs): CIOs are responsible for the overall management of information technology in healthcare organizations, including the development and implementation of information technology strategies and the supervision of IT staff.

Overall, the roles in the HIT workforce are diverse and varied, and individuals in these roles work together to support the use of health information technology in healthcare and public health.

Find and access the major sources of information for biomedical and health informatics in the scientific literature and on the World Wide Web

There are many sources of information for biomedical and health informatics in the scientific literature and on the World Wide Web. Some of the major sources of information for this field include:

  1. PubMed: PubMed is a free database of biomedical literature maintained by the National Institutes of Health (NIH) in the United States. It contains millions of abstracts and full-text articles from scientific journals, and is a valuable resource for researchers and practitioners in biomedical and health informatics.
  2. IEEE Xplore Digital Library: The IEEE Xplore Digital Library is a comprehensive collection of technical literature in electrical engineering, computer science, and related fields. It contains many articles and conference proceedings on topics related to biomedical and health informatics, and is a valuable source of information for researchers in this field.
  3. Health Information and Libraries Journal: The Health Information and Libraries Journal (HILJ) is a peer-reviewed journal published by the Health Libraries Group of the Chartered Institute of Library and Information Professionals (CILIP) in the United Kingdom. It covers a wide range of topics related to health information and health informatics, and is a valuable source of information for librarians and other professionals working in this field.
  4. Journal of the American Medical Informatics Association (JAMIA): JAMIA is a peer-reviewed journal published by the American Medical Informatics Association (AMIA). It covers a wide range of topics related to health informatics, including the use of technology and information to improve the delivery of healthcare and public health.
  5. Health IT Buzz: Health IT Buzz is a blog maintained by the Office of the National Coordinator for Health Information Technology (ONC) in the United States. It provides news and updates on health information technology and health informatics, and is a valuable source of information for those interested in this field.

Overall, these and other sources of information are valuable for researchers and practitioners in biomedical and health informatics, and provide a wealth of information on the latest developments and trends in this field.
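For readers who want to script searches of PubMed rather than use the web interface, here is a minimal sketch using NCBI's documented E-utilities API (the `esearch.fcgi` endpoint and its `db`, `term`, `retmax`, and `retmode` parameters are real; the query term is just an illustration, not from the post):

```python
from urllib.parse import urlencode

# NCBI E-utilities base URL for programmatic PubMed access.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(term, retmax=20):
    """Build an esearch URL that returns matching PubMed IDs as JSON."""
    params = urlencode({"db": "pubmed", "term": term,
                        "retmax": retmax, "retmode": "json"})
    return f"{EUTILS}/esearch.fcgi?{params}"

url = esearch_url('"medical informatics"[MeSH Terms]', retmax=5)
print(url)

# Fetching the URL (network required) returns JSON whose
# "esearchresult" object contains an "idlist" of PubMed IDs:
# import json
# from urllib.request import urlopen
# ids = json.load(urlopen(url))["esearchresult"]["idlist"]
```

The same pattern extends to the other E-utilities (`efetch`, `esummary`) for retrieving the records behind the IDs.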

What are the contributions of William Hersh to informatics?

William Hersh is a well-known figure in the field of informatics. He is a professor and the chair of the Department of Medical Informatics & Clinical Epidemiology at the Oregon Health & Science University School of Medicine.

Some of Hersh's notable contributions to the field of informatics include his work on the development and implementation of electronic health records (EHRs), the use of natural language processing for clinical text analysis, and the integration of clinical research data with EHRs. He has also contributed to the development of informatics curricula and has written extensively on the subject.

Hersh is the author of several books on informatics, including "Introduction to Health Informatics" and "Essentials of Health Information Systems and Technology". He is also the editor-in-chief of the Journal of the American Medical Informatics Association and the Journal of Medical Internet Research.

Overall, Hersh has made many important contributions to the field of informatics, including his work on the development and implementation of EHRs, the use of natural language processing for clinical text analysis, and the integration of clinical research data with EHRs.

Wednesday, October 12, 2022

The Most Profound Event in My Informatics Career? Unquestionably the HITECH Act

I recently took part in a small workshop exploring the benefits and challenges for artificial intelligence in medicine. Many of the participants were innovative medical educators, and most of them were still practicing clinical medicine. Although the workshop covered a broad range of topics, one theme stood out to me, perhaps because of my role as an academic informatician. In a nutshell, when we arrived at the topic of implementing AI systems in clinical practice, many laments were expressed about their impact on clinicians and ensuring that the use of AI "cannot be as awful as the EHR."

As one who has been working in biomedical and health informatics for over three decades, this discussion about impact on practice and the experience of the electronic health record (EHR) drove home that unquestionably the most profound event in informatics during my entire career has been the Health Information Technology for Economic and Clinical Health (HITECH) Act. Our field changed substantially with the passage of the HITECH Act as part of the American Recovery and Reinvestment Act (ARRA) in 2009, leaving our present world substantially different from the one before it. I also find that many who lament the negative impacts of the EHR on healthcare are not aware of the history of HITECH.

Those of us who have been around long enough can remember informatics before HITECH. While EHRs had been adopted by many healthcare organizations, including my own, overall use was modest. We were also encouraged, especially in the decade of the 2000s before HITECH, by the growing amount of research published showing that the EHR and associated clinical decision support functionality led to error reduction, cost savings, and improved outcomes. Indeed, a systematic review in 2006 summarized the mostly positive studies, although noting that they came largely from "health IT exemplar" institutions [1]. An updated review in 2009 found similar results [2]. My own view at the time was captured in my participation on a panel on the National Public Radio (NPR) show, Talk of the Nation - Science Friday, in 2005.

Another commentary published in 2010 noted that four "tribes" of healthcare improvement - focused on quality improvement, payment reform, consumer engagement, and health IT - were coalescing in the run-up to health care reform under President Barack Obama, for which HITECH was described by some as a "down payment" [3]. And the pre-HITECH enthusiasm all reached a boil when the ARRA legislation included the HITECH Act and its $30+ billion for incentives for EHR adoption [4].

Here is how I concluded a post in this blog from early 2010 highlighting the details of the HITECH Act:

This is a defining moment for the informatics field. Never before has such money and attention been lavished on it. HITECH provides a clear challenge for the field to 'get it right.' It will be interesting to look back on this time in the years ahead and see what worked and did not work. Whatever does happen, it is clear that informatics lives in a HITECH world now.

To many in informatics, the HITECH Act may now be a distant event, and some younger members of the field may not even have started working in it yet. But clearly the informatics world is much different now. Informatics is certainly more prominent in operational healthcare settings. Ever increasing numbers of healthcare organizations have anointed clinical leadership devoted to informatics [5].

In retrospect, the widespread EHR adoption driven by HITECH was carried out too hurriedly, and glossed over some major problems, not the least of which was inadequate adherence to data standards and interoperability. Indeed, it can be said that the health IT components of the 21st Century Cures Act, passed with a bipartisan majority in 2016, were some necessary corrections to the problems emanating from the HITECH Act [6]. The Cures Rule enshrining SMART on FHIR for interoperability and information blocking prohibitions to facilitate movement of data can be viewed from that perspective.
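To make the interoperability piece concrete: FHIR exposes clinical data through a standard REST API, with search results returned as a Bundle resource. Here is a minimal sketch, assuming a hypothetical FHIR R4 server at `fhir.example.org` (the URL and the sample data are illustrative, not from any real system):

```python
import json
from urllib.parse import urlencode

def fhir_search_url(base, resource, **params):
    """Build a FHIR REST search URL, e.g. [base]/Patient?name=smith."""
    return f"{base}/{resource}?{urlencode(params)}"

# A FHIR server returns search results as a Bundle resource;
# this tiny inline sample mirrors the R4 Bundle shape.
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Patient",
                  "name": [{"family": "Smith", "given": ["Jan"]}]}}
  ]
}
""")

def patient_families(bundle):
    """Extract family names from Patient resources in a Bundle."""
    out = []
    for entry in bundle.get("entry", []):
        res = entry["resource"]
        if res["resourceType"] == "Patient":
            out.append(res["name"][0]["family"])
    return out

print(fhir_search_url("https://fhir.example.org/r4", "Patient", name="smith"))
print(patient_families(sample_bundle))  # ['Smith']
```

SMART on FHIR layers OAuth2-based authorization on top of this same REST interface, which is what allows third-party apps to plug into EHRs in a standard way.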

Whether deserved or not, the informatics community shares in the blame for the fact that EHRs slow clinicians down [7, 8], fill the record with questionable and duplicative information [9], and contribute to clinician burnout [10, 11]. The informatics field should accept part of the blame but must also be part of the solution. Actions such as the AMIA-led 25x5 initiative to reduce the burden of EHR documentation to 25% of its current level within five years are important. I also believe there are technologies that will improve the capture of data and clinical thought processes into the EHR, such as voice recognition and reworking of the user interface to facilitate care rather than just documenting it.

Clearly the HITECH Act was the most profound event in our field in modern times. The informatics world before and after HITECH is very different, for better or worse. Ever the informatics optimist, I see great opportunity to improve the use of the EHR and other systems to deliver modern healthcare. I am confident we will rise to the challenge.


1. Chaudhry, B., Wang, J., Wu, S., Maglione, M., Mojica, W., Roth, E., Morton, S.C., Shekelle, P.G., 2006. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 144, 742–752.

2. Goldzweig, C.L., Towfigh, A., Maglione, M., Shekelle, P.G., 2009. Costs and benefits of health information technology: new trends from the literature. Health Aff (Millwood) 28, w282-293.

3. McKethan, A., Brammer, C., 2010. Uniting the tribes of health system improvement. Am J Manag Care 16, SP13-18.

4. Blumenthal, D., 2011. Implementation of the federal health information technology initiative. N Engl J Med 365, 2426–2431.

5. Kannry, J., Sengstack, P., Thyvalikakath, T.P., Poikonen, J., Middleton, B., Payne, T., Lehmann, C.U., 2016. The Chief Clinical Informatics Officer (CCIO): AMIA Task Force Report on CCIO Knowledge, Education, and Skillset Requirements. Appl Clin Inform 7, 143–176.

6. Kesselheim, A.S., Avorn, J., 2017. New “21st Century Cures” Legislation: Speed and Ease vs Science. JAMA 317, 581–582.

7. Tutty, M.A., Carlasare, L.E., Lloyd, S., Sinsky, C.A., 2019. The complex case of EHRs: examining the factors impacting the EHR user experience. J Am Med Inform Assoc 26, 673–677.

8. Zheng, K., Ratwani, R.M., Adler-Milstein, J., 2020. Studying Workflow and Workarounds in Electronic Health Record-Supported Work to Improve Health System Performance. Ann Intern Med 172, S116–S122.

9. Steinkamp, J., Kantrowitz, J.J., Airan-Javia, S., 2022. Prevalence and Sources of Duplicate Information in the Electronic Medical Record. JAMA Netw Open 5, e2233348.

10. Gardner, R.L., Cooper, E., Haskell, J., Harris, D.A., Poplau, S., Kroth, P.J., Linzer, M., 2019. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc 26, 106–114.

11. Murthy, V., 2022. Health Worker Burnout — Current Priorities of the U.S. Surgeon General. https://www.hhs.gov/surgeongeneral/priorities/health-worker-burnout

Sunday, September 18, 2022

A New Pathway For My Career in Biomedical and Health Informatics

This year, 2022, marks a quarter-century of my leadership of academic informatics at Oregon Health & Science University (OHSU). I have served for 19 years as the one and only Chair of the Department of Medical Informatics & Clinical Epidemiology (DMICE) in the OHSU School of Medicine. I served six years before that as Head of its predecessor free-standing Division of Medical Informatics & Outcomes Research, which was established in 1997.

I now plan to step down as Chair of DMICE while continuing at OHSU as a Professor in DMICE, focusing my activities on research, education, writing, and mentoring. I will support the new Chair and prioritize the continued success of the department.

I am pleased to hand off the department at a time when it is doing well academically and financially. The faculty are productive and well-funded in their research, and our educational programs continue to attract strong enrollment and prepare students for diverse careers in our field.

My own research and educational activities are also productive and well-funded. I plan to continue my research applying information retrieval (IR, also known as search) methods to tasks such as patient cohort discovery through data from the electronic health record (EHR), for which I am funded with an R01 grant from the National Library of Medicine. I also intend to continue working on the evaluation of machine learning for clinical applications, mainly focused on diagnosis of rare diseases through EHR data.

I will also continue teaching both in the OHSU Biomedical Informatics Graduate Program as well as in external collaborations, such as with the American Medical Informatics Association (AMIA) in the 10x10 ("ten by ten") program and the Clinical Informatics Board Review Course (CIBRC). I will also continue existing and new National Institutes of Health (NIH)-funded educational activities, including our recently renewed NLM T15 Training Grant, a new NLM R25 establishing a new college summer internship for students historically underrepresented in our field, the Data Science Initiative for Africa in collaboration with the University of Cape Town, and the new Bridge2AI Program.

Saturday, August 13, 2022

Arrival of 8th Edition of Health Informatics: Practical Guide

I am pleased to announce publication of the 8th Edition of Health Informatics: Practical Guide. The goal of this textbook is to provide a comprehensive introduction to the field of health informatics (also known as biomedical and health informatics or just plain informatics) for students. The book can also serve as a current update for those who are already practitioners. As in any professional field, there is a knowledge base and history of scientific progress in informatics that informs current practice, and this book aims to impart that.

The impetus for this book came from Robert Hoyt, MD, who served as its lead Editor for the first seven editions. In preparing the seventh edition, Dr. Hoyt engaged me as a Co-Editor, with the plan for me to become the Editor of the 8th edition. Dr. Hoyt and his wife Ann Yoshihashi, MD have been great mentors in the process of putting a book like this together. As with the previous editions, the book is available in print and eBook formats.

This book is aimed at those who wish to learn about the field of biomedical and health informatics. The content is almost completely rewritten from previous editions, reflecting the profound change and progress in the field, and roughly follows a course I have taught for three decades to about 5000 students in a graduate course at Oregon Health & Science University, the American Medical Informatics Association 10x10 ("ten by ten") program, and a number of other derivative courses for medical students, graduate students, and health IT professionals. I have engaged a variety of other authors for many of the chapters to add expertise and perspective.

I have established a Web site for the book, which has links to purchasing the book in print and eBook formats from the publisher, Lulu.com. The book is also available from Amazon.com in print and Kindle formats. The Web site also contains sample chapters in PDF and EPUB formats, and will maintain an Errata page of errors discovered in the book.

The chapters of the book include:

  1. Introduction to Biomedical and Health Informatics
  2. A Short History of Biomedical and Health Informatics
  3. Computing Concepts for Biomedicine and Health
  4. Electronic Health Records
  5. Standards and Interoperability
  6. Data Science and Artificial Intelligence
  7. Clinical Decision Support
  8. Natural Language Processing
  9. The Role of Health Informatics in Safety, Quality, and Value
  10. Health Information Exchange (HIE)
  11. EHR System Selection and Implementation
  12. Telemedicine and Telehealth
  13. Privacy and Security
  14. Information Retrieval (Search)
  15. Clinical Research Informatics
  16. Translational Bioinformatics
  17. Nursing Informatics
  18. Consumer Health Informatics
  19. Public Health Informatics
  20. Evidence-Based Medicine
  21. Imaging Informatics
  22. Ethical Issues in Health Informatics
  23. Human-Computer Interaction