Wednesday, February 7, 2024

Translational AI: A Necessity and Opportunity for Biomedical Informatics and Data Science

How much of the hype around artificial intelligence (AI) will truly impact health, healthcare, and research is unknown. The potential benefits are unequivocal, from assisting patients in pursuing actions to improve their health, to giving guidance to clinicians in diagnosis and treatment, to helping researchers find information and devise new ideas to advance their research.

I have published an invited post in the National Library of Medicine (NLM) Musings from the Mezzanine Blog, the blog of the Director of the NLM. I chose to update some of my past writings posted in this blog with a new discussion of what I call translational AI.

The tl;dr is:

  • The actual day-to-day use of clinical AI in healthcare is still modest, according to surveys.
  • While thousands of machine learning model papers have been published, along with many systematic reviews of those papers, there is a much smaller number, probably on the order of 100, of randomized controlled trials (RCTs) of AI interventions in healthcare.
  • Of those RCTs, not all have resulted in positive outcomes and a number of them have risk of bias concerns.

Clearly, as in all of healthcare, we cannot do RCTs on every permutation of model, implementation, setting, etc. of AI. However, we must treat AI the same way as any other tool we use in healthcare: Show us the evidence. Granted, evaluating the use of AI has plenty of differences from evaluating other interventions used in patient care, such as drugs and devices. It is difficult to conjure a “placebo” for AI, and hard to perform controlled studies when AI, such as ChatGPT, is all around us.

Nonetheless, we can apply evidence-based medicine (EBM) to help inform its clinical use. The best way to do that is through randomized controlled trials (RCTs), or better still, systematic reviews of RCTs. As I note in the post, this is imperative not only for those of us who promote the use of AI and other biomedical and health informatics interventions, but also for students and trainees looking for projects to develop impactful research programs in their careers.

Tuesday, January 30, 2024

Whither Search? A New Perspective on the Impact of Generative AI on Information Retrieval (IR)

When I was putting the finishing touches on the 4th edition of my textbook on information retrieval (IR, also known as search) in the domain of biomedicine and health in 2020, I wondered whether the major problems in the field of IR were mostly solved. Retrieval systems such as Google for general Web searching and PubMed for the biomedical literature were robust and mature. One literally had the world’s written knowledge at one’s fingertips for general and biomedical topics from these systems, respectively (even if paywalls did not always allow immediate access to the content).

There were certainly some areas of IR where additional work was needed and important, e.g., search over specific types of content such as social media or, in the case of my own research, electronic health record (EHR) data and text. There were also some nascent advances in the application of machine learning, although the gains in experimental results were more incremental than transformative.

But any staidness of IR was upended by the emergence of generally available generative artificial intelligence (AI) chatbots, based on large language models (LLMs), initially with ChatGPT and soon others to follow. Shortly thereafter came generative AI capabilities added to the two major Web search engines, Microsoft Bing and Google. All of a sudden, searching the Web was transformed in ways that most of us did not see coming.

I recently took advantage of the call for papers for a special issue devoted to ChatGPT and LLMs in biomedicine and health of the flagship journal for the field of informatics, JAMIA, to write a perspective piece on why search is still important, even in the era of generative AI. At least for me, while the answer to my question is important in a search, it is also critical to know where the information came from. In addition, as I am commonly synthesizing my own knowledge and views on a topic, I do not just want a single generative AI answer to my question but rather the source articles and documents so I can compare and contrast different views and develop my own answer.

At the close of the paper, I do acknowledge that there may well be areas of IR where generative AI may have major impact going forward. I know that there is a lot of buzz around retrieval-augmented generation (RAG), although for many of the questions on which I search, I am much more interested in generation-augmented retrieval (GAR?). That is, how can generative AI methods improve the way we search to steer us to the kinds of authoritative, originally sourced information we seek to carry out our work?
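The contrast between RAG and what I am calling GAR can be made concrete in a small sketch. Below is a minimal, hypothetical illustration (all function names, the synonym table, and the toy documents are my own inventions, and the hand-built synonym table merely stands in for what an actual LLM would generate): the generative step expands the user's query, but the output of the system remains a ranked list of source documents, not a synthesized answer.

```python
# Sketch of generation-augmented retrieval (GAR): a generative model
# proposes expansions of the user's query, and a conventional keyword
# search then returns the authoritative source documents themselves.
from collections import Counter

def generate_expansions(query):
    # A real system would ask an LLM for related terms; this tiny
    # hand-built synonym table simulates that generative step.
    synonyms = {
        "heart attack": ["myocardial infarction"],
        "kidney failure": ["renal failure"],
    }
    return [query] + synonyms.get(query.lower(), [])

def retrieve(expanded_terms, documents):
    # Score each document by how many expanded terms it contains,
    # then return the ranked source documents (not a generated answer).
    scores = Counter()
    for doc_id, text in documents.items():
        for term in expanded_terms:
            if term.lower() in text.lower():
                scores[doc_id] += 1
    return [doc_id for doc_id, _ in scores.most_common()]

documents = {
    "doc1": "Management of acute myocardial infarction in the emergency department.",
    "doc2": "Dietary advice for general cardiovascular health.",
    "doc3": "Heart attack symptoms and when to seek urgent care.",
}

results = retrieve(generate_expansions("heart attack"), documents)
print(results)  # doc1 and doc3 match expanded terms; doc2 matches none
```

Note that without the generative expansion, a plain keyword search for "heart attack" would miss the document that uses only the term "myocardial infarction"; the generation improves the retrieval while still steering the user to original sources.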

The day before the article was published, a reporter who came across my preprint wrote a piece on the impact of AI on search, noting some of the issues I raise with regards to accuracy and authority for search in fields like medicine and in academia.

The paper itself has been published in JAMIA as an Advance Article, Hersh W, Search still matters: information retrieval in the era of generative AI, Journal of the American Medical Informatics Association, 2024, ocae014. Unfortunately, the open-access publishing fee for JAMIA is fairly steep ($4125), especially for a short perspective piece like this, but those wanting to read it can access the preprint that I posted.

Sunday, December 31, 2023

Annual Reflections at the End of 2023

Keeping with my tradition for this blog, I am ending this calendar year with my retrospective on the past year and some musings looking forward. The number of postings I make to this blog continues to decline each year. It is not that I do not have a lot to say, but rather I do not seem to have as much time to say it.

One thought has been to move to a different medium, such as a podcast. While I do listen to a fair number of podcasts, I myself still seem to prefer my words to be written. If nothing else, text seems more accessible. Another option would be to switch to something like a Substack newsletter, and I subscribe to several of those, but I have no interest in monetizing this blog, so Blogspot still seems like the easiest way to go.

In any case, the year 2023 was a transitional year for me. After serving as inaugural leader of both the Oregon Health & Science University (OHSU) Department of Medical Informatics & Clinical Epidemiology (DMICE) and the OHSU Biomedical Informatics Graduate Program for a couple decades, I decided to step back from both leadership roles. With both entities in good shape, but needing some reinvigoration, not to mention institutional reinvestment, the time seemed right to step back. There is good interim leadership, and my hope is that OHSU will choose to recruit strong permanent leadership in the year ahead.

Despite stepping back from leadership, I am still busy and gratified with my work. I have had more time to devote to research and teaching. My research program in information retrieval (IR, also known as search) continues to be productive and well-funded, with several grants that will last through at least the next few years. Likewise, my teaching activity, mainly centered around my introductory course in biomedical and health informatics that is offered as a graduate course, continuing education course (the famous "10x10"), and as a medical student course, also keeps me busy.

A former Dean-then-President of OHSU reminded me years ago that when you step above the department chair role in academic medicine, you mostly leave behind your academic work. It became clear to me this year that my passion and enthusiasm lie more in the intellectual work of informatics and less in building new programs. I did plenty in the latter over the years but now hope to focus the rest of my career on the intellectual work of the field.

And there is much new intellectual work in informatics and related areas to keep us busy. Another big transition for 2023 was the emergence of generative artificial intelligence (AI). Although generative AI had been around for a half-decade or so, a real turning point was the November 30, 2022 release of ChatGPT. There is clearly as much hype as there is reality, but there is no question its impact in the long run will be profound.

Generative AI impacts me in many ways. Clearly in education, it has caused teachers at all levels to reassess assignments and assessments of students. Almost everything in the assignments that I give students can be aided with ChatGPT. I will be writing more about this in the months ahead, but I agree with those who advocate we incorporate generative AI and not fight it or pretend it does not exist. It will likely require a combination of policy and students adhering to honor codes about how they do their work.

My research in the area of IR is also impacted by generative AI. With the 2020 publication of the fourth edition of my textbook in the field, I was beginning to wonder how much more research IR needed. Clearly search is ubiquitous and a vital part of any intellectual work. While generative AI has large impacts on how we search for information, especially answers to questions, I note in a paper, currently in preprint form and hopefully accepted for publication soon, that there is still a role for seeking sources of knowledge and knowing the authority of those who produce those sources.

One subject I wrote a great deal about in recent years but that has faded some from the forefront is COVID-19. This virus is still with us and is likely to continue to be so for much time to come. And the other impacts caused by the pandemic, namely the continuation of mostly remote work, may never revert to how things were before the pandemic. I am ambivalent about work that is mostly remote, even though I have been working that way for many years, especially with my travel. But I do appreciate the convenience of my morning commute being a walk down the stairs to my basement home office, although I also miss being around other people when working. Hopefully we will find a better balance in the future.

Heading into 2024, I do look forward to continuing this blog, and hopefully writing more. I also look forward to advances in the field of biomedical and health informatics and continuing to be a part of them.

Friday, September 15, 2023

Physician and Medical Student Competence in AI Must Include Broader Competence in Clinical Informatics

A number of interesting papers on physician competencies in artificial intelligence (AI) have been published recently, and there is no question that all 21st century healthcare professions must have a thorough understanding of the benefits and limitations of AI that they are likely to use in their clinical work.(1-5)

One of my concerns, however, is that the focus on AI and not the larger issues of clinical informatics risks undermining not only a full understanding of the impact of AI, but also most of the other informatics-related knowledge and skills that are important to clinical practice. These include skills in using the electronic health record (EHR); accessing clinical knowledge using search systems; being facile with clinical decision support and health information exchange; protecting privacy and security; engaging patients, their data, and their devices; and applying data in tasks such as population health, public health, and clinical and translational research. At a minimum, these competencies provide a foundation of applying data, information, and knowledge to improve human health and healthcare delivery, but they also inform the application of AI in biomedicine and health.

About a decade ago, some colleagues and I published a paper outlining what we believed were the required competencies in clinical informatics in 21st century practice.(6) These competencies were then used to develop a curriculum in clinical informatics for our medical students.(7) While AI is now a prominent part of biomedicine and health, and a good deal more in society, the initial competencies have, in my opinion, stood the test of time.

There were originally 13 competencies in the list. In 2020, it became apparent that we needed to add an additional competency in machine learning, and in a textbook chapter (8) and blog post, we added that as a 14th competency. Now of course, it is probably better to use AI explicitly in that competency. As such, I present a new version of the list of competencies in clinical informatics for medical education, which of course applies to all health professions students and practitioners. (Textual version in the Appendix below.)

Competencies

References

  1. Ötleş, E., James, C.A., Lomis, K.D., Woolliscroft, J.O., 2022. Teaching artificial intelligence as a fundamental toolset of medicine. Cell Rep Med 3, 100824.
  2. James, C.A., Wachter, R.M., Woolliscroft, J.O., 2022. Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence. JAMA 327, 1333–1334.
  3. Russell, R.G., Lovett Novak, L., Patel, M., Garvey, K.V., Craig, K.J.T., Jackson, G.P., Moore, D., Miller, B.M., 2023. Competencies for the Use of Artificial Intelligence-Based Tools by Health Care Professionals. Acad Med 98, 348–356.
  4. Liaw, W., Kueper, J.K., Lin, S., Bazemore, A., Kakadiaris, I., 2022. Competencies for the Use of Artificial Intelligence in Primary Care. Ann Fam Med 20, 559–563.
  5. Goodman, K.E., Rodman, A.M., Morgan, D.J., 2023. Preparing Physicians for the Clinical Algorithm Era. N Engl J Med.
  6. Hersh, W.R., Gorman, P.N., Biagioli, F.E., Mohan, V., Gold, J.A., Mejicano, G.C., 2014. Beyond information retrieval and electronic health record use: competencies in clinical informatics for medical education. Adv Med Educ Pract 5, 205–212.
  7. Hersh, W., Biagioli, F., Scholl, G., Gold, J., Mohan, V., Kassakian, S., Kerns, S., Gorman, P., 2017. From Competencies to Competence: Model, Approach, and Lessons Learned from Implementing a Clinical Informatics Curriculum for Medical Students, in: Health Professionals’ Education in the Age of Clinical Information Systems, Mobile Computing and Social Networks. Elsevier, pp. 269–287.
  8. Hersh, W., Ehrenfeld, J., 2020. Clinical Informatics, in: Health Systems Science, 2nd Edition. pp. 156–170.
Appendix - Competencies in Clinical Informatics for Health Professions Education (textual form)
  1. Find, search, and apply knowledge-based information to patient care and other clinical tasks
  2. Effectively read from, and write to, the electronic health record (EHR) for patient care and other clinical activities
  3. Use and guide implementation of clinical decision support (CDS)
  4. Provide care using population health management approaches
  5. Protect patient privacy and security
  6. Use information technology to improve patient safety
  7. Engage in quality measurement selection and improvement
  8. Use health information exchange (HIE) to identify and access patient information across clinical settings
  9. Engage patients to improve their health and care delivery through personal health records and patient portals
  10. Maintain professionalism in use of information technology tools, including social media
  11. Provide clinical care via telemedicine and refer patients as indicated
  12. Apply personalized/precision medicine
  13. Participate in practice-based clinical and translational research
  14. Use and critique artificial intelligence (AI) applications in clinical care

Wednesday, September 6, 2023

More Evidence That We Need More Evidence for AI Interventions

In a previous post, I related the case of an excellent model that predicted hospital readmission yet, when used in the context of a real-world effort to reduce readmissions, was not able to lower the rate.

Some new studies again highlight this scenario of excellent models and systems that, when studied, do not show real-world benefit. A couple of papers in Annals of Internal Medicine find a similar scenario for one of the earliest uses of artificial intelligence (AI) to demonstrate success, computer-aided detection (CADe) of polyps during colonoscopy. A systematic review of previous clinical trials found that while there was an increase in detection of pre-cancerous adenomas, there was no increase in detection of advanced adenomas, and there were higher rates of unnecessary removal of non-neoplastic polyps.[1]

The journal also featured a new randomized controlled trial (RCT) that showed no significant difference in advanced colorectal neoplasia detection rate (34.8% with intervention vs. 34.6% for controls) or mean number of advanced colorectal neoplasias detected per colonoscopy.[2]

An accompanying editorial notes the challenges of implementing AI in the real world, which may impact RCT results, but argues that we must build the evidence base to support its use.[3]

On a different clinical topic, predicting future trajectories in estimated glomerular filtration rate (eGFR) in adults with type 2 diabetes and chronic kidney disease, a new study in JAMA Network Open found that the new model excels over previous models, estimating risk more accurately earlier in the disease course.[4] However, an accompanying editorial notes that while this model provides more accuracy, the benefit to those in this phase of the disease might be outweighed by "inappropriate avoidance of intravenous contrast, patient anxiety, and unnecessary testing with its associated costs."[5] What is really needed, the author notes, are clinical trials to validate use of the model.

The research into these clinical applications of AI is important, and we must carry out the "basic science" research on them. But then we must move on to the next step of clinical application and studies that evaluate such systems in clinical trials or other appropriate evaluation methods.

References

1. Hassan, C., Spadaccini, M., Mori, Y., Foroutan, F., Facciorusso, A., Gkolfakis, P., Tziatzios, G., Triantafyllou, K., Antonelli, G., Khalaf, K., Rizkala, T., Vandvik, P.O., Fugazza, A., Rondonotti, E., Glissen-Brown, J.R., Kamba, S., Maida, M., Correale, L., Bhandari, P., Jover, R., Sharma, P., Rex, D.K., Repici, A., 2023. Real-Time Computer-Aided Detection of Colorectal Neoplasia During Colonoscopy : A Systematic Review and Meta-analysis. Ann Intern Med. https://doi.org/10.7326/M22-3678

2. Mangas-Sanjuan, C., de-Castro, L., Cubiella, J., Díez-Redondo, P., Suárez, A., Pellisé, M., Fernández, N., Zarraquiños, S., Núñez-Rodríguez, H., Álvarez-García, V., Ortiz, O., Sala-Miquel, N., Zapater, P., Jover, R., CADILLAC study investigators*, 2023. Role of Artificial Intelligence in Colonoscopy Detection of Advanced Neoplasias : A Randomized Trial. Ann Intern Med. https://doi.org/10.7326/M22-2619

3. Shung, D.L., 2023. From Tool to Team Member: A Second Set of Eyes for Polyp Detection. Ann Intern Med. https://doi.org/10.7326/M23-2022

4. Gregorich, M., Kammer, M., Heinzel, A., Böger, C., Eckardt, K.-U., Heerspink, H.L., Jung, B., Mayer, G., Meiselbach, H., Schmid, M., Schultheiss, U.T., Heinze, G., Oberbauer, R., BEAt-DKD Consortium, 2023. Development and Validation of a Prediction Model for Future Estimated Glomerular Filtration Rate in People With Type 2 Diabetes and Chronic Kidney Disease. JAMA Netw Open 6, e231870. https://doi.org/10.1001/jamanetworkopen.2023.1870

5. Sanghavi, S.F., 2023. Modeling Future Estimated Glomerular Filtration Rate in Patients With Diabetes-Are There Risks to Early Risk Stratification? JAMA Netw Open 6, e238652. https://doi.org/10.1001/jamanetworkopen.2023.8652

Monday, July 17, 2023

Lessons About Clinical AI From a Randomized Controlled Trial on Hospital Readmissions

Our news and science feeds are so filled these days with everything artificial intelligence (AI), from large language models to their impacts on society, that we may miss important studies on other informatics topics, some of which may have lessons for AI. This is the case for a recent randomized controlled trial (RCT) on a hospital readmissions initiative (Donzé, 2023) and an accompanying editorial putting it in larger perspective.(Wachter, 2023)

Some may remember about a decade ago, when “data analytics” was the rage, and health insurance payors were noting with alarm the growing rate of hospital readmissions. The cost and frequency of readmissions was highlighted in a study finding as many as 20% of hospital admissions were readmissions within a month of previous discharge.(Jencks, 2009) Before this, several hospital discharge planning programs had been studied and found to reduce readmissions.(Naylor, 1994; Coleman, 2006; Jack, 2009) This situation led the US Centers for Medicare and Medicaid Services (CMS) to implement the Hospital Readmissions Reduction Program (HRRP) as part of the Affordable Care Act. Starting in 2012, the HRRP required public reporting of readmission rates for three common diseases: myocardial infarction, heart failure, and pneumonia, with hospitals having unusually high rates of readmissions being penalized.

Around the time that the HRRP was implemented, the Health Information Technology for Economic and Clinical Health (HITECH) Act was incentivizing the adoption of the electronic health record (EHR). This provided unprecedented new sources of data, and every predictive analyst set out to find models that used EHR data to predict patients most likely to be readmitted, with the goal of identifying those who could presumably be followed more closely and have readmissions averted. Numerous studies were published using models based on EHR data to predict patients at risk for readmission.(Amarasingham, 2010; Donzé, 2013; Gildersleeve, 2013; Shadmi, 2015)

Despite the plethora of predictive models, few interventions have actually been undertaken that demonstrate improved outcomes of care. One study found that the use of a readmission risk tool intervention reduced risk of readmission for patients with congestive heart failure but not for those with acute myocardial infarction or pneumonia.(Amarasingham, 2013) Another observational study found that readmissions did decline initially with the implementation of the new rule, and for the targeted diseases, but less so for other conditions.(Zuckerman, 2016) Others have noted that the program has had marginal benefit (Ody, 2019) and may redirect resources that might be devoted to other quality improvement efforts.(Cram, 2022)

Earlier this year, an RCT was published that assessed a multimodal care intervention aimed at reducing readmissions (Donzé, 2023). Carried out in four medium-to-large teaching hospitals in Switzerland, the study implemented the best-known predictive model for risk of readmission yet found no benefit for an intervention that included it. As noted in the accompanying editorial, just because we can predict something does not mean we can necessarily do something about it.(Wachter, 2023)

Why is this RCT pertinent to AI? Mainly because just being able to predict diagnoses or outcomes is not enough. I have written about this myself in this blog over the years. Whether we are talking about predictive analytics, next-generation data science, or AI, no matter how sophisticated our models or compelling our predictive abilities, we must demonstrate how these systems impact outcomes, whether improved patient health or healthcare system processes.

How do we demonstrate the value of AI in health and healthcare? First, we must implement these systems in the real world. There is a great deal being written about the promise and challenges of implementing AI in clinical settings.(Hightower, 2023) But even implementing AI in the real world is not enough. We must also demonstrate that AI can lead to better outcomes, whether improved health or treatment of disease of patients or better delivery of healthcare services. One way to think about this is the continuum of translational research. As with all biomedical advances, we start with the basic science, demonstrating value in the laboratory, which in this case is the use of curated data sets. The next step is to implement systems in real-world healthcare or community settings. Clearly these are complex interventions.

Ultimately, however, we must demonstrate experimentally that health or healthcare is improved by the AI intervention. The best experimental evidence comes from controlled experiments, ideally RCTs. And granted, such trials may be more complicated than the classic RCT of comparing a medication versus a placebo. These RCTs may involve complex designs, and results may be difficult to interpret if the trial does not show benefit. But building the evidence base for AI is essential, and studies like this from Donzé et al. demonstrate that even the best predictive models may not translate into better outcomes.

References

Amarasingham, R., Moore, B.J., Tabak, Y.P., Drazner, M.H., Clark, C.A., Zhang, S., Reed, W.G., Swanson, T.S., Ma, Y., Halm, E.A., 2010. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care 48, 981–988.

Amarasingham, R., Patel, P.C., Toto, K., Nelson, L.L., Swanson, T.S., Moore, B.J., Xie, B., Zhang, S., Alvarez, K.S., Ma, Y., Drazner, M.H., Kollipara, U., Halm, E.A., 2013. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf 22, 998–1005.

Coleman, E.A., Parry, C., Chalmers, S., Min, S.-J., 2006. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med 166, 1822–1828.

Cram, P., Wachter, R.M., Landon, B.E., 2022. Readmission Reduction as a Hospital Quality Measure: Time to Move on to More Pressing Concerns? JAMA 328, 1589–1590.

Donzé, J., Aujesky, D., Williams, D., Schnipper, J.L., 2013. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med 173, 632–638.

Donzé, J., John, G., Genné, D., Mancinetti, M., Gouveia, A., Méan, M., Bütikofer, L., Aujesky, D., Schnipper, J., 2023. Effects of a Multimodal Transitional Care Intervention in Patients at High Risk of Readmission: The TARGET-READ Randomized Clinical Trial. JAMA Intern Med 183, 658–668.

Gildersleeve, R., Cooper, P., 2013. Development of an automated, real time surveillance tool for predicting readmissions at a community hospital. Appl Clin Inform 4, 153–169.

Hightower, M., 2023. Is Medicine Ready for AI? N Engl J Med 388, e49.

Jack, B.W., Chetty, V.K., Anthony, D., Greenwald, J.L., Sanchez, G.M., Johnson, A.E., Forsythe, S.R., O’Donnell, J.K., Paasche-Orlow, M.K., Manasseh, C., Martin, S., Culpepper, L., 2009. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med 150, 178–187.

Jencks, S.F., Williams, M.V., Coleman, E.A., 2009. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med 360, 1418–1428.

Naylor, M., Brooten, D., Jones, R., Lavizzo-Mourey, R., Mezey, M., Pauly, M., 1994. Comprehensive discharge planning for the hospitalized elderly. A randomized clinical trial. Ann Intern Med 120, 999–1006.

Ody, C., Msall, L., Dafny, L.S., Grabowski, D.C., Cutler, D.M., 2019. Decreases In Readmissions Credited To Medicare’s Program To Reduce Hospital Readmissions Have Been Overstated. Health Aff (Millwood) 38, 36–43.

Shadmi, E., Flaks-Manov, N., Hoshen, M., Goldman, O., Bitterman, H., Balicer, R.D., 2015. Predicting 30-day readmissions with preadmission electronic health record data. Med Care 53, 283–289.

Wachter, R.M., 2023. The Disappointing Impact of Interventions to Prevent Hospital Readmissions. JAMA Intern Med 183, 668–669.

Zuckerman, R.B., Sheingold, S.H., Orav, E.J., Ruhter, J., Epstein, A.M., 2016. Readmissions, Observation, and the Hospital Readmissions Reduction Program. N Engl J Med 374, 1543–1551.

Monday, June 12, 2023

Please Share Your Slides When You Start Your Talk

I know that some people lament death by Powerpoint, but I find a speaker's slides generally valuable as both a listener and a speaker. Yes, many people (including me!) could do better with slides, but I find them useful. As a speaker, they help me keep track of everything I want to say, and drive home some points with visuals or reference lists. As a listener, I can keep track of what is being talked about, pursue any references I might be interested in further, and view graphics that often bring home points that words (especially spoken-only ones) do not.

My general practice when giving a talk is to post a PDF of my slides to my Web site, often with a list of references cited in the talk, and then tweet out a link to the PDF. I also send the PDF to the sponsor of my talk for distribution to those attending. That way, those listening to me do not need to snap pictures with their phones or write things down when they want to remember something on one of my slides.

Based on my own practices, I find it frustrating when speakers do not share their slides. Sometimes they provide them later, which is still far from ideal, but some speakers never provide them at all. When this happens, you see many listeners in the room taking pictures of slides they want to remember.

It would be much easier if all speakers would share at least a PDF of their slides right at the start of their talks. There really cannot be anything so proprietary, especially in an academic talk, that it cannot be shared when those in the audience are snapping pictures and, as is sometimes the case, posting the photos to social media.

I hope that speakers I will be listening to in the future will adopt this advice. I certainly plan to keep my present practice in this regard.

Tuesday, June 6, 2023

Reports of My Retirement Are Greatly Exaggerated

As most of my world knows, I stepped down as Chair of the Department of Medical Informatics & Clinical Epidemiology (DMICE) at Oregon Health & Science University (OHSU) at the end of last year. In addition to announcements made by OHSU, I let my world know with postings in this blog, on social media, and via some email lists.

I was gratified by the many nice accolades that I received but one thing was unsettling. This was those who wrote to congratulate me on my retirement. It was unsettling because I wanted to make explicit that I was not retiring, and instead was passing on the leadership to others, initially to interim leaders from the department and eventually culminating in a search for a new permanent Chair. In the meantime, I would continue the teaching, research, writing, and mentoring activities that I enjoy so much.

Nearly a half-year into my new status, I am indeed enjoying it. I still wake up each work day and get to do the intellectual work of informatics that I love most. My circumstance brings to mind the famous quote attributed to Samuel Clemens (Mark Twain), though he probably never said it, that reports of his death were greatly exaggerated.

I am therefore pleased to state that reports of my retirement are indeed exaggerated. I hope to continue working and be productive in the field for many more years. In fact, my plate is as full as ever, and my main work going forward will continue on the teaching, research, and funded grant projects that currently keep me busy.

My "plate" includes, among other things, the following major activities:

As can be seen, I have plenty to keep me busy for some time to come. I plan to continue my work on all of the above while supporting the new leadership of the department as it evolves in novel and interesting directions.

Monday, May 22, 2023

It's All Informatics to Me: Data Analytics, Data Science, Digital Health, AI, and Whatever is Next

In the last decade or so, a number of new names have emerged for the kinds of activities historically studied, disseminated, and taught by those who work in biomedical and health informatics. Each of these areas has emerged as a "hot topic" in biomedicine and healthcare, with resulting academic funding opportunities, new courses or degree programs, and even academic centers or institutes bearing their names.

I suppose I have some skin in this game because I have spent my life's work among those developing the field that I consider to go by the over-arching term of this all, biomedical and health informatics. My concern for the new areas and their names has been their often ignoring the historical and current work of informatics, and with it the decades of research, development, implementation, and evaluation of methods and tools that aim to improve health and healthcare through the use of data and information.

Another ramification of this proliferation of terms is that those not directly working in the field, who may include the leaders of academic and other institutions, may not appreciate the historic role of informatics. Thus I write this post to provide my elaboration of the problem.

One early new term that started this trend about a decade ago was data analytics. As the growth of large amounts of digital data started with the growth of the Internet, the large-scale implementation of electronic health records, and the emergence of wearable and mobile devices, there were new sources of data that could be analyzed for interesting discoveries. From this started the proliferation of academic courses, certificate/degree programs, and centers/institutes devoted to healthcare (and related areas) data analytics.

With the proliferation of machine learning, deep learning, and other methods emerged the new discipline of data science. Again, this was followed by the proliferation of academic courses, certificate/degree programs, and centers/institutes focused on data science. One thoughtful perspective on the relationship between informatics and data science was a paper by Payne et al. in JAMIA Open in 2018, which showed there was not only substantial overlap but also aspects of informatics that go beyond data science. Informatics includes implementation, evaluation, human-computer interaction, and workflow, to name a few. To use the language of the paper, these were the "efferent" processes of applying knowledge discovered from the "afferent" processes of data science (and informatics) to the real world.

In the meantime, especially with the growth of telehealth during the COVID-19 pandemic, there emerged a new term, digital health. Similar to data science, there was substantial overlap between digital health and informatics, with that overlap focused, for many, on the efferent processes of Payne et al. However, for many, the work of digital health is really no different from what informatics has historically done and continues to do.

And now there is the new term on the block, which is actually not a new term at all but a re-emergence of an area that was the focus of informatics a half-century ago. This term is of course artificial intelligence (AI). Clearly the advances of modern AI are very real and important. The new AI will likely have profound impacts on biomedicine and health, although we also need to see real-world implementation and evaluation to know what truly works.

My goal here is not to discount any of the important advances that have emerged under the guise of data analytics, data science, digital health, or AI. But rather, to express my concern that plenty of people tout these new disciplines sometimes with ignorance of the historic role that informatics has played in them all. I understand that sometimes funders, institutional leaders, and others want to put their resources into things that are "new." But I also am disappointed when these new areas re-invent the methods and tools of informatics.

The best of informatics imposes a rigor of thinking that these newer disciplines do not always apply. Informatics is driven by problems in the domains of health, healthcare, public health, and clinical research. Informatics imposes health and clinical realism on systems as they are developed, implemented, and evaluated. I hope that these new areas, and whatever emerges next, will not ignore the lessons learned by informatics but will incorporate them into their work.

Tuesday, April 11, 2023

Learning Data Science and Machine Learning with Orange

I have written over the years that understanding data science (DS), machine learning (ML), artificial intelligence (AI), and related topics is essential for anyone who works in a profession where tools using them might be employed. This is certainly the case in biomedicine, both in clinical practice and research. In the same sense that all physicians should have basic knowledge of pharmacology, radiology, and other topics, even if they are not experts in the science and methods of those diverse disciplines, all in healthcare should be ready for ML and AI tools coming to their work.

Of course, learning to apply ML and AI has historically required a background in math and computer programming that most in healthcare do not have. Fortunately, a number of "no-code" DS and ML tools have emerged in recent years. In some ways they are analogous to statistical packages, which allow anyone who knows how to use them to load data and run statistical tests. Of course, that does not mean that the operator of statistical software will apply or interpret the results of their analyses correctly, which is especially important for large or complex data sets where appropriate statistical expertise is essential.

One no-code ML tool that I find particularly easy to understand and use is Orange Data Mining. Orange was developed at the University of Ljubljana in Slovenia and is an open-source project with a large community supporting it. It runs on Mac and Windows machines, for which binary packages can easily be downloaded and installed. Orange is programmed in Python and, as such, can be extended with Python code. It comes with sample data sets and allows easy loading of new ones. The original focus of Orange was structured data and the machine learning models that operate on it, including the main ML tasks of classification (e.g., logistic regression, kNN, and neural networks) and regression (e.g., linear regression, random forest, and neural networks). There are also methods for visualizing data and evaluating models; the latter include techniques such as splitting data into training and test sets and k-fold cross-validation, which are easy to configure. There are also various methods for imputation of missing data. In addition, modules have been developed for image analysis and text mining.

Orange is used by dragging widgets that represent data sets, visualization methods, models, and more onto a canvas called a workflow. Data flows between widgets once they are connected by simple dragging. Once one becomes facile in dragging and connecting widgets, workflows can be built up quickly and easily. They can also be saved and reloaded later.

One can start with one of the built-in data sets or use the File widget to load one's own data. Most common rectangular data formats are accepted, such as CSV, tab-delimited, and Excel files. Included among the sample data sets is one for heart disease. The data itself is old (Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J., Sandhu, S., Guppy, K., Lee, S., & Froelicher, V. International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology, 1989, 64, 304-310.) and small (303 records with 13 variables), but the data set offers a good way to get started with Orange using clinical data. The outcome variable (called the target variable in Orange) is coronary artery diameter narrowing and the predictor variables include gender, chest pain, serum cholesterol, systolic blood pressure, and more.

The screenshot below shows a workflow I built using the heart disease data set. As in most data science analyses, one can begin with methods to explore and visualize the data. I connected a number of widgets that allow the data to be viewed in a data table and visualized in the form of feature statistics (also shown below), distributions, and box plots. I also created some models using the Test and Score widget connected to a number of different models appropriate for classification, in this case logistic regression, naive Bayes, and a neural network. One can view the classification accuracy, F1 score, and other evaluation metrics (also shown below) and connect the results to widgets that compare ROC curves and generate confusion matrices for each model. (Perhaps not surprisingly, logistic regression works best!) Likewise, I created some models for regression with ST-segment elevation as the outcome variable, using linear regression, random forest, and a neural network, which can also be seen on the workflow.
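To make the measures reported by the Test and Score widget concrete, here is a small, library-free sketch of how classification accuracy, F1 score, and a confusion matrix are computed for a binary classifier; the label lists below are invented purely for illustration:

```python
# Toy computation of the evaluation measures Orange reports for a
# binary classifier, using made-up actual and predicted labels.

def confusion_matrix(actual, predicted):
    """Return (tp, fp, fn, tn) counts for binary labels: 1=positive, 0=negative."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fp, fn, tn

def scores(actual, predicted):
    """Classification accuracy and F1 derived from the confusion matrix."""
    tp, fp, fn, tn = confusion_matrix(actual, predicted)
    accuracy = (tp + tn) / len(actual)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

actual    = [1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical true outcomes
predicted = [1, 0, 0, 0, 1, 1, 1, 0]   # hypothetical model predictions
acc, f1 = scores(actual, predicted)
print(acc, f1)  # → 0.75 0.75
```

Tools like Orange compute exactly these quantities behind the scenes; seeing them spelled out helps in interpreting the widget output correctly.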

It is fairly easy to get started using Orange. A set of video tutorials on the Orange YouTube channel covers the basics as well as a number of advanced topics. My colleague Dr. Robert Hoyt and his collaborator, Dr. David Pashtikoff, have been developing workshops and other learning materials using Orange, available at a Web site they have developed called NoCode Data Science. They plan to publish a textbook, and setting up a login on the site allows access to a number of additional data sets that can be loaded into Orange.

Just as with statistical packages, just because one can load data into Orange and apply ML models does not mean that one is doing so correctly. On the other hand, my simple workflow just scratches the surface of what can be done with Orange, even for this tiny data set.

There are limitations of Orange as a data mining tool. I do not know how scalable it is as data sets get large (all data is kept in computer RAM), and of course any serious ML analysis requires more than the ability to just plug data into models. Nonetheless, Orange can be a great tool for learning machine learning without getting bogged down in coding, debugging, and the like. I see a great future for it in teaching ML as the reach of AI advances.

Friday, March 17, 2023

COVID-19, Three Years On

This month marks three years since the start of lockdowns in the United States for the COVID-19 pandemic. As I imagine is true for most people, few upheavals in my life have had as profound and sustained an impact as three years of the pandemic. The good news is that it seems to be receding to endemic status, and while SARS-CoV-2 is still a threat to many, life is mostly getting back to some semblance of normal.

Looking back, the year 2020 began like most others.  In January, I traveled to Singapore, as I usually did in that month each year, to culminate an offering of the 10x10 course. Little did I realize that would be my last international trip for a year and a half. My final trip prior to the pandemic was a short jaunt to Salt Lake City to give Biomedical Informatics Grand Rounds at the University of Utah. By then, the people on flights and at airports were thinning out. I had decided right before leaving for Salt Lake City to cancel a planned trip to the annual HIMSS Conference in Orlando the following week. Within a few days, that conference was cancelled. I remember the flight back from Salt Lake City, wondering to myself how long it would be before I got on an airplane again. I never would have believed it would be about a year.

I still remember the dark, early days of the pandemic. Such a lockdown of society was unlike anything I had ever experienced, and our ignorance of the novel SARS-CoV-2 virus was unsettling. All of a sudden, work profoundly transformed from being based in the office to being at home. Fortunately the presence of video tools such as Zoom and WebEx, along with social media, enabled most of us to stay in touch with friends, family, and colleagues. I initially thought the lockdown would be much like the original SARS epidemic of 2003, lasting a few months. I never would have believed that mostly-remote work would continue to now, three years later.

After three years, the world is returning to some semblance of normal. In a few months, the COVID-19 emergency in the US will officially end. I am comfortable with this decision, although the end of the pandemic phase does not mean that the threat of this virus, or of new ones that may emerge, is not real. Just as we face ongoing risk from major killers such as cancer and heart disease, whose toll we can reduce through public health measures, the risk of COVID-19 will continue, and the end of the public health emergency does not mean "COVID is over."

I prefer to get my information from medical journals and reputable experts. Recent perspective pieces in New England Journal of Medicine and Nature summarize the situation well. I have been a follower of Dr. Eric Topol's tweeting and now Substack during the pandemic, and a recent perspective from him is enlightening. And the Washington Post had a good op-ed asking a number of COVID-19 experts how they are adapting to the current situation.

My view is that now that COVID-19 is receding to endemic status, we can revisit our approaches to the virus. The virus will be with us going forward and will continue to be a major cause of death and other disability. But I am comfortable with moving to an individual risk-assessment approach. I still take precautions. While I no longer routinely mask in public, including indoor settings, I always carry a mask and use it sometimes in crowded indoor settings. I always mask when someone else asks me to do so. I am up to date on vaccines, having had five, including the bivalent booster (as well as one mild natural infection in 2022).

Probably the saddest aspect of the COVID-19 pandemic is the politicization of public health. I get that there are trade-offs in the world between personal liberty and the public good. And I believe an honest re-assessment of what we thought we did right or wrong early in the pandemic is not a bad idea. Clearly we were flying blind in those dark early days. While I do not believe that the early lockdowns and mask requirements were unreasonable at the time, we could have an honest re-assessment now of what the best approach is. By the same token, we can never forget that over a million US lives have been lost and many more disrupted by the virus. Those who pushed unproven treatments and approaches like "let 'er rip" herd immunity strategies likewise need to be called out for their wrong ways.

The threat of COVID-19 is still very real. It remains the third-leading cause of death in the US. It creates a substantial risk for those who are vulnerable, such as those who are immunocompromised or elderly. Data from the CDC and around the world show that while the vaccine does not eradicate the virus or completely prevent its transmission, it does reduce the risk of death and hospitalization, especially for those at most risk.

By the same token, I feel comfortable going mostly maskless, including on airplanes, going to the gym, and at conferences and other public venues. I know that I could again get infected by SARS-CoV-2, but I believe my risk of serious illness is small. I like to think of myself as a COVID "tweener," taking the virus seriously but comfortable returning to relatively normal living. I am ready to return to more aggressive protection should pandemic status recur.

Monday, February 6, 2023

Who Should Program?

Like many who work with computers and other information technology, I enjoy programming. I got my start in programming when I was first exposed to computers in high school in the 1970s, when almost all you could do with computers required programming. Like many at the time, the first programming language I learned was BASIC. I subsequently cycled through many of the common programming languages of the 1970s and 1980s, including PL/1, Pascal, and C (although, for whatever reason, never learned FORTRAN or COBOL). I even dabbled some in assembly language during my medical informatics postdoc fellowship in the late 1980s. Once I started my biomedical informatics academic faculty position in 1990, I began to follow the lead on programming languages from students and other trainees, through Perl, Python, R, and others. 

But as I ascended the ranks of academic leadership, my time for programming diminished. I still have periodic bursts of learning and programming now, but do not have the sustained time available to be productive with them. In some ways, programming is akin to the practice of medicine for clinicians who work in academic informatics - one may be able to keep up with the knowledge of clinical medicine, or even master access to knowledge through modern tools - but still the practice of medicine requires a critical mass of time commitment for one to remain an effective clinician. I learned this in the early 2000s when my informatics work pretty much crowded out my ability to effectively provide patient care, especially trying to practice as a primary care internist.

Interestingly, I sometimes get into discussions with physicians who are training in informatics and desire to commit a great deal of time and effort to programming. Some of them can pull it off, but as I note to many, they are probably too expensive to be programmers (even as salaries for high-end programmers reach the lower strata of physician salaries). Furthermore, physicians have knowledge and perspectives that may be most useful for tasks other than programming, such as conceptual thinking and clinical leadership.

It is actually questionable whether programming should be left to academics, including biomedical researchers. One recent analysis looked at a repository of replication data sets and code published in the Harvard Dataverse Repository between 2010 and 2020 (1). It found that about three-quarters of R programs failed to execute correctly, although there was some improvement when code-cleaning was applied.

Another challenge is that code is sometimes maintained by a small number of people or even just one person, the latter in the case of a particle physicist who was the sole maintainer of a 1980s program and has now retired (2). Some have expressed concern for continued progress in the field. Previous work found similar problems in computational chemistry (3) and functional MRI (4) analyses. And many geneticists have had to deal with idiosyncrasies such as the propensity of Microsoft Excel to change gene names to dates (e.g., MARCH1 to 1-Mar) (5).

By the way, I am in no way advocating that modern-day higher-education students, especially in science fields, not learn computer programming as part of their education. Programming is an important skill and discipline, facilitating critical and logical thinking. As computers play an important role in so many knowledge-based disciplines in modern times, it is critical to de-mystify them and allow them to be used most effectively in all scientific disciplines.

Another important development over the years has been the emergence of tools that allow management and manipulation of data with minimal amounts of programming. Probably the earliest example of this was the spreadsheet. Also of note are statistical software packages that allow one to run a myriad of statistical tests with little or no programming. Of course, in both of these examples, one must still understand their data and appropriate inferences drawn from it. Another more recent example of this is packages such as Orange, which provide a visual programming interface to many data science and machine learning methods. I will have more to say about Orange in a future post.

We can also gain insight from the recent development of ChatGPT. One prominent early use has been asking it to write code for specific functions in different languages. While ChatGPT does produce code that looks like what is being asked for, programming support sites like Stack Overflow have banned postings of such code because, although it looks correct, it often contains subtle errors or other problems (6).

It is clear that there will always be a role for learning and exploring with computer programming. But given the complexity of modern software, and the need for programs to be robust, correct, and secure, those who enjoy programming should be realistic about the commitment required to carry it out at "industrial-strength" levels.

References

1. Trisovic, A., Lau, M.K., Pasquier, T., Crosas, M., 2022. A large-scale study on research code quality and execution. Sci Data 9, 60. https://doi.org/10.1038/s41597-022-01143-6

2. von Hippel, M., 2022. Crucial Computer Program for Particle Physics at Risk of Obsolescence. Quanta Magazine. https://www.quantamagazine.org/crucial-computer-program-for-particle-physics-at-risk-of-obsolescence-20221201/

3. Hirschler, B., 2014. Exclusive: Software issue casts doubt over data used to approve some drugs. Reuters. http://finance.yahoo.com/news/exclusive-software-issue-casts-doubt-over-data-used-121215317--finance.html

4. Eklund, A., Nichols, T.E., Knutsson, H., 2016. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proc Natl Acad Sci U S A 113, 7900–7905. https://doi.org/10.1073/pnas.1602413113

5. Vincent, J., 2020. Scientists rename human genes to stop Microsoft Excel from misreading them as dates [WWW Document]. The Verge. https://www.theverge.com/2020/8/6/21355674/human-genes-rename-microsoft-excel-misreading-dates

6. Vincent, J., 2022. AI-generated answers temporarily banned on coding Q&A site Stack Overflow. The Verge. https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers

Wednesday, January 25, 2023

ChatGPT Confabulates and is Unprofessional

Since my own explorations with ChatGPT in informatics knowledge last month, others using the system have racked up some impressive results. Most pertinent to biomedical informatics are ChatGPT's passing grades on medical student licensing exams. It has also performed well on Wharton MBA and multistate bar exams.

Does this mean that machine intelligence has equaled human professionals? My subsequent encounters with ChatGPT lead me to maintain my skepticism. Not that ChatGPT is not a tremendous feat, but it is not (yet, anyway) capable of performing as a physician, lawyer, or MBA.

This conclusion is evidenced by another recent interaction I had with ChatGPT, the transcript of which is at the bottom of this post. This interaction mirrors characteristics that many others note. One property that many find is ChatGPT's tendency to confabulate. As seen in my latest interaction, it states things that are simply false. Not that humans do not do this, but no one would rely on a physician, lawyer, or MBA known to flagrantly lie. As such, ChatGPT fails at one core task of being a competent professional in a given field, where there is an ethical responsibility to recognize and state when you do not know something. The best physician is one who knows the limits of his or her knowledge and skills. (Of course, ChatGPT lacks many other capabilities of a physician, from technical skill to empathy.)

This leads me to a thought experiment, perhaps somewhat biased by the fact that I am an information retrieval (IR) researcher. If you were confronted with a highly challenging intellectual task, which would you use first, Google (or another search system, maybe PubMed for medicine) or ChatGPT? In my mind, an IR system is much more appealing, given what IR systems provide by definition, which is pointers to information that may answer questions. Google is far from a perfect search engine, but when addressing a complicated intellectual task, I would much rather take my chances with an IR system output with primary sources listed and accessible than the narrative without citations that comes out of ChatGPT.

I have no doubt that systems like ChatGPT will continue to advance, and that human knowledge workers will find ways to use them beneficially in their work. But I cannot imagine there not being a human in the loop somewhere, especially in medicine when you might need the skills of a surgeon, not to mention a physician with empathy and grace. I am also reminded of the quote from my colleague, radiologist Curtis Langlotz, probably speaking for all of medicine, who said, "Will AI replace radiologists? is the wrong question. The right answer is: Radiologists who use AI will replace radiologists who don’t."

Here is the transcript from my most recent interaction with ChatGPT, with my annotations in italics. I would also note, perhaps in a blow to my own ego, that ChatGPT's answer does not include any of my own IR writings, such as the recent edition of my biomedical IR textbook, which arguably are important to this field.

I entered into ChatGPT: What are some good readings for biomedical information retrieval?

There are many resources available for learning about biomedical information retrieval, including books, articles, and online courses. Here are a few suggestions:

"Introduction to Biomedical Information Retrieval" by C. J. van Rijsbergen is a classic textbook that provides a comprehensive introduction to the field.

Keith van Rijsbergen is a retired computer science professor and IR researcher who has written several books but not one called "Introduction to Biomedical Information Retrieval."

"Information Retrieval in Biomedicine" by Peter Willett is another comprehensive textbook that covers the basics of biomedical information retrieval as well as advanced topics.

Peter Willett is a retired computer science professor and IR researcher who has written several books but not one called "Information Retrieval in Biomedicine."

"Medical Information Retrieval" by J. V. McInerney is a book that provides an overview of the field and covers various topics such as search engines, natural language processing, and data mining.

I am not able to find any book titled "Medical Information Retrieval" or anyone in the IR or informatics field named J. V. McInerney.

"Biomedical Information Retrieval" by J. S. Ogasawara and M. R. Al-Rawi is a book that covers various aspects of biomedical information retrieval, including text mining, ontologies, and evaluation.

I am not able to find any book titled "Biomedical Information Retrieval" or anyone in the IR or informatics fields named J. S. Ogasawara or M. R. Al-Rawi.

"Biomedical Text Mining" by Haifeng Li and Linyun Fu is a book that covers the use of text mining techniques for extracting and analyzing information from biomedical literature.

There is a book named "Biomedical Text Mining" but its authors are not Haifeng Li or Linyun Fu, neither of whom works in IR or informatics.

The Journal of Biomedical Informatics is a peer-reviewed journal that publishes research on various aspects of biomedical information retrieval.

This is true!

The International Conference on Medical Information Retrieval is an annual conference that brings together researchers and practitioners in the field to present and discuss their work.

I am not aware of any such conference by this name.

I hope these suggestions are helpful! Let me know if you have any other questions.

I will!

Saturday, December 31, 2022

Annual Reflections at the End of 2022

I am keeping my tradition in this blog this year of ending the calendar year with a retrospective on the past year and some musings looking forward.

Clearly our lives were still dominated in 2022 by the COVID-19 pandemic. While a fair amount of living has returned to relative normal, there are some changes that will likely never fully reverse. Certainly a major one for myself and my immediate co-workers is the likely continuation of mostly remote work.

Looking back, my last annual year-end post from the "before times" was in 2019. It was an ordinary post, discussing the history of the blog starting from the times of the passage of the HITECH Act, along with the continued growth of my teaching and use of Twitter. I noted over 2600 people having completed the 10x10 course and my surpassing 2600 followers on Twitter. Now at the end of 2022, 10x10 has exceeded 3100 people completing the course and my Twitter followers number over 3500. Of course, with the recent acquisition of Twitter by Elon Musk, the future of this platform is uncertain. That is unfortunate, since I find activities like #ScienceTwitter and #MedTwitter very helpful for keeping up with new advances, some of which informs my teaching.

The day of that 2019 year-end post was the day that the first case of SARS-CoV-2 infection was documented in China. Within a month cases started appearing in the US, and by March, the general US lockdown began, with the transition to remote work. While we have made great progress in treatments and vaccinations for COVID-19, my work is still mostly remote. In reality, this is an extension of pre-pandemic times, when I even noted then that my work was mostly asynchronous. And it is not hard to see how it will likely continue mostly that way, as there are plenty of advantages to my office commute being the walk down the stairs to my home office in my basement. I also enjoy being able to do my daily exercise (running or gym) in the late morning, followed by a healthy lunch, and then some quiet meditation after lunch before getting back to work. In the early days of the pandemic, the Internet was highly overtaxed in my neighborhood of many professionals working from home, but the infrastructure has been built up such that my current home Internet is well up to the task.

Back to this year, it was a good year for my professional and personal life. I made the decision to hand off the leadership of the department I have served as its one and only Chair. The Dean of the OHSU School of Medicine will make an announcement of the appointment of a new Interim Chair next week. As noted in my kudos posting from last week, my research and teaching work was highly productive in 2022, and the activities and grant funding will keep it going for at least several more years. I aim to keep working focused on research, teaching, mentoring, and writing, and have no plans to retire any time soon. I will also maintain this blog, which with this posting now has 375 postings.

As for the future, COVID-19 will continue to impact all of what we do. But there is less fear to have now, as we understand the virus and the disease it causes much better, and we have means to blunt the worst of its effects. Many quip that I have an optimistic view of the world and future, and I am certain this will continue in 2023 including and beyond COVID-19.

Monday, December 19, 2022

Kudos for the Informatics Professor - 2022 Update

The year 2022 was a highly productive one for the Informatics Professor, and I am pleased to provide my periodic update of my recent accomplishments. This year had another big event in my decision to step down as the one and only Chair of the Oregon Health & Science University (OHSU) Department of Medical Informatics & Clinical Epidemiology. More will be posted about that when my successor is named. Meanwhile, here are my accomplishments for 2022.

I received two prominent awards this year. One was the William Stead Award for Thought Leadership in Informatics from the American Medical Informatics Association (AMIA), my main professional association. This award "acknowledges people who have influenced our thinking about informatics, especially improving health and health care in ways that are visionary and transformative." This year I was also elected to the second class of the ACM SIGIR Academy, which was established to "honor and recognize individuals who have made significant, cumulative contributions to the development of the field of information retrieval (IR)." I am thrilled to be honored by the two fields at whose intersection I work.

This year I completed my term as the Second President of the International Academy of Health Sciences Informatics (IAHSI). I will continue for another two years as Past President.

In 2022, I was awarded several new grants, which will help sustain the research and teaching that I plan to continue as I reduce my administrative and leadership roles. Probably the most important of these was the OHSU Training Grant in Biomedical Informatics and Data Science, which was renewed for another five-year cycle through 2027. This training grant from the National Library of Medicine (NLM) of the National Institutes of Health (NIH) funds predoctoral (PhD) and postdoctoral trainees in the OHSU Biomedical Informatics Graduate Program. The grant has been funded continuously at OHSU since 1992, and this renewal marks the seventh consecutive five-year cycle of funding. Its first 30 years make it the second-longest-running training grant at OHSU, and its total funding over that time of $21,912,538 makes it the largest training grant ever at OHSU. I myself am a product of this training grant program, having completed my postdoctoral fellowship at Harvard University from 1987-1990.

Another grant from the NLM for which I am PI is an R25 grant to fund the OHSU Biomedical Informatics and Data Science College Undergraduate Summer Internship Program for five years. The grant is among 12 awards issued by the NLM to prepare historically underrepresented students for biomedical informatics and data science graduate studies and research careers. The goal of our program is to provide students with a rich, full-time experience that exposes them to graduate study and careers in biomedical informatics and data science. In addition, the program aims to motivate students to pursue doctoral training in biomedical informatics and data science that leads to research careers in academia, government, healthcare, and industry. The funding from the R25 grant will be used to support summer experiences for students with backgrounds that have been historically underrepresented in biomedical informatics and data science.

A final new grant for 2022 was from the new NIH Bridge2AI Initiative. I am among a group of faculty from OHSU and Washington University who will be serving as the Skills and Workforce Development Module for one of the four funded data-acquisition projects. The project we are part of will focus on creating data sets that aim to use voice as a biomarker of health.

I also had a good year of publications in 2022, not the least of which was the release of the 8th edition of Health Informatics: Practical Guide, in which I serve as Editor and have authored about a third of the chapters. My journal publications for 2022 included the following:

In 2022, I was also a guest on a couple of podcasts. One was to provide an introduction to an episode of the For Your Informatics podcast on the gender pay gap among physicians, including those in informatics. The other was to be interviewed for the IAHSI podcast Exploring Healthcare Interoperability about my work in teaching about healthcare data standards and interoperability.

I also gave a number of invited talks in 2022:

  • Biomedical and Health Informatics: Impact, Challenges, and Opportunities, Nevada Institute of Personalized Medicine, University of Nevada Las Vegas (virtual), January 27, 2022 (slides and references)
  • Competencies and Curricula Across the Spectrum of Learners for Health Informatics, Keynote Talk, IEEE International Conference on Healthcare Informatics, First International Workshop on Health Informatics Education, Rochester, MN, June 11, 2022 (slides and references)
  • Translational Artificial Intelligence (AI): The Need to Translate from Basic Science to Clinical Value, University of Alabama Birmingham Informatics Institute (virtual), September 2, 2022 (slides and references and video)
  • Past, Present, and Future: A Discussion About DMICE Going Forward, OHSU Department of Medical Informatics & Clinical Epidemiology, October 6, 2022 (slides and video)
  • Applying Information Retrieval to the Electronic Health Record for Cohort Discovery and Rare Disease Detection, Brown University Center for Biomedical Informatics, Providence, RI, October 13, 2022 (slides and references)
  • Competencies and Curricula Across the Spectrum of Learners for Biomedical and Health Informatics, International Symposium on Achievements, Milestones and Challenges in Biomedical and Health Informatics, Athens, Greece, October 29, 2022 (slides)

In 2022 I also delivered my annual lecture to first-year OHSU medical students, Information is Different Now That You're a Doctor, on September 2, 2022. I also gave this talk to first-year medical students at Nova Southeastern University (virtual) on June 16, 2022.

I additionally attended my 37th consecutive AMIA Annual Symposium, where I participated in three activities:

  • Panel: Beyond Wrangling and Modeling: Data Science and Machine Learning Competencies and Curricula for The Rest of Us
  • Job Talk Panel for the AMIA Clinical Informatics Fellows (ACIF)
  • Debate: Extension of the Practice Pathway for ABMS Clinical Informatics Board Certification for Physicians in the United States

This year I also served as Co-Chair of the Scientific Program Committee of the conference hosted by Brown University, The Patient and The Practitioner in the Age of Technology: Promoting Healing Relationships.

In 2022, I also maintained my various scientific citation rankings. In the list of Best Computer Science Scientists, I now rank 694th in the US and 1183rd in the world, based on a discipline-specific h-index that the site calculates. I also maintain a presence in the latest iteration of a database of the top 100,000 cited scientists in the world for 22 scientific fields and 176 subfields, where I rank 22,553rd based on a composite formula of citation and publishing variables. In the medical informatics subfield, I rank 13th, and of the 290 individuals affiliated with OHSU, I rank 50th. I also maintain my presence on ranking lists based on the Google Scholar h-index for biomedical informatics and information retrieval.
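Since several of these rankings hinge on the h-index, here is a minimal sketch of how the standard h-index is computed (the function name and the example citation counts are my own, purely for illustration):

```python
def h_index(citations):
    """Compute the h-index: the largest h such that the author has
    at least h papers, each cited at least h times."""
    cites = sorted(citations, reverse=True)
    h = 0
    # Walk papers from most- to least-cited; stop when the i-th
    # paper no longer has at least i citations.
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example: of these 5 papers, 3 have at least 3 citations each,
# but there are not 4 papers with at least 4 citations.
print(h_index([10, 8, 3, 1, 0]))  # prints 3
```

Discipline-specific variants, such as the one used by the computer science ranking site, restrict the citation counts to papers within the field before applying the same computation.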

Finally, this blog is mentioned in various lists of best blogs, including the following:

Saturday, December 17, 2022

ChatGPT Postscript

It has been interesting to read all of the uses that colleagues of mine and others have employed to put ChatGPT to the test, in addition to my tests of learning objectives, a biography, and multiple-choice questions. In one email thread, I joked that perhaps ChatGPT is a Rorschach Test of what interests people concerning the use of artificial intelligence.

I also was pointed to an interesting site that bills itself as a ChatGPT (actually trained on an earlier version of the OpenAI model, GPT-2) Output Detector Demo. I pasted in my biography from my first post, and the system declared the text to have a 99.98% chance of being "fake," i.e., generated by the model. When I pasted in the biographical paragraph from my own Web page, it declared the text to have a 99.97% chance of being real.

Another interesting reflection is to compare ChatGPT with information retrieval (IR, aka search). Perhaps I am biased as an academic, or someone greatly interested in IR since it is my primary focus of research, but usually when I look for information, I not only want to know the information, but also where it comes from and how trustworthy it is. A big limitation for me of ChatGPT is that it cites no references to back up what it says.

This gets to another academic concern about ChatGPT, which is how it will impact assessment of learning. Although ChatGPT seems to work best for relatively short passages of text that do not require references, there are fortunately many other ways to assess learning.

There have also been some good overviews in the news media about ChatGPT, including an interview of ChatGPT itself. There is also a nice description from the New York Times.

Tuesday, December 13, 2022

ChatGPT, Round Two

After circulating my blog post earlier this week on my initial excursions with ChatGPT, I received a great deal of interesting follow-up. One colleague lamented that we may no longer be able to assess the knowledge of students, especially when teaching in online classes, because of their ability to easily generate answers to questions using systems like ChatGPT, especially if they could augment the text by Googling a few references to add to their answers.

I am not convinced of this, at least not with the use of ChatGPT, whose answers I noted were superficial, not completely correct, and not backed up with citations. Furthermore, asking for short answers to questions is hardly the only way to assess knowledge.

So to put ChatGPT to the next test, I pasted in, one by one, the 10 multiple-choice questions I give for the same unit of my introductory informatics course whose learning objectives I used in my first test.
 
While I have to give ChatGPT some credit for figuring out these were multiple-choice questions, it performed pretty poorly compared to most students in the course. It answered only 5 of the 10 questions correctly, and for two of those it got wrong, it did not even attempt an answer, giving the excuse that it could not browse the Internet (despite being trained on it!).
 
Some of the incorrect answers would be embarrassing even for a beginning informatics student not to know, such as which US government agency funds informatics training, that "meaningful use" was not part of the Affordable Care Act, and that a machine learning algorithm not involving clinicians does not adhere to Friedman's Fundamental Theorem.
 
The transcript of my interactions is pasted in below. The text entered into ChatGPT did not include the question numbers nor the correct answers indicated by asterisks. The paragraph following each question is how ChatGPT replied. My conclusion is that, for now, those of us who teach informatics in virtual settings do not (yet) have to worry about ChatGPT being used to cheat in our courses.

ChatGPT Transcript - December 10, 2022

1. Which part of biomedical and health informatics is most likely to address the critical informatics issues for the practice of a family physician?
a. Bioinformatics
*b. Clinical informatics
c. Clinical research informatics
d. Imaging informatics
e. Public health informatics

Clinical informatics is most likely to address the critical informatics issues for the practice of a family physician. This subfield of biomedical and health informatics focuses on the use of information technology to support the delivery of healthcare and improve patient outcomes. Clinical informatics encompasses a wide range of topics, including electronic health records, health information exchange, clinical decision support, and population health management. It is concerned with the design, development, and implementation of systems and tools to support the clinical work of healthcare professionals, such as family physicians.

2. Before the era of widespread computer use, the typical hospital physician spent what proportion of his or her time in activities away from the patient during office practice?
*a. 23-39%
b. 50%
c. 61-76%
d. 100%

I'm sorry, but I am not able to browse the internet to answer your question as I am a large language model trained by OpenAI. My knowledge is based solely on the text that I have been trained on, which has a cutoff date of 2021. I do not have any additional information beyond what I have been trained on.

3. Which entity of the US government funds research and training in biomedical and health informatics?
a. Agency for Healthcare Research & Quality
b. Center for Medicare and Medicaid Services
c. National Committee for Vital & Health Statistics
*d. National Library of Medicine

The National Institutes of Health (NIH) is the primary entity of the US government that funds research and training in biomedical and health informatics. The NIH is a federal agency that is part of the Department of Health and Human Services. It is the largest biomedical research agency in the world and provides funding for a wide range of research and training programs in the field of biomedical and health informatics. Some examples of NIH-funded programs in this area include the National Library of Medicine's Training and Education in Biomedical Informatics program and the Clinical and Translational Science Awards program, which support research and training in clinical informatics.

4. Before the emergence of the Affordable Care Act, a number of proposed healthcare reform plans for the US would provide for the establishment of a small number of highly regulated private insurance companies in a given region or state. This type of plan would be described by JR Reid as:
a. Beveridge
*b. Bismarck
c. National health insurance
d. Out-of-pocket

The type of healthcare reform plan described in the question, which involves the establishment of a small number of highly regulated private insurance companies in a given region or state, would be described by JR Reid as a Bismarck-type plan. Bismarck-type plans are named after Otto von Bismarck, the 19th century Prussian statesman who introduced the first national health insurance program in the world. Bismarck-type plans are characterized by a high degree of government control over the healthcare system, with the state playing a central role in regulating the provision of healthcare services and financing. In these plans, private insurance companies are typically required to operate within a regulatory framework set by the government, and may be subject to price controls and other forms of regulation. This type of plan is common in many European countries.

5. What proportion of US healthcare spending is devoted to prescription drugs?
*a. 10%
b. 20%
c. 33%
d. 50%

I'm sorry, but I am not able to browse the internet to answer your question as I am a large language model trained by OpenAI. My knowledge is based solely on the text that I have been trained on, which has a cutoff date of 2021. I do not have any additional information beyond what I have been trained on.

6. Compared to most other wealthy countries, the US has which of the following?
a. More hospital days
*b. More money spent
c. More nurses
d. More physicians

Compared to most other wealthy countries, the US has more money spent on healthcare. The US spends more on healthcare as a percentage of its gross domestic product (GDP) than any other country in the world. According to data from the World Health Organization, the US spent 17.2% of its GDP on healthcare in 2018, compared to an average of 9.6% for other high-income countries. This high level of spending is due in part to the relatively high cost of healthcare services in the US, as well as the large number of uninsured and underinsured individuals in the country.

7. Which of the following is not part of the original Affordable Care Act (ACA)?
*a. Incentives for meaningful use of electronic health records
b. Mandate that individuals buy health insurance
c. Outlawing of insurance denial due to pre-existing medical conditions
d. Programs to establish Accountable Care Organizations (ACOs)

Programs to establish Accountable Care Organizations (ACOs) are not part of the original Affordable Care Act (ACA). ACOs are groups of healthcare providers who come together to coordinate care for a specific group of patients, with the goal of improving the quality of care and reducing costs. ACOs were introduced as part of the Medicare program under the Patient Protection and Affordable Care Act of 2010, but were not part of the original ACA legislation. The ACA included a number of other provisions, such as incentives for the meaningful use of electronic health records, a mandate that individuals buy health insurance, and the outlawing of insurance denial due to pre-existing medical conditions.

8. About 50% of US healthcare spending is accounted for by which proportion of the population?
a. 1%
*b. 5%
c. 25%
d. 50%

Approximately 50% of US healthcare spending is accounted for by 5% of the population. This phenomenon, known as the "5/50 rule," refers to the fact that a small proportion of the population accounts for a disproportionate amount of healthcare spending. The 5/50 rule is often used to illustrate the unequal distribution of healthcare costs in the US, with a small group of individuals with chronic conditions or complex healthcare needs accounting for a significant portion of total spending. This pattern is driven in part by the high cost of treating certain conditions, such as cancer and heart disease, as well as the high cost of specialty drugs and other medical technologies.

9. Which of the following applications does not adhere to Friedman’s fundamental theorem of informatics?
*a. A machine learning algorithm that processes radiology images and determines the diagnoses more accurately than a physician
b. A telemedicine system that allows remote interaction via videoconference
c. A reminder system to administer cancer screening exams on an appropriate schedule
d. An information retrieval (search) system that helps the user select terms to improve search results

A telemedicine system that allows remote interaction via videoconference does not adhere to Friedman's fundamental theorem of informatics. The fundamental theorem of informatics, proposed by Edward H. Friedman, states that "the value of medical information is inversely proportional to the time and place of its creation and directly proportional to the time and place of its use." This theorem suggests that the value of medical information decreases as it becomes older and less relevant, and increases when it is used at the time and place where it is needed. A telemedicine system that allows remote interaction via videoconference does not necessarily adhere to this theorem, as it allows healthcare providers to access and use medical information from different times and places. An example of an application that adheres to the theorem would be a reminder system that administers cancer screening exams on an appropriate schedule, as it uses medical information at the time and place where it is needed to improve patient care.

10. A patient undergoing an appendectomy at a hospital near his suburban home is most likely receiving what type of healthcare?
a. Primary care
*b. Secondary care
c. Tertiary care
d. Quaternary care

A patient undergoing an appendectomy at a hospital near his suburban home is most likely receiving secondary care. Secondary care refers to healthcare services that are provided by specialized medical professionals, such as surgeons, specialists, and other healthcare providers. These services are typically provided in hospitals or other specialized medical facilities, and often require referral from a primary care provider. An appendectomy, which is a surgical procedure to remove the appendix, is an example of a medical procedure that is typically performed in a hospital or other specialized medical facility, and would be classified as secondary care. Primary care, on the other hand, refers to the first level of contact between patients and the healthcare system, and typically involves services such as preventive care, routine check-ups, and treatment of minor illnesses. Tertiary care and quaternary care are higher levels of specialized care that are provided for patients with complex or chronic conditions.