Sunday, December 31, 2023

Annual Reflections at the End of 2023

In keeping with my tradition for this blog, I am ending this calendar year with my retrospective on the past year and some musings looking forward. The number of postings I make to this blog continues to decline each year. It is not that I do not have a lot to say, but rather that I do not seem to have as much time to say it.

One thought has been to move to a different medium, such as a podcast. While I do listen to a fair number of podcasts, I myself still prefer my words to be written. If nothing else, text seems more accessible. Another option would be to switch to something like a Substack newsletter, and while I subscribe to several of those, I have no interest in monetizing this blog, so Blogspot still seems like the easiest way to go.

In any case, the year 2023 was a transitional year for me. After serving as inaugural leader of both the Oregon Health & Science University (OHSU) Department of Medical Informatics & Clinical Epidemiology (DMICE) and the OHSU Biomedical Informatics Graduate Program for a couple of decades, I decided to step back from both leadership roles. With both entities in good shape, but needing some reinvigoration, not to mention institutional reinvestment, the time seemed right. There is good interim leadership, and my hope is that OHSU will choose to recruit strong permanent leadership in the year ahead.

Despite stepping back from leadership, I am still busy and gratified with my work. I have had more time to devote to research and teaching. My research program in information retrieval (IR, also known as search) continues to be productive and well-funded, with several grants that will last through at least the next few years. Likewise, my teaching activity, mainly centered around my introductory course in biomedical and health informatics that is offered as a graduate course, a continuing education course (the famous "10x10"), and a medical student course, also keeps me busy.

A former Dean-then-President of OHSU reminded me years ago that when you step above the department chair role in academic medicine, you mostly leave behind your academic work. It became clear to me this year that my passion and enthusiasm lie more in the intellectual work of informatics and less in building new programs. I did plenty in the latter over the years but now hope to focus the rest of my career on the intellectual work of the field.

And there is much new intellectual work in informatics and related areas to keep us busy. Another big transition for 2023 was the emergence of generative artificial intelligence (AI). Although generative AI had been around for a half-decade or so, a real turning point was the November 30, 2022 release of ChatGPT. There is clearly as much hype as there is reality, but there is no question its impact in the long run will be profound.

Generative AI impacts me in many ways. Clearly in education, it has caused teachers at all levels to reassess assignments and assessments of students. Almost everything in the assignments that I give students can be aided with ChatGPT. I will be writing more about this in the months ahead, but I agree with those who advocate we incorporate generative AI and not fight it or pretend it does not exist. It will likely require a combination of policy and students adhering to honor codes about how they do their work.

My research in the area of IR is also impacted by generative AI. With the 2020 publication of the fourth edition of my textbook in the field, I was beginning to wonder how much more research IR needed. Clearly search is ubiquitous and a vital part of any intellectual work. While generative AI has large impacts on how we search for information, especially answers to questions, I note in a paper currently in preprint form (and hopefully accepted for publication soon) that there is still a role for seeking out sources of knowledge and knowing the authority of those who produce those sources.

One subject I wrote a great deal about in recent years, but which has faded somewhat from the forefront, is COVID-19. This virus is still with us and is likely to continue to be so for some time to come. And the other impacts caused by the pandemic, namely the continuation of mostly remote work, may never revert to the way things were before the pandemic. I am ambivalent about work that is mostly remote, even though I have been working that way for many years, especially with my travel. But I do appreciate the convenience of my morning commute being a walk down the stairs to my basement home office, although I also miss being around other people when working. Hopefully we will find a better balance in the future.

Heading into 2024, I do look forward to continuing this blog, and hopefully writing more. I also look forward to advances in the field of biomedical and health informatics and continuing to be a part of them.

Friday, September 15, 2023

Physician and Medical Student Competence in AI Must Include Broader Competence in Clinical Informatics

A number of interesting papers on physician competencies in artificial intelligence (AI) have been published recently, and there is no question that all 21st century healthcare professions must have a thorough understanding of the benefits and limitations of AI that they are likely to use in their clinical work.(1-5)

One of my concerns, however, is that the focus on AI, rather than the larger issues of clinical informatics, risks undermining not only a full understanding of the impact of AI, but also most of the other informatics-related knowledge and skills that are important to clinical practice. These include skills in using the electronic health record (EHR); accessing clinical knowledge using search systems; being facile with clinical decision support and health information exchange; protecting privacy and security; engaging patients, their data, and their devices; and applying data in tasks such as population health, public health, and clinical and translational research. At a minimum, these competencies provide a foundation for applying data, information, and knowledge to improve human health and healthcare delivery, but they also inform the application of AI in biomedicine and health.

About a decade ago, some colleagues and I published a paper outlining what we believed were the required competencies in clinical informatics in 21st century practice.(6) These competencies were then used to develop a curriculum in clinical informatics for our medical students.(7) While AI is now a prominent part of biomedicine and health, and a good deal more in society, the initial competencies have, in my opinion, stood the test of time.

There were originally 13 competencies in the list. In 2020, it became apparent that we needed to add an additional competency in machine learning, and in a textbook chapter (8) and blog post, we added that as a 14th competency. Now, of course, it is probably better to refer to AI explicitly in that competency. As such, I present a new version of the list of competencies in clinical informatics for medical education, which of course applies to all health professions students and practitioners. (Textual version in the Appendix below.)

Competencies

References

  1. Ötleş, E., James, C.A., Lomis, K.D., Woolliscroft, J.O., 2022. Teaching artificial intelligence as a fundamental toolset of medicine. Cell Rep Med 3, 100824.
  2. James, C.A., Wachter, R.M., Woolliscroft, J.O., 2022. Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence. JAMA 327, 1333–1334.
  3. Russell, R.G., Lovett Novak, L., Patel, M., Garvey, K.V., Craig, K.J.T., Jackson, G.P., Moore, D., Miller, B.M., 2023. Competencies for the Use of Artificial Intelligence-Based Tools by Health Care Professionals. Acad Med 98, 348–356.
  4. Liaw, W., Kueper, J.K., Lin, S., Bazemore, A., Kakadiaris, I., 2022. Competencies for the Use of Artificial Intelligence in Primary Care. Ann Fam Med 20, 559–563.
  5. Goodman, K.E., Rodman, A.M., Morgan, D.J., 2023. Preparing Physicians for the Clinical Algorithm Era. N Engl J Med.
  6. Hersh, W.R., Gorman, P.N., Biagioli, F.E., Mohan, V., Gold, J.A., Mejicano, G.C., 2014. Beyond information retrieval and electronic health record use: competencies in clinical informatics for medical education. Adv Med Educ Pract 5, 205–212.
  7. Hersh, W., Biagioli, F., Scholl, G., Gold, J., Mohan, V., Kassakian, S., Kerns, S., Gorman, P., 2017. From Competencies to Competence: Model, Approach, and Lessons Learned from Implementing a Clinical Informatics Curriculum for Medical Students, in: Health Professionals’ Education in the Age of Clinical Information Systems, Mobile Computing and Social Networks. Elsevier, pp. 269–287.
  8. Hersh, W., Ehrenfeld, J., 2020. Clinical Informatics, in: Health Systems Science, 2nd Edition. pp. 156–170.
Appendix - Competencies in Clinical Informatics for Health Professions Education (textual form)
  1. Find, search, and apply knowledge-based information to patient care and other clinical tasks
  2. Effectively read from, and write to, the electronic health record (EHR) for patient care and other clinical activities
  3. Use and guide implementation of clinical decision support (CDS)
  4. Provide care using population health management approaches
  5. Protect patient privacy and security
  6. Use information technology to improve patient safety
  7. Engage in quality measurement selection and improvement
  8. Use health information exchange (HIE) to identify and access patient information across clinical settings
  9. Engage patients to improve their health and care delivery through personal health records and patient portals
  10. Maintain professionalism in use of information technology tools, including social media
  11. Provide clinical care via telemedicine and refer patients as indicated
  12. Apply personalized/precision medicine
  13. Participate in practice-based clinical and translational research
  14. Use and critique artificial intelligence (AI) applications in clinical care

Wednesday, September 6, 2023

More Evidence That We Need More Evidence for AI Interventions

In a previous post, I related the case of an excellent model that predicted hospital readmission yet, when used as part of a real-world effort to reduce readmissions, was not able to lower the rate.

Some new studies again highlight this scenario of excellent models and systems that, when studied, do not show real-world benefit. A couple of papers in Annals of Internal Medicine find a similar scenario for one of the earliest uses of artificial intelligence (AI) to demonstrate success, namely computer-aided detection (CADe) of polyps during colonoscopy. A systematic review of previous clinical trials found an increase in the detection of pre-cancerous adenomas but not of advanced adenomas, along with higher rates of unnecessary removal of non-neoplastic polyps.[1]

The journal also featured a new randomized controlled trial (RCT) that showed no significant difference in advanced colorectal neoplasia detection rate (34.8% with intervention vs. 34.6% for controls) or mean number of advanced colorectal neoplasias detected per colonoscopy.[2]

An accompanying editorial notes the challenges of implementing AI in the real world, which may impact RCT results, but argues that we must still build an evidence base to support its use.[3]

On a different clinical topic, predicting future trajectories of estimated glomerular filtration rate (eGFR) in adults with type 2 diabetes and chronic kidney disease, a new study in JAMA Network Open found that a new model outperforms previous models in estimating risk earlier in the disease course.[4] However, an accompanying editorial notes that while this model provides more accuracy, the benefit to those in this phase of the disease might be outweighed by "inappropriate avoidance of intravenous contrast, patient anxiety, and unnecessary testing with its associated costs."[5] What is really needed, the author notes, are clinical trials to validate use of the model.

Research into these clinical applications of AI is important, and we must carry out the "basic science" work behind them. But then we must move on to the next step of clinical application and studies that evaluate such systems in clinical trials or with other appropriate evaluation methods.

References

1. Hassan, C., Spadaccini, M., Mori, Y., Foroutan, F., Facciorusso, A., Gkolfakis, P., Tziatzios, G., Triantafyllou, K., Antonelli, G., Khalaf, K., Rizkala, T., Vandvik, P.O., Fugazza, A., Rondonotti, E., Glissen-Brown, J.R., Kamba, S., Maida, M., Correale, L., Bhandari, P., Jover, R., Sharma, P., Rex, D.K., Repici, A., 2023. Real-Time Computer-Aided Detection of Colorectal Neoplasia During Colonoscopy : A Systematic Review and Meta-analysis. Ann Intern Med. https://doi.org/10.7326/M22-3678

2. Mangas-Sanjuan, C., de-Castro, L., Cubiella, J., Díez-Redondo, P., Suárez, A., Pellisé, M., Fernández, N., Zarraquiños, S., Núñez-Rodríguez, H., Álvarez-García, V., Ortiz, O., Sala-Miquel, N., Zapater, P., Jover, R., CADILLAC study investigators*, 2023. Role of Artificial Intelligence in Colonoscopy Detection of Advanced Neoplasias : A Randomized Trial. Ann Intern Med. https://doi.org/10.7326/M22-2619

3. Shung, D.L., 2023. From Tool to Team Member: A Second Set of Eyes for Polyp Detection. Ann Intern Med. https://doi.org/10.7326/M23-2022

4. Gregorich, M., Kammer, M., Heinzel, A., Böger, C., Eckardt, K.-U., Heerspink, H.L., Jung, B., Mayer, G., Meiselbach, H., Schmid, M., Schultheiss, U.T., Heinze, G., Oberbauer, R., BEAt-DKD Consortium, 2023. Development and Validation of a Prediction Model for Future Estimated Glomerular Filtration Rate in People With Type 2 Diabetes and Chronic Kidney Disease. JAMA Netw Open 6, e231870. https://doi.org/10.1001/jamanetworkopen.2023.1870

5. Sanghavi, S.F., 2023. Modeling Future Estimated Glomerular Filtration Rate in Patients With Diabetes-Are There Risks to Early Risk Stratification? JAMA Netw Open 6, e238652. https://doi.org/10.1001/jamanetworkopen.2023.8652

Monday, July 17, 2023

Lessons About Clinical AI From a Randomized Controlled Trial on Hospital Readmissions

Our news and science feeds are so filled these days with everything artificial intelligence (AI), from large language models to their impacts on society, that we may miss important studies on other informatics topics, some of which have lessons for AI. This is the case for a recent randomized controlled trial (RCT) on a hospital readmissions initiative (Donzé, 2023) and an accompanying editorial putting it in larger perspective.(Wachter, 2023)

Some may remember about a decade ago, when "data analytics" was all the rage, and health insurance payors were noting with alarm the growing rate of hospital readmissions. The cost and frequency of readmissions were highlighted in a study finding that as many as 20% of hospital admissions were readmissions within a month of a previous discharge.(Jencks, 2009) Before this, several hospital discharge planning programs had been studied and found to reduce readmissions.(Naylor, 1994; Coleman, 2006; Jack, 2009) This situation led the US Centers for Medicare and Medicaid Services (CMS) to implement the Hospital Readmissions Reduction Program (HRRP) as part of the Affordable Care Act. Starting in 2012, the HRRP required public reporting of readmission rates for three common diseases: myocardial infarction, heart failure, and pneumonia, with hospitals having unusually high rates of readmissions being penalized.

Around the time that the HRRP was implemented, the Health Information Technology for Economic and Clinical Health (HITECH) Act was incentivizing the adoption of the electronic health record (EHR). This provided unprecedented new sources of data, and every predictive analyst set out to find models that used EHR data to predict the patients most likely to be readmitted, with the goal of identifying those who could presumably be followed more closely and have readmissions averted. Numerous studies were published using models based on EHR data to predict patients at risk for readmission.(Amarasingham, 2010; Donzé, 2013; Gildersleeve, 2013; Shadmi, 2015)

Despite the plethora of predictive models, few interventions have actually been undertaken that demonstrate improved outcomes of care. One study found that the use of a readmission risk tool intervention reduced the risk of readmission for patients with congestive heart failure but not for those with acute myocardial infarction or pneumonia.(Amarasingham, 2013) Another observational study found that readmissions did decline with the implementation of the new rule, initially and for the targeted diseases, but less so for other conditions.(Zuckerman, 2016) Others have noted that the program has had marginal benefit (Ody, 2019) and may redirect resources that might be devoted to other quality improvement efforts.(Cram, 2022)

Earlier this year, an RCT was published that assessed a multimodal care intervention aimed at reducing readmissions (Donzé, 2023). Carried out in four medium-to-large teaching hospitals in Switzerland, the study implemented the best-known predictive model for risk of readmission yet found no benefit for an intervention that included it. As noted in the accompanying editorial, just because we can predict something does not mean we can necessarily do something about it.(Wachter, 2023)

Why is this RCT pertinent to AI? Mainly because just being able to predict diagnoses or outcomes is not enough. I have written about this myself in this blog over the years. Whether we are talking about predictive analytics, next-generation data science, or AI, no matter how sophisticated our models or compelling our predictive abilities, we must demonstrate how these systems impact outcomes, whether improved patient health or healthcare system processes.

How do we demonstrate the value of AI in health and healthcare? First, we must implement these systems in the real world. There is a great deal being written about the promise and challenges of implementing AI in clinical settings.(Hightower, 2023) But even implementing AI in the real world is not enough. We must also demonstrate that AI can lead to better outcomes, whether improved health or treatment of disease for patients or better delivery of healthcare services. One way to think about this is the continuum of translational research. As with all biomedical advances, we start with the basic science, demonstrating value in the laboratory, which in this case means the use of curated data sets. The next step is to implement systems in real-world healthcare or community settings. Clearly these are complex interventions.

Ultimately, however, we must demonstrate experimentally that health or healthcare is improved by the AI intervention. The best experimental evidence comes from controlled experiments, ideally RCTs. And granted, such trials may be more complicated than the classic RCT of comparing a medication versus a placebo. These RCTs may involve complex designs, and results may be difficult to interpret if the trial does not show benefit. But building the evidence base for AI is essential, and studies like this from Donzé et al. demonstrate that even the best predictive models may not translate into better outcomes.

References

Amarasingham, R., Moore, B.J., Tabak, Y.P., Drazner, M.H., Clark, C.A., Zhang, S., Reed, W.G., Swanson, T.S., Ma, Y., Halm, E.A., 2010. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care 48, 981–988.

Amarasingham, R., Patel, P.C., Toto, K., Nelson, L.L., Swanson, T.S., Moore, B.J., Xie, B., Zhang, S., Alvarez, K.S., Ma, Y., Drazner, M.H., Kollipara, U., Halm, E.A., 2013. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf 22, 998–1005.

Coleman, E.A., Parry, C., Chalmers, S., Min, S.-J., 2006. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med 166, 1822–1828.

Cram, P., Wachter, R.M., Landon, B.E., 2022. Readmission Reduction as a Hospital Quality Measure: Time to Move on to More Pressing Concerns? JAMA 328, 1589–1590.

Donzé, J., Aujesky, D., Williams, D., Schnipper, J.L., 2013. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med 173, 632–638.

Donzé, J., John, G., Genné, D., Mancinetti, M., Gouveia, A., Méan, M., Bütikofer, L., Aujesky, D., Schnipper, J., 2023. Effects of a Multimodal Transitional Care Intervention in Patients at High Risk of Readmission: The TARGET-READ Randomized Clinical Trial. JAMA Intern Med 183, 658–668.

Gildersleeve, R., Cooper, P., 2013. Development of an automated, real time surveillance tool for predicting readmissions at a community hospital. Appl Clin Inform 4, 153–169.

Hightower, M., 2023. Is Medicine Ready for AI? N Engl J Med 388, e49.

Jack, B.W., Chetty, V.K., Anthony, D., Greenwald, J.L., Sanchez, G.M., Johnson, A.E., Forsythe, S.R., O’Donnell, J.K., Paasche-Orlow, M.K., Manasseh, C., Martin, S., Culpepper, L., 2009. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med 150, 178–187.

Jencks, S.F., Williams, M.V., Coleman, E.A., 2009. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med 360, 1418–1428.

Naylor, M., Brooten, D., Jones, R., Lavizzo-Mourey, R., Mezey, M., Pauly, M., 1994. Comprehensive discharge planning for the hospitalized elderly. A randomized clinical trial. Ann Intern Med 120, 999–1006.

Ody, C., Msall, L., Dafny, L.S., Grabowski, D.C., Cutler, D.M., 2019. Decreases In Readmissions Credited To Medicare’s Program To Reduce Hospital Readmissions Have Been Overstated. Health Aff (Millwood) 38, 36–43.

Shadmi, E., Flaks-Manov, N., Hoshen, M., Goldman, O., Bitterman, H., Balicer, R.D., 2015. Predicting 30-day readmissions with preadmission electronic health record data. Med Care 53, 283–289.

Wachter, R.M., 2023. The Disappointing Impact of Interventions to Prevent Hospital Readmissions. JAMA Intern Med 183, 668–669.

Zuckerman, R.B., Sheingold, S.H., Orav, E.J., Ruhter, J., Epstein, A.M., 2016. Readmissions, Observation, and the Hospital Readmissions Reduction Program. N Engl J Med 374, 1543–1551.

Monday, June 12, 2023

Please Share Your Slides When You Start Your Talk

I know that some people lament death by PowerPoint, but I find a speaker's slides generally valuable as both a listener and a speaker. Yes, many people (including me!) could do better with slides, but I find them useful. As a speaker, they help me keep track of everything I want to say and drive home some points with visuals or reference lists. As a listener, I can keep track of what is being talked about, pursue further any references I might be interested in, and view graphics that often bring home points that words (especially spoken-only ones) do not.

My general practice when giving a talk is to post a PDF of my slides to my Web site, often with a list of references cited in the talk, and then tweet out a link to the PDF. I also send the PDF to the sponsor of my talk for distribution to those attending. That way, those listening to me do not need to snap pictures with their phones or scribble notes when they want to remember something on one of my slides.

Based on my own practices, I find it frustrating when speakers do not share their slides. Sometimes they provide them later, which is still far from ideal, but some speakers never provide them at all. When this happens, you see many listeners in the room taking pictures of slides they want to remember.

It would be much easier if all speakers would share at least a PDF of their slides right at the start of their talks. There really cannot be anything so proprietary, especially in an academic talk, that it cannot be shared, particularly when those in the audience are snapping pictures and, as is sometimes the case, posting the photos to social media.

I hope that speakers I will be listening to in the future will adopt this advice. I certainly plan to keep my present practice in this regard.

Tuesday, June 6, 2023

Reports of My Retirement Are Greatly Exaggerated

As most of my world knows, I stepped down as Chair of the Department of Medical Informatics & Clinical Epidemiology (DMICE) at Oregon Health & Science University (OHSU) at the end of last year. In addition to announcements made by OHSU, I let my world know with postings in this blog, on social media, and via some email lists.

I was gratified by the many nice accolades that I received, but one thing was unsettling: those who wrote to congratulate me on my retirement. It was unsettling because I wanted to make explicit that I was not retiring, but instead was passing on the leadership to others, initially to interim leaders from the department and eventually culminating in a search for a new permanent Chair. In the meantime, I would continue the teaching, research, writing, and mentoring activities that I enjoy so much.

Nearly a half-year into my new status, I am indeed enjoying it. I still wake up each work day and get to do the intellectual work of informatics that I love most. My circumstance brings to mind a famous quote attributed to Samuel Clemens (Mark Twain), though one he probably never actually said, that reports of his death were greatly exaggerated.

I am therefore pleased to state that reports of my retirement are indeed exaggerated. I hope to continue working and being productive in the field for many more years. In fact, my plate is as full as ever, and my main work going forward will continue to be the teaching, research, and funded grant projects that currently keep me busy.

My "plate" includes, among other things, the following major activities:

As can be seen, I have plenty to keep me busy for some time to come. I plan to continue my work on all of the above while supporting the new leadership of the department as it evolves in novel and interesting directions.

Monday, May 22, 2023

It's All Informatics to Me: Data Analytics, Data Science, Digital Health, AI, and Whatever is Next

In the last decade or so, a number of new names have emerged for the kinds of activities historically studied, disseminated, and taught by those who work in biomedical and health informatics. Each of these areas has emerged as a "hot topic" in biomedicine and healthcare, with resulting academic funding opportunities, new courses or degree programs, and even academic centers or institutes bearing their names.

I suppose I have some skin in this game because I have spent my life's work among those developing the field that I consider to be the over-arching term for all of this, biomedical and health informatics. My concern with the new areas and their names has been that they often ignore the historical and current work of informatics, and with it the decades of research, development, implementation, and evaluation of methods and tools that aim to improve health and healthcare through the use of data and information.

Another ramification of this proliferation of terms is that those not directly working in the field, who may include the leaders of academic and other institutions, may not appreciate the historic role of informatics. Thus I write this post to provide my elaboration of the problem.

One early new term that started this trend about a decade ago was data analytics. As large amounts of digital data accumulated with the growth of the Internet, the large-scale implementation of electronic health records, and the emergence of wearable and mobile devices, there were new sources of data that could be analyzed for interesting discoveries. From this came the proliferation of academic courses, certificate/degree programs, and centers/institutes devoted to healthcare (and related areas) data analytics.

With the proliferation of machine learning, deep learning, and other methods emerged the new discipline of data science. Again, this was followed by the proliferation of academic courses, certificate/degree programs, and centers/institutes focused on data science. One thoughtful perspective on the relationship between informatics and data science was a paper by Payne et al. in JAMIA Open in 2018, which showed there was not only substantial overlap but also aspects of informatics that go beyond data science. Informatics includes implementation, evaluation, human-computer interaction, and workflow, to name a few. To use the language of the paper, these were the "efferent" processes of applying knowledge discovered from the "afferent" processes of data science (and informatics) to the real world.

In the meantime, especially with the growth of telehealth during the COVID-19 pandemic, there emerged a new term, digital health. Similar to data science, there was substantial overlap between digital health and informatics, with that overlap focused for many on the efferent processes described by Payne et al. However, for many, the work of digital health is really no different from what informatics has historically done and continues to do.

And now there is the new term on the block, which is actually not a new term at all but a re-emergence of an area that was the focus of informatics a half-century ago. This term is of course artificial intelligence (AI). Clearly the advances of modern AI are very real and important. The new AI will likely have profound impacts on biomedicine and health, although we also need to see real-world implementation and evaluation to know what truly works.

My goal here is not to discount any of the important advances that have emerged under the guise of data analytics, data science, digital health, or AI. Rather, it is to express my concern that plenty of people tout these new disciplines, sometimes in ignorance of the historic role that informatics has played in them all. I understand that sometimes funders, institutional leaders, and others want to put their resources into things that are "new." But I also am disappointed when these new areas re-invent the methods and tools of informatics.

The best of informatics imposes a rigor of thinking that these newer disciplines do not always apply. Informatics is driven by problems in the domains of health, healthcare, public health, and clinical research. Informatics imposes health and clinical realism on systems as they are developed, implemented, and evaluated. I hope that these new areas, and whatever emerges next, will not ignore the lessons learned by informatics but will incorporate them into their work.

Tuesday, April 11, 2023

Learning Data Science and Machine Learning with Orange

I have written over the years that understanding data science (DS), machine learning (ML), artificial intelligence (AI), and related topics is essential for anyone who works in a profession where tools using them might be employed. This is certainly the case in biomedicine, both in clinical practice and research. In the same sense that all physicians should have basic knowledge of pharmacology, radiology, and other topics, even if they are not experts in the science and methods of those diverse disciplines, all in healthcare should be ready for ML and AI tools coming to their work.

Of course, learning to apply ML and AI has historically required a background in math and computer programming that most in healthcare do not have. Fortunately, a number of "no-code" DS and ML tools have emerged in recent years. In some ways they are analogous to statistical packages, which allow anyone who knows how to use them to load data and run statistical tests. Of course, that does not mean that the operator of statistical software will apply or interpret the results of their analyses correctly, which is especially important for large or complex data sets where appropriate statistical expertise is essential.

One no-code ML tool that I find particularly easy to understand and use is Orange Data Mining. Orange was developed at the University of Ljubljana in Slovenia and is an open-source project with a large community supporting it. It runs on Mac and Windows machines, for which binary packages can easily be downloaded and installed. Orange is programmed in Python and, as such, can be extended with Python code. It comes with sample data sets and allows easy loading of new ones. The original focus of Orange was structured data and the machine learning models that operate on it, including the main ML tasks of classification (e.g., logistic regression, kNN, and neural networks) and regression (e.g., linear regression, random forest, and neural networks). There are also methods for visualizing data and evaluating models; the latter includes techniques such as training/test splits and cross-validation, which are easy to configure. There are also various methods for imputation of missing data. In addition, modules have been developed for image analysis and text mining.

Orange is used by dragging widgets that represent data sets, visualization methods, models, and more onto a canvas called a workflow. Data flows between the widgets, which are connected by simply dragging from one to another. Once one becomes facile in dragging and connecting widgets, workflows can be built up quickly and easily. They can also be saved and reloaded later.

One can start with one of the built-in data sets or use the File widget to load one's own data. Most common rectangular data formats are accepted, such as CSV, tab-delimited, and Excel files. Included among the sample data sets is one for heart disease. The data itself is old (Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J., Sandhu, S., Guppy, K., Lee, S., & Froelicher, V. International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology, 1989, 64, 304-310.) and small (303 records with 13 variables), but the data set offers a good way to get started with Orange using clinical data. The outcome variable (called the target variable in Orange) is coronary artery diameter narrowing and the predictor variables include gender, chest pain, serum cholesterol, systolic blood pressure, and more.
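
For those who want to peek under the hood, the same data set can also be explored from Orange's Python scripting layer. The short sketch below assumes a standard Orange 3.x installation (the scripting API has shifted a bit across releases); it simply loads the bundled heart disease data and prints the target and predictor variables:

import Orange

# Load the heart disease sample data set that ships with Orange
data = Orange.data.Table("heart_disease")

print(len(data), "records")                       # 303 records
print("Target variable:", data.domain.class_var)  # coronary artery diameter narrowing
print("Predictor variables:")
for attr in data.domain.attributes:               # gender, chest pain, cholesterol, ...
    print("  ", attr.name)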

The screenshot below shows a workflow I built using the heart disease data set. As in most data science analyses, one can begin with methods to explore and visualize the data. I connected a number of widgets that allow the data to be viewed in a data table and visualized in the form of feature statistics (also shown below), distributions, and box plots. I also created some models using the Test and Score widget connected to a number of different models appropriate for classification, in this case logistic regression, naive Bayes, and a neural network. One can view the classification accuracy, F1 score, and other evaluation metrics (also shown below) and connect the results to widgets that compare ROC curves and generate confusion matrices for each model. (Perhaps not surprisingly, logistic regression works best!) Likewise, I created some models for regression with ST-segment elevation as the outcome variable, using linear regression, random forest, and a neural network, which can also be seen on the workflow.
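
The Test and Score portion of the workflow can be approximated in a few lines of scripting as well. This is only a rough sketch under the same Orange 3.x assumption (the CrossValidation call pattern has changed across releases, so treat it as illustrative rather than definitive), and I substitute a random forest for the neural network because I am more certain of its scripting-layer name:

import Orange

data = Orange.data.Table("heart_disease")

# Classifiers comparable to the widgets on the canvas (random forest
# standing in here for the neural network used in the canvas workflow)
learners = [
    Orange.classification.LogisticRegressionLearner(),
    Orange.classification.NaiveBayesLearner(),
    Orange.classification.RandomForestLearner(),
]

# 10-fold cross-validation, mirroring the Test and Score widget's default
cv = Orange.evaluation.CrossValidation(k=10)
results = cv(data, learners)

# Classification accuracy and area under the ROC curve for each learner
for learner, ca, auc in zip(learners,
                            Orange.evaluation.CA(results),
                            Orange.evaluation.AUC(results)):
    print(f"{type(learner).__name__}: CA={ca:.3f}  AUC={auc:.3f}")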

It is fairly easy to get started using Orange. A set of video tutorials on the Orange YouTube channel covers the basics as well as a number of advanced topics. My colleague Dr. Robert Hoyt and his collaborator, Dr. David Pashtikoff, have been developing workshops and other learning materials using Orange, available at a Web site they have developed called NoCode Data Science. They plan to publish a textbook, and setting up a login on the site allows access to a number of additional data sets that can be loaded into Orange.

As with statistical packages, just because one can load data into Orange and apply ML models does not mean that one is doing so correctly. On the other hand, my simple workflow just scratches the surface of what can be done with Orange, even for this tiny data set.

There are limitations of Orange as a data mining tool. I do not know how scalable it is as data sets get large (all data is kept in computer RAM), and of course any serious ML analysis requires more than the ability to just plug data into models. Nonetheless, Orange can be a great tool for learning machine learning without getting bogged down in coding, debugging, and the like. I see a great future for it in teaching ML as the reach of AI advances.

Friday, March 17, 2023

COVID-19, Three Years On

This month marks three years since the start of lockdowns in the United States for the COVID-19 pandemic. As I imagine is true for most people, few upheavals in my life have had as profound and sustained an impact as three years of the pandemic. The good news is that it seems to be receding to endemic status, and while SARS-CoV-2 is still a threat to many, life is mostly getting back to some semblance of normal.

Looking back, the year 2020 began like most others. In January, I traveled to Singapore, as I usually did in that month each year, to culminate an offering of the 10x10 course. Little did I realize that would be my last international trip for a year and a half. My final trip prior to the pandemic was a short jaunt to Salt Lake City to give Biomedical Informatics Grand Rounds at the University of Utah. By then, the crowds on flights and at airports were thinning out. I had decided right before leaving for Salt Lake City to cancel a planned trip to the annual HIMSS Conference in Orlando the following week. Within a few days, that conference was cancelled. I remember the flight back from Salt Lake City, wondering to myself how long it would be before I got on an airplane again. I never would have believed it would be about a year.

I still remember the dark, early days of the pandemic. Such a lockdown of society was unlike anything I had ever experienced, and our ignorance of the novel SARS-CoV-2 virus was unsettling. All of a sudden, work profoundly transformed from being based in the office to being at home. Fortunately the presence of video tools such as Zoom and WebEx, along with social media, enabled most of us to stay in touch with friends, family, and colleagues. I initially thought the lockdown would be much like the original SARS epidemic of 2003, lasting a few months. I never would have believed that mostly-remote work would continue to now, three years later.

After three years, the world is returning to some semblance of normal. In a few months, the COVID-19 emergency in the US will officially end. I am comfortable with this decision, although the end of the emergency does not mean that the threat of this virus, or of new ones that may emerge, is not real. Just as we have ongoing risk from major killers such as cancer and heart disease, which we can lessen through public health measures, the risk of COVID-19 will continue, and the end of the public health emergency does not mean "COVID is over."

I prefer to get my information from medical journals and reputable experts. Recent perspective pieces in New England Journal of Medicine and Nature summarize the situation well. I have been a follower of Dr. Eric Topol's tweeting and now Substack during the pandemic, and a recent perspective from him is enlightening. And the Washington Post had a good op-ed asking a number of COVID-19 experts how they are adapting to the current situation.

My view is that now that COVID-19 is receding to endemic status, we can revisit our approaches to the virus. The virus will be with us going forward and will continue to be a major cause of death and disability. But I am comfortable with moving to an individual risk assessment approach. I still take precautions. While I no longer routinely mask in public, including indoor settings, I always carry a mask and use it sometimes in crowded indoor settings. I always mask when someone else asks me to do so. I am up to date on vaccines, having had five, including the bivalent booster (as well as one mild natural infection in 2022).

Probably the saddest aspect of the COVID-19 pandemic is the politicization of public health. I get that there are trade-offs in the world between personal liberty and the public good. And I believe an honest re-assessment of what we thought we did right or wrong early in the pandemic is not a bad idea. Clearly we were flying blind in those dark early days. While I do not believe that the early lockdowns and mask requirements were unreasonable at the time, we could have an honest re-assessment now of what the best approach is. At the same time, we can never forget that over a million US lives have been lost and many more disrupted by the virus. Those who pushed unproven treatments and approaches like "let 'er rip" herd immunity strategies likewise need to be called out for their wrong ways.

The threat of COVID-19 is still very real. It remains the third-leading cause of death in the US. It creates a substantial risk for those who are vulnerable, such as those who are immunocompromised or elderly. Data from the CDC and around the world show that while the vaccine does not eradicate the virus or completely prevent its transmission, it does reduce the risk of death and hospitalization, especially for those at most risk.

By the same token, I feel comfortable going mostly maskless, including on airplanes, going to the gym, and at conferences and other public venues. I know that I could again get infected by SARS-CoV-2, but I believe my risk of serious illness is small. I like to think of myself as a COVID "tweener," taking the virus seriously but comfortable returning to relatively normal living. I am ready to return to more aggressive protection should pandemic status recur.

Monday, February 6, 2023

Who Should Program?

Like many who work with computers and other information technology, I enjoy programming. I got my start in programming when I was first exposed to computers in high school in the 1970s, when almost all you could do with computers required programming. Like many at the time, the first programming language I learned was BASIC. I subsequently cycled through many of the common programming languages of the 1970s and 1980s, including PL/1, Pascal, and C (although, for whatever reason, never learned FORTRAN or COBOL). I even dabbled some in assembly language during my medical informatics postdoc fellowship in the late 1980s. Once I started my biomedical informatics academic faculty position in 1990, I began to follow the lead on programming languages from students and other trainees, through Perl, Python, R, and others. 

But as I ascended the ranks of academic leadership, my time for programming diminished. I still have periodic bursts of learning and programming now, but do not have the sustained time available to be productive with them. In some ways, programming is akin to the practice of medicine for clinicians who work in academic informatics - one may be able to keep up with the knowledge of clinical medicine, or even master access to knowledge through modern tools - but still the practice of medicine requires a critical mass of time commitment for one to remain an effective clinician. I learned this in the early 2000s when my informatics work pretty much crowded out my ability to effectively provide patient care, especially trying to practice as a primary care internist.

Interestingly, I sometimes get into discussions with physicians who are training in informatics and desire to commit a great deal of time and effort to programming. Some of them can pull it off, but as I note to many, they are probably too expensive to be programmers (even as salaries for high-end programmers are reaching the lower strata of physician salaries). Furthermore, physicians have knowledge and perspectives that may be most useful for tasks other than programming, such as conceptual thinking and clinical leadership.

It is actually questionable whether programming is best left to academics, including biomedical researchers. One recent analysis looked at a repository of replication data sets and code published in the Harvard Dataverse Repository between 2010 and 2020 (1). It found that about three-quarters of R programs failed to execute correctly, although there was some improvement when code-cleaning was applied.

Another challenge is that code is sometimes maintained by a small number of people, or even just one person, the latter being the case for a particle physicist who was the sole maintainer of a 1980s program and has now retired (2). Some have expressed concern for continued progress in that field. Previous work found similar problems in analyses in computational chemistry (3) and functional MRI (4). And many geneticists have had to deal with idiosyncrasies such as the propensity of Microsoft Excel to change gene names to dates (e.g., MARCH1 to 1-Mar) (5).

By the way, I am in no way advocating that modern-day higher-education students, especially in science fields, not learn computer programming as part of their education. Programming is an important skill and discipline, facilitating critical and logical thinking. As computers play an important role in so many knowledge-based disciplines in modern times, it is critical to de-mystify them and allow them to be used most effectively in all scientific disciplines.

Another important development over the years has been the emergence of tools that allow management and manipulation of data with minimal amounts of programming. Probably the earliest example of this was the spreadsheet. Also of note are statistical software packages that allow one to run a myriad of statistical tests with little or no programming. Of course, in both of these examples, one must still understand their data and appropriate inferences drawn from it. Another more recent example of this is packages such as Orange, which provide a visual programming interface to many data science and machine learning methods. I will have more to say about Orange in a future post.

We can also gain insight from the recent development of ChatGPT. One prominent early use has been asking it to write code for specific functions in different languages. While ChatGPT does produce code that looks like what is being asked for, programming support sites like Stack Overflow have banned postings of such code because, although it looks correct, there are often subtle errors or other problems (6).

It is clear that there will always be a role for learning and exploring with computer programming. But given the complexity of modern software, and the need for programs to be robust, correct, and secure, those who enjoy programming should be realistic about the commitment required to carry it out at "industrial-strength" levels.

References

1. Trisovic, A., Lau, M.K., Pasquier, T., Crosas, M., 2022. A large-scale study on research code quality and execution. Sci Data 9, 60. https://doi.org/10.1038/s41597-022-01143-6

2. von Hippel, M., 2022. Crucial Computer Program for Particle Physics at Risk of Obsolescence. Quanta Magazine. https://www.quantamagazine.org/crucial-computer-program-for-particle-physics-at-risk-of-obsolescence-20221201/

3. Hirschler, B., 2014. Exclusive: Software issue casts doubt over data used to approve some drugs. Reuters. http://finance.yahoo.com/news/exclusive-software-issue-casts-doubt-over-data-used-121215317--finance.html

4. Eklund, A., Nichols, T.E., Knutsson, H., 2016. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proc Natl Acad Sci U S A 113, 7900–7905. https://doi.org/10.1073/pnas.1602413113

5. Vincent, J., 2020. Scientists rename human genes to stop Microsoft Excel from misreading them as dates [WWW Document]. The Verge. https://www.theverge.com/2020/8/6/21355674/human-genes-rename-microsoft-excel-misreading-dates

6. Vincent, J., 2022. AI-generated answers temporarily banned on coding Q&A site Stack Overflow. The Verge. https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers

Wednesday, January 25, 2023

ChatGPT Confabulates and is Unprofessional

Since my own explorations of ChatGPT's informatics knowledge last month, others using the system have racked up some impressive results. Most pertinent to biomedical informatics are ChatGPT's passing grades on medical student licensing exams. It has also performed well on Wharton MBA and multistate bar exams.

Does this mean that machine intelligence has equaled human professionals? My subsequent encounters with ChatGPT lead me to maintain my skepticism. Not that ChatGPT is not a tremendous feat, but it is not (yet, anyway) capable of performing as a physician, lawyer, or MBA.

This conclusion is evidenced by another recent interaction I had with ChatGPT, the transcript of which is at the bottom of this post. This interaction mirrors characteristics that many others have noted. One property that many find is ChatGPT's tendency to confabulate. As seen in my latest interaction, it states things that are simply false. Not that humans do not do this, but no one would rely on a physician, lawyer, or MBA known to flagrantly lie. As such, ChatGPT fails at one core task of being a competent professional in a given field, where there is an ethical responsibility to recognize and state when you do not know something. The best physician is one who knows the limits of his or her knowledge and skills. (Of course, ChatGPT lacks many other capabilities of a physician, from technical skill to empathy.)

This leads me to a thought experiment, perhaps somewhat biased by the fact that I am an information retrieval (IR) researcher. If you were confronted with a highly challenging intellectual task, which would you use first, Google (or another search system, maybe PubMed for medicine) or ChatGPT? In my mind, an IR system is much more appealing, given what IR systems provide by definition, which is pointers to information that may answer questions. Google is far from a perfect search engine, but when addressing a complicated intellectual task, I would much rather take my chances with IR system output that has primary sources listed and accessible than with the narrative without citations that comes out of ChatGPT.

I have no doubt that systems like ChatGPT will continue to advance, and that human knowledge workers will find ways to use them beneficially in their work. But I cannot imagine there not being a human in the loop somewhere, especially in medicine when you might need the skills of a surgeon, not to mention a physician with empathy and grace. I am also reminded of the quote from my colleague, radiologist Curtis Langlotz, probably speaking for all of medicine, who said, "'Will AI replace radiologists?' is the wrong question. The right answer is: Radiologists who use AI will replace radiologists who don’t."

Here is the transcript from my most recent interaction with ChatGPT, with my annotations in italics. I would also note, perhaps in a blow to my own ego, that ChatGPT's answer does not include any of my own IR writings, such as the recent edition of my biomedical IR textbook, which arguably are important to this field.

I entered into ChatGPT: What are some good readings for biomedical information retrieval?

There are many resources available for learning about biomedical information retrieval, including books, articles, and online courses. Here are a few suggestions:

"Introduction to Biomedical Information Retrieval" by C. J. van Rijsbergen is a classic textbook that provides a comprehensive introduction to the field.

Keith van Rijsbergen is a retired computer science professor and IR researcher who has written several books but not one called "Introduction to Biomedical Information Retrieval."

"Information Retrieval in Biomedicine" by Peter Willett is another comprehensive textbook that covers the basics of biomedical information retrieval as well as advanced topics.

Peter Willett is a retired computer science professor and IR researcher who has written several books but not one called "Information Retrieval in Biomedicine."

"Medical Information Retrieval" by J. V. McInerney is a book that provides an overview of the field and covers various topics such as search engines, natural language processing, and data mining.

I am not able to find any book titled "Medical Information Retrieval" or anyone in the IR or informatics field named J. V. McInerney.

"Biomedical Information Retrieval" by J. S. Ogasawara and M. R. Al-Rawi is a book that covers various aspects of biomedical information retrieval, including text mining, ontologies, and evaluation.

I am not able to find any book titled "Biomedical Information Retrieval" or anyone in the IR or informatics fields named J. S. Ogasawara or M. R. Al-Rawi.

"Biomedical Text Mining" by Haifeng Li and Linyun Fu is a book that covers the use of text mining techniques for extracting and analyzing information from biomedical literature.

There is a book named "Biomedical Text Mining," but it does not have authors Haifeng Li or Linyun Fu, neither of whom works in IR or informatics.

The Journal of Biomedical Informatics is a peer-reviewed journal that publishes research on various aspects of biomedical information retrieval.

This is true!

The International Conference on Medical Information Retrieval is an annual conference that brings together researchers and practitioners in the field to present and discuss their work.

I am not aware of any such conference by this name.

I hope these suggestions are helpful! Let me know if you have any other questions.

I will!