Tuesday, February 26, 2013
This week has been a milestone for this blog, with the number of page views surpassing the 100,000 mark since its inception in March, 2009. This achievement gives me a chance to reflect on my use of social media, which seems to differ from that of others. Maybe social media is just like anything else in life, with different people preferring different aspects and uses of it.
Clearly I have enjoyed being a blogger. This blog has provided me with a nice platform from which to share my thoughts and views with a worldwide audience. As I have noted before, I am not a stream-of-consciousness blogger who feels compelled to post continuously, such as every day. Rather, I prefer that my postings carefully reflect thoughts and ideas on specific topics.
Another social media activity I enjoy is Facebook. I have three main networks on Facebook: my professional colleagues, my family and friends, and my high school classmates. I enjoy seeing these networks interact with my personal and professional life. Facebook is also a great medium for sharing and annotating photos and other digital artifacts.
Two social media activities I personally find less valuable are Twitter and LinkedIn. I know this puts me at odds with some dear friends and colleagues. However, tweets just seem too short (I often have more to say than can be expressed in 140 characters!) and too fleeting (it seems you either catch something in the Twitter stream or never see it again) to sustain my interest. Sometimes I try to join the Twitter dialogue at conferences, but I soon find it distracting from otherwise participating in the meeting (whose primary value is usually the direct personal interaction). I have the most fun with Twitter when I use it to editorialize about presentations at those meetings, but I find it difficult to sustain any sort of dialogue when doing that.
As for LinkedIn, while I am sure it is highly valuable for some people, I find that my major interaction with it is to receive requests for connections and endorsements. I am happy to connect with anyone on LinkedIn, but I have yet to find value in the hundreds of connections I have made. I also do not like to make generic LinkedIn endorsements, instead preferring to serve as real references for colleagues and current or former students when they need it for specific opportunities.
I know that other people have different preferences for social media, and perhaps my own preferences will change over time. And of course, it is likely that the social media tools and sites will change over time, or that new ones will emerge. For now, however, I will keep blogging and Facebooking while still trying to determine the value of other social media.
Sunday, February 24, 2013
Data Mining Systems Improve Cost and Quality of Healthcare - Or Do They?
Several email lists I am on were abuzz last week about the publication of a paper that was described in a press release from Indiana University as demonstrating that "machine learning -- the same computer science discipline that helped create voice recognition systems, self-driving cars and credit card fraud detection systems -- can drastically improve both the cost and quality of health care in the United States." The press release referred to a study published by an Indiana faculty member in the journal Artificial Intelligence in Medicine [1].
While I am a proponent of computer applications that aim to improve the quality and cost of healthcare, I also believe we must be careful about the claims made for them, especially claims derived from the results of scientific research.
After reading and analyzing the paper, I am skeptical of the claims made not only by the press release but also by the authors themselves. My concern lies less with their research methods, although I have some serious qualms about them that I will describe below, than with the press release issued by their university public relations office. Furthermore, as always seems to happen when technology is hyped, the press release was picked up and echoed across the Internet, followed by the inevitable conflation of its findings. Sure enough, one high-profile blogger wrote, "physicians who used an AI framework to make patient care decisions had patient outcomes that were 50 percent better than physicians who did not use AI." It is clear from the paper that physicians did not actually use such a framework, which was only applied retrospectively to clinical data.
What exactly did the study show? Basically, the researchers obtained a small data set for one clinical condition from one institution's electronic health record and applied some complex data mining techniques to show that lower cost and better outcomes could have been achieved by following the options suggested by the machine learning algorithm instead of what the clinicians actually did. The claim, therefore, is that if clinicians had followed the data mining output instead of their own decision-making, then better and cheaper care would have ensued.
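To make the retrospective nature of this kind of evaluation concrete, here is a minimal sketch in Python of comparing a cost-minimizing Markov decision process (MDP) policy against an observed clinician policy, in the general spirit of the paper's approach (reference [1] describes a Markov decision process framework). Everything here, including the states, actions, costs, and transition probabilities, is synthetic and invented for illustration; none of it comes from the paper's model or data.

```python
# Minimal sketch: comparing an MDP-derived policy to an observed clinician
# policy on synthetic data. All numbers are invented for illustration; the
# paper's actual states, actions, costs, and transitions are not public.
import numpy as np

n_states, n_actions = 4, 2          # e.g., 4 severity levels, 2 treatment options
rng = np.random.default_rng(0)

# Synthetic transition probabilities P[a, s, s'] and per-step costs C[a, s]
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
C = rng.uniform(50, 300, size=(n_actions, n_states))   # "dollars" per visit
gamma = 0.9                                            # discount factor

def evaluate(policy, iters=500):
    """Expected discounted cost per state when always following `policy`."""
    V = np.zeros(n_states)
    for _ in range(iters):
        V = np.array([C[policy[s], s] + gamma * P[policy[s], s] @ V
                      for s in range(n_states)])
    return V

# Value iteration to obtain the cost-minimizing ("AI") policy
V = np.zeros(n_states)
for _ in range(500):
    Q = C + gamma * np.einsum('asn,n->as', P, V)   # Q[a, s]
    V = Q.min(axis=0)
ai_policy = Q.argmin(axis=0)

# An "observed" clinician policy, here simply chosen at random for illustration
clinician_policy = rng.integers(0, n_actions, size=n_states)

print("Expected cost, AI policy:       ", evaluate(ai_policy).mean().round(1))
print("Expected cost, clinician policy:", evaluate(clinician_policy).mean().round(1))
```

The point of the sketch is that both policies are only ever evaluated inside the model; no clinician ever acts on the algorithm's suggestions, which is why results of this kind cannot be equated with a prospective comparison.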
Like many scientific papers about technology, this one goes into exquisite detail about the data mining algorithms and the experiments comparing them. But the paper unfortunately provides very little description of the clinical data itself. There is a reference to another paper from a conference that appears to describe the data set [2], but it is still not clear how the data were applied to evaluate the algorithms.
I have a number of methodological problems with the paper. First is the paucity of clinical detail about the data. The authors refer to a metric called the "outcomes rating scale" of the "client-directed outcome informed (CDOI) assessment." No details are provided as to exactly what this scale measures or how differences in its measurement correlate with improved clinical outcomes. Furthermore, the variables describing the details of the care that the data mining algorithm supposedly outperforms are not described either. Therefore anyone hoping to understand the clinical value this approach is claimed to provide is unable to do so.
A second problem is that there is no discussion of the cost data or of what cost perspective (e.g., system, clinician, societal) is taken. This is a common problem that plagues many studies in healthcare that attempt to measure costs [3]. Given the relatively modest amounts of money spent on the care reported in their results, amounting to only a few hundred dollars per patient, it is unlikely that the data include the full cost of treatment for each patient, or costs over an appropriate time period. If my interpretation of the low cost figures is correct (which is difficult to discern from reading the paper, again due to the lack of details), the data do not include the cost of clinician time, facilities, or longer-term costs beyond the time frame of the data set. If that is indeed the case, then it is particularly problematic for a machine learning system, since such systems make inferences limited to the data provided to the model. Therefore if poor data is provided to the model, its "conclusions" are suspect. (This raises a side issue as to whether there is truly "artificial intelligence" here, since the only intelligence applied by the system is the models developed by their human creators.)
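A small, entirely hypothetical numerical example illustrates why the cost perspective matters so much: the option that looks cheaper in the recorded data can flip once costs that were never captured in the data set are added back in.

```python
# Hypothetical illustration of the cost-perspective problem: with only partial
# costs in the data, the "cheaper" option can flip once omitted costs
# (clinician time, facilities, downstream care) are added back in.
options = {
    #            recorded-in-dataset  clinician time   downstream care
    "option_A": {"recorded": 180.0,   "time": 40.0,    "downstream": 300.0},
    "option_B": {"recorded": 240.0,   "time": 35.0,    "downstream": 120.0},
}

for name, c in options.items():
    partial = c["recorded"]
    full = sum(c.values())
    print(f"{name}: partial cost = ${partial:.0f}, full cost = ${full:.0f}")

# A model trained only on the recorded costs would prefer option_A ($180 < $240),
# while the fuller accounting favors option_B ($395 < $520).
```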
A third concern is that this is a modeling study. As every evaluation methodologist knows, modeling studies are limited in their ability to assign cause and effect. There is certainly a role in informatics science for modeling studies, although we saw recently that such studies have their limits, especially when revisited over the long run. In this study, there may have been good reasons why the clinicians followed the more expensive path, or confounding reasons why such patients had worse outcomes, but these cannot be captured by the approach used here.
This is related to the final and most serious problem of the work, which is that the modeling evaluation is a very weak form of evidence for the value of an intervention. If the authors truly wanted to show the benefits of the system and approach they developed, they should have performed a randomized controlled trial comparing their intervention with an appropriate control group. That would have produced the type of study the blogger quoted above erroneously described this one to be. Such a study design would also address some of the more vexing problems we face in informatics, such as whether advice coming from a computer will change clinician behavior, or whether, when such systems are introduced into the "real world," the "advice" provided will prospectively lead to better outcomes.
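For illustration, here is a sketch of what that stronger form of evidence might look like: a (here simulated) two-arm randomized trial in which patients receive care with or without the decision-support intervention and outcomes are compared directly. The sample size and outcome rates below are invented purely for illustration.

```python
# Hedged sketch of a simulated two-arm randomized trial comparing a
# decision-support arm with a usual-care control arm. All parameters
# (sample size, outcome rates) are hypothetical.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
n = 500                                   # patients per arm (hypothetical)
p_control, p_intervention = 0.30, 0.38    # hypothetical "good outcome" rates

control = rng.binomial(1, p_control, n)
intervention = rng.binomial(1, p_intervention, n)

# Two-proportion z-test on the simulated trial results
p1, p2 = intervention.mean(), control.mean()
p_pool = (intervention.sum() + control.sum()) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail

print(f"Good outcomes: intervention {p1:.1%}, control {p2:.1%}, "
      f"z = {z:.2f}, p = {p_value:.3f}")
```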
I do believe that the kind of work addressed by this paper is important, especially as we move into the area of personalized medicine. As eloquently described by Stead and colleagues, healthcare will soon be reaching the point where the number of data points required for clinical decisions will exceed the bounds of human cognition [4]. (It probably already has.) Therefore clinicians will require aids to their cognition provided by information systems, perhaps one like that described in the study.
But such aids require, like everything else in medicine, robust evaluative research to demonstrate their value. The methods used in this paper may indeed turn out to provide that value, but the implementation and evaluation described miss the mark. That miss is further exacerbated by the hype and conflation that ensued after the paper was published.
What can we learn from this paper and its ensuing hype? First, bold claims require bold evidence to back them up. In the case of showing value for an approach in healthcare - be it a test, a treatment, or an informatics application - we must use evaluation methods that provide the best evidence for the claim. That is not always a randomized controlled trial, but in this situation it would be, and the modeling techniques used are really just preliminary data that might justify an actual clinical trial. Second, when we perform technology evaluation, we need to describe, and ideally release, all of the clinical data so that others can analyze and even replicate the results. Finally, while we all want to disseminate the results of our research to the widest possible audience, we need to be realistic in explaining what we accomplished and what its larger implications are.
References
[1] Bennett, C. and K. Hauser (2013). Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach. Artificial Intelligence in Medicine. Epub ahead of print.
[2] Bennett, C., T. Doub, A. Bragg, J. Luellen, C. VanRegenmorter, J. Lockman and R. Reiserer (2011). Data mining session-based patient reported outcomes (PROs) in a mental health setting: toward data-driven clinical decision support and personalized treatment. 2011 First IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology (HISB 2011), San Jose, CA. 229-236.
[3] Drummond, M. and M. Sculpher (2005). Common methodological flaws in economic evaluations. Medical Care. 43(7 Suppl): 5-14.
[4] Stead, W., J. Searle, H. Fessler, J. Smith and E. Shortliffe (2011). Biomedical informatics: changing what physicians need to know and how they learn. Academic Medicine. 86: 429-434.
Friday, February 22, 2013
AMIA Clinical Informatics Board Review Course Announced
This week, the American Medical Informatics Association (AMIA) released the details of the Clinical Informatics Board Review Course (CIBRC), for which I will be serving as Course Director. I am excited to see this come to fruition, and look forward not only to these course offerings but also to the expansion of the program as other certifications in informatics for non-physicians become reality in the years ahead.
The course will be offered four times in person before the first offering of the certification exam in October. There will also be an online version of the course that can be accessed in tandem with the live courses or taken alone. A bank of practice questions will also be made available. The four offerings of the course will be:
- April 12-14 - Bethesda, MD (registration has opened for this course and will soon be available for the others)
- June 7-9 - Philadelphia, PA
- August 9-11 - Portland, OR (I am thrilled to have one offering of the course in my own city!)
- September 7-9 - Rosemont (Chicago), IL
Among the details released by AMIA are:
- Overview of the course
- Course Design
- Faculty
- Frequently asked questions
- History of the course and subspecialty
For those with less knowledge of the field, a better approach might be to start with a course that more fundamentally builds their knowledge base. The 10x10 ("ten by ten") course is one option, although even it is only a single course and may not be enough for those with little or no formal training in the field. The next Oregon Health & Science University (OHSU) offering of its 10x10 course runs from April through July; registration is available on the AMIA Web site. For those desiring more training, a more comprehensive course of study leading to a Graduate Certificate or master's degree may be a better option. AMIA has a database of educational programs, which includes our program at OHSU.
The rollout of the course is an exciting event. For convenience, here is an index to some of my important previous postings on the clinical informatics subspecialty:
- Eligibility for the subspecialty (January, 2013)
- Some challenges for building capacity of the subspecialty (September, 2012)
- Mapping the OHSU informatics program content to the core content (July, 2012)
Wednesday, February 13, 2013
From HITECH to Accountable Care: A Student and Workforce Development Program Success Story
This year's State of the Union address by President Barack Obama noted that controlling the costs of healthcare, particularly of Medicare and Medicaid, is a critical element of addressing the government's debt problem, especially in the long run. A key element of this approach, as the President noted, will be new models of care delivery. As he stated, "medical bills shouldn’t be based on the number of tests ordered or days spent in the hospital; they should be based on the quality of care that our seniors receive." One means of operationalizing this approach is the development of accountable care organizations (ACOs) [1], whose goal is to achieve Dr. Donald Berwick's "triple aim" of improved health, improved healthcare delivery, and reduced cost [2].
To this end, President Obama invited Oregon's Governor, Dr. John Kitzhaber, to be present at his address. Dr. Kitzhaber's presence acknowledged his leadership and innovation in healthcare reform that is taking place in Oregon. With help from the Obama Administration, Oregon is revamping its entire Medicaid program under a new brand of ACOs known as Coordinated Care Organizations (CCOs). The Oregon CCOs will provide value-driven coordinated care for Oregon Medicaid patients in pursuit of the "triple aim." As with all ACOs, managing information, including that beyond the electronic health record (EHR), will be critical for the success of Oregon's CCOs. Indeed, a recent post by Dr. John Halamka posited that "ACO = HIE + analytics," a shorthand way of stating that ACOs (and CCOs) will require robust health information exchange (HIE) and data analytics.
The importance of health information technology (HIT) to accountable care was recognized in the Health Information Technology for Economic and Clinical Health (HITECH) Act when it was passed in 2009 [3]. Indeed, one of the original roles for HITECH was to serve as a "down payment" on healthcare reform [4].
I am pleased to report on one recent instance, small but significant, where HITECH funding has indeed resulted in an outcome that is helping healthcare reform and accountable care. It is the story of two individuals who pursued our University-Based Training (UBT) program and have both been hired by Health Share of Oregon, the CCO for the Portland, Oregon tri-county region.
Isolde Knaap worked in the State of Oregon's Department of Human Services for over 20 years in various positions as a data analyst, system developer, and research analyst. During that time, she built a wealth of experience managing data and systems in public health and child welfare. Aiming to advance her skills and career further into the HIT realm, she applied to and was accepted into the Oregon Health & Science University (OHSU) Graduate Certificate Program, funded by the UBT training grant. Her previous educational background includes a Bachelor of Arts in Modern Languages with a secondary teaching certificate.
After graduating from the program, Isolde was hired by Health Share of Oregon as a Senior IT Project Manager. Like all CCOs, Health Share of Oregon is tasked with improving health outcomes and reducing health costs through collaboration among previously disparate healthcare providers. Isolde's position will help align IT processes so that the care delivery transformation activities of Health Share of Oregon achieve the triple-aim goals while also achieving administrative simplification.
One of the courses in her program of study that Isolde found most useful was Introduction to Standards and Interoperability. This gave her an appreciation for the challenges of health information exchange, a critical function for CCOs, which share financial accountability for providing care for their patients. She noted that she was asked in various interviews what she knew about EHRs, and she could confidently reply that she had taken the courses Clinical Information Systems and Clinical Information Systems Laboratory, and later served as a teaching assistant in the latter. She also purposefully chose a practicum with a local Veterans Affairs hospital to get exposure to its VistA EHR (which was used in the Clinical Information Systems Laboratory course).
Charles Sorgie is another UBT graduate who has been hired by Health Share of Oregon, serving as a Senior IT Business Analyst. Charles has a long history of work in the IT field as a software developer, designer, and architect. He had no experience in the healthcare domain, but he had developed a documentation management system to support electronic design automation that was subsequently adopted by the aerospace industry to manage its maintenance documentation. More recently, he was involved in the process modeling and enterprise modeling fields, which led him to become interested in ways to organize and model the processes, resources, and work products involved in complex business interactions. His educational background includes a Master of Science in Computer Science.
In his position with Health Share of Oregon, Charles is involved in gathering stakeholder requirements and acting as a coordinator and liaison between those stakeholders and the IT teams tasked with deploying solutions to satisfy those requirements. He notes that the OHSU UBT program provided him with a basic understanding of the challenges in supporting clinical processes, their privacy and security requirements, their workflow, and the standards used to support that workflow (e.g., HL7). It also gave him background on the basic terminology used (e.g., protected health information, or PHI) as well as on the challenges surrounding the business processes that support those clinical processes. All of this helps him be more productive in his current position.
Both Isolde and Charles report to Daniel Dean, the Chief Information Officer (CIO) of Health Share of Oregon. Their connection to Mr. Dean was made possible by another resource provided by our UBT grant: Virginia Lankes, the Career Development Specialist whom the grant enabled us to hire. The connections Ms. Lankes had nurtured with Mr. Dean made him aware of our two students as they were completing their studies.
Isolde and Charles are thus examples of how the UBT program has developed the HIT workforce and how the HITECH program is contributing to healthcare reform. The two of them also provide a refreshing counterexample to the common adage that one must have a clinical background to succeed in biomedical and health informatics. As I have written, the work of informatics is shifting from implementation to data, and their experience and expertise in using data seem to have played an important role in their new positions.
The stories of Isolde and Charles provide more examples of how the UBT funding of our educational program has helped individuals advance their careers and added jobs to the economy. Their experiences build on the successful outcomes that other students in the program have had, as I documented in 2011 and 2012.
Postscript: Sure enough, the day after this posting, an article appeared in the New England Journal of Medicine, describing the Oregon CCO program [5]. It is a well-written overview and freely available, so I will add this postscript to provide this additional information.
References
1. Berwick, D. (2011). Making good on ACOs' promise--the final rule for the Medicare shared savings program. New England Journal of Medicine 365: 1753-1756.
2. Berwick, D., T. Nolan and J. Whittington (2008). The triple aim: care, health, and cost. Health Affairs 27: 759-769.
3. Blumenthal, D. (2010). Launching HITECH. New England Journal of Medicine 362: 382-385.
4. Blumenthal, D. (2009). Stimulating the adoption of health information technology. New England Journal of Medicine 360: 1477-1479.
5. Stecker, E. (2013). The Oregon ACO experiment — bold design, challenging execution. New England Journal of Medicine. Epub ahead of print.
Monday, February 11, 2013
The Informatics Professor on Video
Some video interviews of me have recently appeared on the Web, as part of some interesting and worthy sites.
The first video was an appearance on the Web-based show Healthcare IT Live!, which features a host of informatics experts from around the world. I took part in Healthcare IT Live! Episode #6. The interview covered a variety of topics, mostly related to my research and educational work in biomedical and health informatics.
I was also recently interviewed for a site called AskimoTV, which purports to provide access to experts, initially via video interviews and later via one-on-one consultations for those wanting them. My interview was done in three topical segments:
1. Biomedical and Health informatics: Improving healthcare through information
2. Barriers to Retrieving Patient Information from Electronic Health Record
3. Building A Health Informatics Workforce In Developing Countries
These interviews demonstrate the simple power of the Web and its ability to disseminate information in ways we could not have imagined a couple of decades ago.
Saturday, February 2, 2013
Marketing the OHSU Informatics Program
The Oregon Health & Science University (OHSU) Biomedical Informatics Graduate Program is undertaking a new marketing campaign. This campaign is, of course, a business decision, but I also find it a great opportunity to spread the word about our program as well as careers in the profession of informatics.
We are especially interested in touting the benefits of education and careers in informatics to younger people. The mid-career individuals who enter our program mostly know well the problems in healthcare and how informatics can address them. But we find that younger people, probably because of their more limited experience with the healthcare system (and its dysfunction), are not aware of how informatics can not only lead to a rewarding career but also benefit health, improve healthcare delivery, and advance basic and clinical research. There are also opportunities in bioinformatics and computational biology, of which they may actually be more aware. The graphic below shows our message.
As such, this marketing campaign is focused on younger people. That is not to say we do not want to continue to attract the more common mid-career healthcare or other professionals who enter the clinical informatics track of our program, but our marketing is aimed at those who are younger and probably have little knowledge of healthcare, biomedical research, or genomics. The campaign covers both of the major tracks in our program, clinical informatics and bioinformatics.
Another reason marketing our program is challenging is the complexity of its tracks, degrees and certificates, and other options.
An additional message we are promoting concerns the value of our program. We note that while our tuition is comparable to that of most other programs, we feature a large and world-renowned faculty, a track record of over 20 years, and nearly 500 alumni who have successfully obtained employment in a wide variety of settings.
The components of the marketing campaign include:
- A banner ad on Radiolab.org - This Web site, run by New York Public Radio, features podcasts on a variety of topics around science and the arts. In addition to having the banner ad click through to our Web site, our program is also acknowledged at the beginning and in the middle of each podcast (here is an example).
- Print ads in west coast college newspapers - Print ads are a challenge these days, as readership of college newspapers has declined along with all print media. But such ads do still reach our target audience, and they will be supplemented by our usual attendance at graduate school fairs and other events at these colleges. The ad we are running is shown above.
- Google AdWords campaign
- Ad placement on Web pages via Quantcast
- Ongoing Webinars - I gave our first Webinar on January 31, 2013; it is available for viewing, along with the slide deck.
We will see how this translates to new student enrollment, especially among the younger people who will someday be users, implementers, and leaders of the new data-rich, information-driven healthcare system that guides our vision now.