Monday, August 1, 2011

Identifying Patients for Clinical Studies from Electronic Health Records: The TREC Medical Records Track

The substantial federal investment devoted to electronic health record (EHR) adoption in the Health Information Technology for Economic and Clinical Health (HITECH) Act brings many potential benefits to health care. Beyond improving the availability of information about patients during the delivery of care, EHRs offer the ability to better “learn” from what we do in health care so we can better understand what works and what does not [1]. This is one aspect of how we will benefit from the secondary use (or re-use) of clinical data in EHRs [2].

Another substantial federal health care-related investment is in “comparative effectiveness research” (CER), which focuses medical research (e.g., clinical trials) on critical health care-related questions through head-to-head comparisons in real-world settings [3]. A total of $1.1 billion of funding in the American Recovery and Reinvestment Act (ARRA) was allocated for CER, and subsequent health care reform legislation mandated the establishment of the Patient-Centered Outcomes Research Institute (PCORI), a public-private entity to prioritize the investment in CER. One of the first products of the government’s CER efforts was a list of the top 100 priority clinical conditions, developed by the Institute of Medicine (IOM), to guide CER efforts and funding at the federal level.

In the meantime, there have been other federal investments in using health IT to facilitate clinical research. One of these is the National Institutes of Health (NIH) Clinical and Translational Science Award (CTSA) program, which funds 60 centers nationwide to facilitate translational research. Another effort comes from the Strategic Health IT Advanced Research Projects (SHARP) Program of the HITECH Act, which funds four priority areas of research in health IT, including the secondary use of clinical (including text) data.

Against this backdrop of government and other investment in health information technology comes a new track in the Text Retrieval Conference (TREC), an annual challenge evaluation hosted by the US National Institute of Standards and Technology (NIST). TREC is a long-standing event that builds “test collections” allowing different approaches to information retrieval (IR) to be assessed in an open and comparable manner. Each year, a number of “tracks” are held within TREC devoted to different aspects of IR, such as Web searching or cross-language IR [4]. While TREC is focused on general IR, there have been some tracks devoted to IR in specific domains, one of which in the past was genomics [5].

This year, TREC has launched a Medical Records Track. With TREC’s focus on IR, the goal of the track is to develop a task that is both pertinent to real-world clinical medicine and within the scope of IR research. The track is fortunate to have received access to a large corpus of medical text that has been de-identified. These documents are organized as visits (or encounters); the de-identification process prevents linking multiple visits for a single patient. The retrieval task in the first year of the TREC Medical Records Track will be to retrieve cohorts of patients who fit the criteria to participate in clinical studies. The retrieval “topics” will come from the IOM list of CER priority conditions, modified so that each topic is unambiguous and retrieves an appropriate quantity of documents. OHSU has received a grant from NIST to organize the topic development and relevance assessment processes of the track.

The documents for the task come from the University of Pittsburgh NLP Repository, a collection of 95,702 de-identified clinical reports available for NLP research purposes. The reports were generated from multiple hospitals during 2007 and are grouped into “visits” consisting of one or more reports from the patient’s hospital stay. Each document is formatted in XML, with a cross-walk table that matches one or more documents to each visit. There are a total of 17,199 visits.
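As a rough illustration, grouping reports into visits from the cross-walk table might look like the sketch below. The file name and column names here are hypothetical; the repository’s actual layout may differ.

```python
import csv
from collections import defaultdict

# Minimal sketch, assuming a hypothetical CSV cross-walk with
# "report_id" and "visit_id" columns; the repository's actual file
# name and format may differ.
def load_visit_map(crosswalk_path):
    visits = defaultdict(list)  # visit_id -> list of report ids
    with open(crosswalk_path, newline="") as f:
        for row in csv.DictReader(f):
            visits[row["visit_id"]].append(row["report_id"])
    return visits

visit_map = load_visit_map("report2visit.csv")  # hypothetical file name
print(len(visit_map))  # should be on the order of 17,199 visits
```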

Each document contains four sources of information that can be used for the task:
  • Chief complaint
  • Admit diagnosis (as ICD-9 code)
  • Discharge diagnosis(es) (as ICD-9 code)
  • Report text
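For groups building their indexes, pulling these four fields out of a report might look roughly like the sketch below. The XML element names are hypothetical; consult the repository documentation for the real schema.

```python
import xml.etree.ElementTree as ET

# Minimal sketch with hypothetical element names ("chief_complaint",
# "admit_diagnosis", "discharge_diagnosis", "report_text"); the actual
# tags in the Pittsburgh repository may differ.
def parse_report(path):
    root = ET.parse(path).getroot()

    def text_of(tag):
        return (root.findtext(tag) or "").strip()

    return {
        "chief_complaint": text_of("chief_complaint"),
        "admit_diagnosis": text_of("admit_diagnosis"),  # ICD-9 code
        "discharge_diagnoses": [e.text.strip()
                                for e in root.findall("discharge_diagnosis")
                                if e.text],
        "report_text": text_of("report_text"),
    }
```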
The documents come from a number of different report types:
  • Radiology Reports - 47,555
  • History and Physical Exams - 15,721
  • Emergency Department Reports - 13,424
  • Progress Notes - 8,538
  • Discharge Summaries - 7,931
  • Operative Reports - 5,032
  • Surgical Pathology Reports - 2,877
  • Cardiology Reports - 632
  • Letter - 1
The task will require relevance assessments for each visit, with retrieval performance measured by recall, precision, and related measures (e.g., mean average precision, or MAP) based on the assessments. As with all TREC relevance assessments, retrieved visits will be pooled based on the top N visits from each run of each participating group, where N is a number that will yield a pool of about 300-400 visits per topic for assessment. The test collection will contain 35 topics.
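For reference, MAP can be computed from the judgments roughly as in the sketch below. This follows the standard trec_eval-style definition; the input structures are my own illustration, not anything distributed by the track.

```python
# Minimal sketch of average precision and MAP over ranked visit lists.
# runs: topic_id -> ranked list of visit ids (system output)
# qrels: topic_id -> set of visit ids judged relevant
def average_precision(ranked_visits, relevant):
    hits, precision_sum = 0, 0.0
    for rank, visit_id in enumerate(ranked_visits, start=1):
        if visit_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs, qrels):
    return sum(average_precision(runs.get(t, []), rel)
               for t, rel in qrels.items()) / len(qrels)
```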

The relevance assessment process will proceed similarly to the typical TREC approach. Retrieved visits will be assessed by relevance judges who have clinical backgrounds. For each topic, they will judge whether a visit is definitely relevant (the patient would meet the criteria to be a subject in a clinical study), possibly relevant (the patient might meet the criteria), or not relevant (the patient would not meet the criteria). We will ideally have one person perform all the relevance assessments for a given topic.

I have had the opportunity to be involved in leading a number of IR challenge evaluations over the years, not only in genomics but also in interactive IR [6] and retrieval of medical images [7]. The TREC Medical Records Track is very timely given the growing interest in leveraging the large ongoing investment in EHRs and working toward a learning health system.

References

1. Friedman, C., Wong, A., et al. (2010). Achieving a nationwide learning health system. Science Translational Medicine, 2(57): 57cm29.
2. Safran, C., Bloomrosen, M., et al. (2007). Toward a national framework for the secondary use of health data: an American Medical Informatics Association white paper. Journal of the American Medical Informatics Association, 14: 1-9.
3. Murray, R. and McElwee, N. (2010). Comparative effectiveness research: critically intertwined with health care reform and the future of biomedical innovation. Archives of Internal Medicine, 170: 596-599.
4. Voorhees, E. and Harman, D., eds. (2005). TREC: Experiment and Evaluation in Information Retrieval. Cambridge, MA: MIT Press.
5. Hersh, W. and Voorhees, E. (2009). TREC genomics special issue overview. Information Retrieval, 12: 1-15.
6. Hersh, W. (2001). Interactivity at the Text Retrieval Conference (TREC). Information Processing and Management, 37: 365-366.
7. Hersh, W., Müller, H., et al. (2009). The ImageCLEFmed medical image retrieval task test collection. Journal of Digital Imaging, 22: 648-655.
