Abstract
The TREC Genomics Track implemented a new task in 2006 that focused on passage retrieval for question answering using full-text documents from the biomedical literature. A test collection of 162,259 full-text documents and 28 topics expressed as questions was assembled. Systems were required to return passages that contained answers to the questions. Expert judges determined the relevance of passages and grouped them into aspects identified by one or more Medical Subject Headings (MeSH) terms. Document relevance was defined by the presence of one or more relevant aspects. The performance of submitted runs was scored using mean average precision (MAP) at the passage, aspect, and document level. In general, passage MAP was low, while aspect and document MAP were somewhat higher.
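The abstract scores runs by mean average precision (MAP). As an illustrative sketch only (the track's official scoring handled passage and aspect granularity with its own definitions, so this is not the exact TREC procedure), MAP at the document level can be computed from per-topic ranked relevance judgments like this:

```python
def average_precision(ranked_relevance):
    """Average precision for one topic's ranked list of 0/1 relevance labels."""
    hits = 0
    precision_sum = 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(topics):
    """MAP: the mean of average precision over all topics."""
    return sum(average_precision(t) for t in topics) / len(topics)

# Hypothetical example: two topics with relevance labels in rank order.
# Topic 1: relevant docs at ranks 1 and 3; topic 2: relevant doc at rank 2.
print(mean_average_precision([[1, 0, 1], [0, 1]]))
```

The per-topic average precision rewards systems that rank relevant items early, which is why passage-level MAP (with its stricter matching) tends to be lower than the document-level scores the abstract reports.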
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 52-78 |
| Number of pages | 27 |
| Journal | NIST Special Publication |
| State | Published - 2006 |
| Event | 15th Text REtrieval Conference, TREC 2006, Gaithersburg, MD, United States |
| Duration | Nov 14, 2006 → Nov 17, 2006 |
ASJC Scopus subject areas
- Engineering (all)