Abstract
We present results from Oregon Health & Science University's participation in the medical retrieval task of ImageCLEF 2009. This year, we focused on improving retrieval performance, especially early precision, in the task of solving medical multimodal queries. These queries contain visual data, given as a set of example images, and textual data, provided as a set of words belonging to three dimensions: Anatomy, Pathology, and Modality. To solve these queries, we use both the textual and the visual data in order to better interpret the semantic content of each query. Using the textual data associated with an image, it is relatively easy to extract the anatomy and pathology, but it is challenging to extract the modality, since this is not always explicitly described in the text. To overcome this problem, we exploited the visual data. We combined text-based and visual-based search techniques to produce a single ranked list of relevant documents for each query. The results show that our approach outperforms our baseline by 43% in MAP and by 71% in precision at the top 5 documents (P@5). We attribute this to the use of the domain dimensions and to the combination of visual-based and text-based search techniques.
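The abstract states that text-based and visual-based rankings are merged into a single ranked list per query, but does not specify the fusion rule. The sketch below illustrates one common choice for this kind of late fusion, a weighted linear combination of min-max normalized scores (CombSUM-style); the function names and the `alpha` weight are illustrative assumptions, not details from the paper.

```python
def minmax_normalize(scores):
    """Scale a {doc_id: score} map into [0, 1]; constant maps become all 0."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def fuse_rankings(text_scores, visual_scores, alpha=0.7):
    """Linearly combine two normalized score maps; alpha weights the text run.

    Documents missing from one run contribute 0 from that run
    (an assumption; other back-off strategies are possible).
    """
    t = minmax_normalize(text_scores)
    v = minmax_normalize(visual_scores)
    docs = set(t) | set(v)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0)
             for d in docs}
    # Return doc ids sorted by fused score, highest first.
    return sorted(docs, key=lambda d: fused[d], reverse=True)

# Toy example: a text run and a visual run over overlapping documents.
text = {"img1": 12.0, "img2": 8.0, "img3": 3.0}
visual = {"img2": 0.9, "img3": 0.8, "img4": 0.5}
ranking = fuse_rankings(text, visual)
```

With `alpha=0.7` the text run dominates, which matches the intuition in the abstract that text reliably covers anatomy and pathology while the visual run mainly supplies modality evidence.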
| Original language | English (US) |
| --- | --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 1175 |
| State | Published - Jan 1 2009 |
| Event | 2009 Cross Language Evaluation Forum Workshop, CLEF 2009, co-located with the 13th European Conference on Digital Libraries, ECDL 2009 - Corfu, Greece. Duration: Sep 30 2009 → Oct 2 2009 |
Keywords
- Domain dimensions
- Image classification
- Image modality extraction
- Medical image retrieval
- Performance evaluation
ASJC Scopus subject areas
- Computer Science (all)