Medical image retrieval and automatic annotation: OHSU at ImageCLEF 2007

Jayashree Kalpathy-Cramer, William Hersh

Research output: Contribution to journal › Conference article › peer-review


Abstract

Oregon Health & Science University participated in the medical retrieval and medical annotation tasks of ImageCLEF 2007. In the medical retrieval task, we created a web-based retrieval system for the collection, built on a full-text index of both image and case annotations. The text-based search engine was implemented in Ruby using Ferret, a Ruby port of Lucene, together with a custom query parser. In addition to this textual index of annotations, supervised machine learning techniques using visual features were used to classify the images by acquisition modality, and all images were annotated with their purported modality. We submitted purely textual runs as well as mixed runs that used the purported modality. Our runs performed moderately well on the MAP metric and better on the early precision (P10) metric. In the automatic annotation task, we used the 'gist' technique to create the feature vectors: statistics derived from a set of multi-scale oriented filters yielded a 512-dimensional vector, which PCA reduced to 100 dimensions. This feature vector was fed into a two-layer neural network. Our error score on the 1000 test images was 67.8 using the hierarchical error calculation.
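
As a rough illustration of the retrieval side, the sketch below builds a Ferret full-text index over image and case annotations and runs a fielded query against it. The field names, sample document, and query are hypothetical, and the custom query parser from the actual system is omitted; this only shows the indexing and search pattern Ferret provides.

```ruby
require 'rubygems'
require 'ferret'

# Full-text index over image and case annotations
# (field names and sample document are hypothetical).
index = Ferret::Index::Index.new(:path => 'annotation_index')

index << { :image_id => 'img_0001',
           :caption  => 'Posteroanterior chest x-ray, no acute findings.',
           :case     => 'Patient presented with chronic cough.' }

# Query one annotation field and print hits with relevance scores.
index.search_each('caption:"chest x-ray"', :limit => 10) do |doc_id, score|
  puts "#{index[doc_id][:image_id]}  score=#{score}"
end
```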
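The annotation pipeline reduces to a linear projection followed by a small network forward pass. The sketch below, in plain Ruby with the standard Matrix library, shows that shape: a 512-dimensional gist vector projected to 100 dimensions and passed through a two-layer network. The hidden width, class count, activation choices, and all matrix values are placeholders, not values from the paper; in practice the PCA matrix comes from the training gist vectors and the weights from training the network.

```ruby
require 'matrix'

# Dimensions from the abstract (512-d gist, 100-d PCA); HIDDEN and
# CLASSES are placeholders, not values reported in the paper.
GIST_DIM, PCA_DIM, HIDDEN, CLASSES = 512, 100, 50, 116

# Placeholder parameters; real ones come from PCA and from training.
pca = Matrix.build(PCA_DIM, GIST_DIM) { rand - 0.5 }
w1  = Matrix.build(HIDDEN, PCA_DIM)   { rand - 0.5 }
w2  = Matrix.build(CLASSES, HIDDEN)   { rand - 0.5 }

# Numerically stable softmax over an array of scores.
def softmax(v)
  m    = v.max
  exps = v.map { |x| Math.exp(x - m) }
  sum  = exps.inject(:+)
  exps.map { |e| e / sum }
end

# Forward pass: PCA projection, tanh hidden layer, softmax output.
gist    = Vector.elements(Array.new(GIST_DIM) { rand })
reduced = pca * gist
hidden  = (w1 * reduced).map { |x| Math.tanh(x) }
scores  = softmax((w2 * hidden).to_a)
puts "predicted class: #{scores.each_with_index.max[1]}"
```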

Original language: English (US)
Journal: CEUR Workshop Proceedings
Volume: 1173
State: Published - 2007
Event: 2007 Cross Language Evaluation Forum Workshop, CLEF 2007, co-located with the 11th European Conference on Digital Libraries, ECDL 2007 - Budapest, Hungary
Duration: Sep 19, 2007 - Sep 21, 2007

Keywords

  • Image modality classification
  • Neural networks
  • Query parsing
  • Text retrieval

ASJC Scopus subject areas

  • General Computer Science
