TY - GEN
T1 - Multimodal medical image retrieval: OHSU at ImageCLEF 2008
AU - Kalpathy-Cramer, Jayashree
AU - Bedrick, Steven
AU - Hatt, William
AU - Hersh, William
PY - 2009
Y1 - 2009
N2 - We present results from the Oregon Health & Science University's participation in the medical retrieval task of ImageCLEF 2008. Our web-based retrieval system was built using a Ruby on Rails framework. Ferret, a Ruby port of Lucene, was used to create the full-text index and search engine. In addition to the textual index of annotations, supervised machine learning techniques using visual features were used to classify the images by image acquisition modality. Our system provides the user with a number of search options, including the ability to limit their search by modality, UMLS-based query expansion, and Natural Language Processing-based techniques. Purely textual runs as well as mixed runs using the purported modality were submitted. We also submitted interactive runs using user-specified search options. Although the use of the UMLS Metathesaurus increased our recall, our system is geared towards early precision. Consequently, many of our multimodal automatic runs using the custom parser, as well as our interactive runs, had high early precision, including the highest P10 and P30 among the official runs. Our runs also performed well on the bpref metric, a measure that is more robust in the case of incomplete judgments.
AB - We present results from the Oregon Health & Science University's participation in the medical retrieval task of ImageCLEF 2008. Our web-based retrieval system was built using a Ruby on Rails framework. Ferret, a Ruby port of Lucene, was used to create the full-text index and search engine. In addition to the textual index of annotations, supervised machine learning techniques using visual features were used to classify the images by image acquisition modality. Our system provides the user with a number of search options, including the ability to limit their search by modality, UMLS-based query expansion, and Natural Language Processing-based techniques. Purely textual runs as well as mixed runs using the purported modality were submitted. We also submitted interactive runs using user-specified search options. Although the use of the UMLS Metathesaurus increased our recall, our system is geared towards early precision. Consequently, many of our multimodal automatic runs using the custom parser, as well as our interactive runs, had high early precision, including the highest P10 and P30 among the official runs. Our runs also performed well on the bpref metric, a measure that is more robust in the case of incomplete judgments.
UR - http://www.scopus.com/inward/record.url?scp=70549113744&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=70549113744&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-04447-2_96
DO - 10.1007/978-3-642-04447-2_96
M3 - Conference contribution
AN - SCOPUS:70549113744
SN - 3642044468
SN - 9783642044465
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 744
EP - 751
BT - Evaluating Systems for Multilingual and Multimodal Information Access - 9th Workshop of the Cross-Language Evaluation Forum, CLEF 2008, Revised Selected Papers
T2 - 9th Workshop of the Cross-Language Evaluation Forum, CLEF 2008
Y2 - 17 September 2008 through 19 September 2008
ER -