TY - JOUR
T1 - Overview of the CLEF 2011 medical image classification and retrieval tasks
AU - Kalpathy-Cramer, Jayashree
AU - Müller, Henning
AU - Bedrick, Steven
AU - Eggel, Ivan
AU - Seco de Herrera, Alba G.
AU - Tsikrika, Theodora
PY - 2011
Y1 - 2011
N2 - The eighth edition of the ImageCLEF medical retrieval task was organized in 2011. A subset of the open access collection of PubMed Central was used as the database in 2011. This database contains 231,000 images and is substantially larger than previously used collections. Additionally, there was a larger fraction of non-clinical images such as graphs and charts. As in 2010, we had three subtasks: modality classification, image-based retrieval, and case-based retrieval. A new, simple hierarchy for article figures was created. Our belief is that the use of the detected modality should help filter out non-relevant images, thereby improving precision. The goal of the image-based retrieval task was to retrieve an ordered set of images from the collection that best meet the information need specified as a textual statement and a set of sample images, while the goal of the case-based retrieval task was to return an ordered set of articles (rather than images) that best meet the information need provided as a description of a case. The number of registrations to the medical task increased to 55 research groups. However, the number of groups submitting runs remained stable at 17, with the number of submitted runs increasing to 207. Of these, 130 were image-based retrieval runs, 43 were case-based runs, while the remaining 34 were modality classification runs. Combining textual and visual cues most often led to the best results, but results fusion needs to be used with care.
UR - http://www.scopus.com/inward/record.url?scp=84922032536&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84922032536&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:84922032536
SN - 1613-0073
VL - 1177
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2011 Cross Language Evaluation Forum Conference, CLEF 2011
Y2 - 19 September 2011 through 22 September 2011
ER -