MF‐Re‐Rank: A modality feature‐based Re‐Ranking model for medical image retrieval.
- Source :
- Journal of the Association for Information Science & Technology; Sep 2018, Vol. 69, Issue 9, p1095-1108, 14p
- Publication Year :
- 2018
Abstract
- One of the main challenges in medical image retrieval is the increasing volume of image data, which renders it difficult for domain experts to find relevant information in large data sets. Effective and efficient medical image retrieval systems are required to better manage medical image information. Text‐based image retrieval (TBIR) has been very successful in retrieving images with textual descriptions. Several TBIR approaches rely on bag‐of‐words models, which turn the image retrieval problem into one of standard text‐based information retrieval, where the meanings and values of specific medical entities in the text and metadata are ignored in the image representation and retrieval process. However, we believe that TBIR should extract specific medical entities and terms and then exploit these elements to achieve better image retrieval results. Therefore, we propose a novel reranking method based on medical‐image‐dependent features. These features are manually selected by a medical expert from imaging modalities and medical terminology. First, we represent queries and images using only medical‐image‐dependent features such as image modality and image scale. Second, we exploit the defined features in a new reranking method for medical image retrieval. Our motivation is the large influence of image modality in medical image retrieval and its impact on image‐relevance scores. To evaluate our approach, we performed a series of experiments on the medical ImageCLEF data sets from 2009 to 2013. The BM25 model, a language model, and an image‐relevance feedback model are used as baselines. The experimental results show that, compared to the BM25 model, the proposed model significantly enhances image retrieval performance. We also compared our approach with other state‐of‐the‐art approaches and show that it performs comparably to the top three runs in the official ImageCLEF competition. [ABSTRACT FROM AUTHOR]
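- The reranking idea described in the abstract (boosting retrieved images whose detected modality agrees with the modality implied by the query) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' MF‐Re‐Rank implementation: the class RetrievedImage, the function rerank_by_modality, and the weight alpha are hypothetical names and parameters chosen for the example, and the single modality-match bonus is a simplification of the paper's feature-based scoring.

```python
# Minimal sketch (not the authors' method): rerank an initial text-retrieval
# result list using a modality-match feature. Assumes each query/image carries
# a detected modality label (e.g., "CT", "MRI") and an initial relevance score
# such as BM25; all names and weights here are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class RetrievedImage:
    image_id: str
    text_score: float        # initial score from the text retrieval model
    modality: Optional[str]  # detected imaging modality, if available


def rerank_by_modality(results: List[RetrievedImage],
                       query_modality: Optional[str],
                       alpha: float = 0.7) -> List[RetrievedImage]:
    """Combine a normalized text score with a modality-match bonus and re-sort.

    alpha weights the original text score; (1 - alpha) weights agreement
    between the query modality and the image modality. The value 0.7 is an
    arbitrary illustrative choice, not a parameter from the paper.
    """
    if query_modality is None or not results:
        return list(results)  # nothing to rerank on

    max_score = max(r.text_score for r in results) or 1.0

    def combined(r: RetrievedImage) -> float:
        text_part = r.text_score / max_score
        modality_match = 1.0 if r.modality == query_modality else 0.0
        return alpha * text_part + (1.0 - alpha) * modality_match

    return sorted(results, key=combined, reverse=True)


# Toy usage: an image with a slightly lower text score but a matching
# modality moves ahead of a mismatched image after reranking.
hits = [RetrievedImage("img_1", 12.4, "MRI"),
        RetrievedImage("img_2", 11.9, "CT"),
        RetrievedImage("img_3", 10.2, "CT")]
for r in rerank_by_modality(hits, query_modality="CT"):
    print(r.image_id, r.modality)
```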
Details
- Language :
- English
- ISSN :
- 2330-1635
- Volume :
- 69
- Issue :
- 9
- Database :
- Complementary Index
- Journal :
- Journal of the Association for Information Science & Technology
- Publication Type :
- Academic Journal
- Accession number :
- 131499577
- Full Text :
- https://doi.org/10.1002/asi.24045