
Hahn-PCNN-CNN: an end-to-end multi-modal brain medical image fusion framework useful for clinical diagnosis.

Authors :
Guo, Kai
Li, Xiongfei
Hu, Xiaohan
Liu, Jichen
Fan, Tiehu
Source :
BMC Medical Imaging; 7/14/2021, Vol. 21 Issue 1, p1-22, 22p
Publication Year :
2021

Abstract

Background: In the medical diagnosis of brain disease, multi-modal medical image fusion is playing an increasingly prominent role. Existing approaches include filter-based layered fusion and newly emerging deep learning algorithms. The former is fast but produces fused images with blurred texture; the latter gives a better fusion effect but demands greater computing power. Finding an algorithm that balances image quality, speed and computing power therefore remains an open research focus.

Methods: We built an end-to-end Hahn-PCNN-CNN composed of a feature extraction module, a feature fusion module and an image reconstruction module. We selected 8000 multi-modal brain medical images downloaded from the Harvard Medical School website to train the feature extraction and image reconstruction layers, enhancing the network's ability to reconstruct brain medical images. In the feature fusion module, we combine Hahn moments of the feature maps with a pulse-coupled neural network to reduce the information loss caused by convolution in the preceding fusion module and to save time.

Results: We chose eight sets of registered multi-modal brain medical images covering four diseases to verify our model. The anatomical structure images are from MRI, and the functional metabolism images are from SPECT and 18F-FDG. We also selected eight representative fusion models for comparative experiments. For objective quality evaluation, we selected six evaluation metrics across five categories.

Conclusions: The fused images obtained by our model retain the effective information in the source images to the greatest extent. On the image fusion evaluation metrics, our model is superior to the comparison algorithms; it also performs well in computational efficiency, and it is stable and generalizes to multi-modal image fusion of other organs. [ABSTRACT FROM AUTHOR]
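To make the fusion-module idea concrete, below is a minimal sketch of a pulse-coupled-neural-network fusion rule of the kind the abstract describes. It is an assumption-laden illustration, not the authors' implementation: it uses a simplified PCNN with illustrative parameter values, feeds the normalized feature maps directly as the stimulus (the paper instead uses Hahn moments of the feature maps), and fuses by keeping, per location, the feature from whichever source fires more often.

```python
# Hedged sketch of a PCNN-based fusion rule (simplified model, illustrative
# parameters; NOT the Hahn-PCNN-CNN authors' exact configuration).
import numpy as np
from scipy.ndimage import convolve


def pcnn_fire_counts(stim, iterations=40, alpha_f=0.1, alpha_l=1.0,
                     alpha_e=1.0, v_f=0.5, v_l=0.2, v_e=20.0, beta=0.1):
    """Run a simplified pulse-coupled neural network on a normalized
    feature map and return how many times each neuron fires."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])          # linking neighborhood
    F = np.zeros_like(stim)   # feeding input
    L = np.zeros_like(stim)   # linking input
    E = np.ones_like(stim)    # dynamic threshold
    Y = np.zeros_like(stim)   # pulse output
    fires = np.zeros_like(stim)
    for _ in range(iterations):
        link = convolve(Y, kernel, mode="constant")
        F = np.exp(-alpha_f) * F + v_f * link + stim   # feeding update
        L = np.exp(-alpha_l) * L + v_l * link          # linking update
        U = F * (1.0 + beta * L)                       # internal activity
        Y = (U > E).astype(stim.dtype)                 # fire if above threshold
        E = np.exp(-alpha_e) * E + v_e * Y             # raise threshold where fired
        fires += Y
    return fires


def fuse_feature_maps(feat_a, feat_b):
    """Per-location selection: keep the feature from whichever source
    map produces the stronger PCNN firing activity."""
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    count_a = pcnn_fire_counts(norm(feat_a))
    count_b = pcnn_fire_counts(norm(feat_b))
    return np.where(count_a >= count_b, feat_a, feat_b)


if __name__ == "__main__":
    # Toy usage with random stand-ins for the two modalities' feature maps
    # (e.g. MRI-derived vs. SPECT-derived features in the paper's setting).
    rng = np.random.default_rng(0)
    mri_feat = rng.random((64, 64))
    spect_feat = rng.random((64, 64))
    fused = fuse_feature_maps(mri_feat, spect_feat)
    print(fused.shape)
```

In the full framework this rule would sit between the convolutional feature extraction module and the image reconstruction module; the selection-by-firing-count step avoids extra learned convolutions in the fusion stage, which is consistent with the abstract's claim of reduced information loss and time savings.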

Details

Language :
English
ISSN :
14712342
Volume :
21
Issue :
1
Database :
Complementary Index
Journal :
BMC Medical Imaging
Publication Type :
Academic Journal
Accession number :
151400394
Full Text :
https://doi.org/10.1186/s12880-021-00642-z