Artificial intelligence-enabled deep learning model for multimodal biometric fusion.
- Source :
- Multimedia Tools & Applications; Oct 2024, Vol. 83, Issue 33, p80105-80128, 24p
- Publication Year :
- 2024
Abstract
- The goal of information security is to prevent unauthorized access to data. Conventional ways of confirming user identity, such as passwords, user names, and keys, are limited: they can be stolen, lost, copied, or cracked. Multimodal biometric identification systems have attracted attention because they are more secure and achieve higher recognition efficiency than unimodal systems, and single-modal biometric recognition performs poorly in real-world public security operations because of poor biometric data quality. Current multimodal fusion methods also suffer from low generalization and fusion at only a single level. This study presents a multimodal biometric fusion model that enhances accuracy and generalization through artificial intelligence, integrating pixel-level, feature-level, and score-level fusion through deep neural networks. At the pixel level, we employ spatial, channel, and intensity fusion strategies to optimize the fusion process. At the feature level, modality-specific branches and jointly optimized representation layers establish robust dependencies between modalities through backpropagation. At the score level, fusion techniques such as Rank-1 and modality evaluation blend the matching scores. To validate the model's effectiveness, we construct a virtual homogeneous multimodal dataset from simulated operational data. Experimental results show significant improvements over single-modal algorithms: multimodal feature fusion increases accuracy by 2.2 percentage points, and score fusion surpasses single-modal algorithms by 3.5 percentage points, reaching a retrieval accuracy of 99.6%. [ABSTRACT FROM AUTHOR]
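The fusion pipeline outlined in the abstract (modality-specific branches feeding a jointly optimized representation layer, plus weighted blending of matching scores for Rank-1 retrieval) can be illustrated with a short PyTorch-style sketch. This is a minimal illustration under assumptions, not the authors' implementation: the modality names (face, iris), branch dimensions, class count, and quality weights below are hypothetical.

# Minimal sketch of feature-level and score-level fusion; all sizes,
# modality names, and weights are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Encoder for one biometric modality (e.g., a face or iris embedding)."""
    def __init__(self, in_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class FeatureLevelFusion(nn.Module):
    """Concatenate branch outputs and learn a joint representation end-to-end,
    so gradients from the identity loss flow back through every modality."""
    def __init__(self, dims: dict, joint_dim: int = 512, num_classes: int = 1000):
        super().__init__()
        self.branches = nn.ModuleDict({m: ModalityBranch(d) for m, d in dims.items()})
        self.joint = nn.Sequential(nn.Linear(256 * len(dims), joint_dim), nn.ReLU())
        self.classifier = nn.Linear(joint_dim, num_classes)

    def forward(self, inputs: dict) -> torch.Tensor:
        feats = [self.branches[m](x) for m, x in inputs.items()]
        return self.classifier(self.joint(torch.cat(feats, dim=-1)))

def score_level_fusion(scores: dict, weights: dict) -> torch.Tensor:
    """Blend per-modality matching scores (each [batch, gallery_size]) with
    modality-quality weights; Rank-1 retrieval is the argmax of the blend."""
    total = sum(weights.values())
    return sum(w / total * scores[m] for m, w in weights.items())

if __name__ == "__main__":
    model = FeatureLevelFusion({"face": 512, "iris": 256}, num_classes=100)
    batch = {"face": torch.randn(4, 512), "iris": torch.randn(4, 256)}
    print(model(batch).shape)  # torch.Size([4, 100])
    fused = score_level_fusion(
        {"face": torch.randn(4, 100), "iris": torch.randn(4, 100)},
        {"face": 0.6, "iris": 0.4},
    )
    print(fused.argmax(dim=-1))  # Rank-1 predictions per probe

Training the feature-level model end-to-end lets the identity loss backpropagate through every modality branch, which is the mechanism the abstract credits for establishing robust cross-modal dependencies; the score-level step then only needs per-modality matching scores and quality weights.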
Details
- Language :
- English
- ISSN :
- 1380-7501
- Volume :
- 83
- Issue :
- 33
- Database :
- Complementary Index
- Journal :
- Multimedia Tools & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 180131800
- Full Text :
- https://doi.org/10.1007/s11042-024-18509-0