18 results
Search Results
2. Preprocessing of Iris Images for BSIF-Based Biometric Systems: Binary Detected Edges and Iris Unwrapping.
- Author
-
Rubio, Arthur and Magnier, Baptiste
- Subjects
IRIS recognition ,BIOMETRIC identification ,COMPUTER vision ,FEATURE extraction ,DATABASES ,HOUGH transforms - Abstract
This work presents a novel approach to enhancing iris recognition systems through a two-module approach focusing on low-level image preprocessing techniques and advanced feature extraction. The primary contributions of this paper include: (i) the development of a robust preprocessing module utilizing the Canny algorithm for edge detection and the circle-based Hough transform for precise iris extraction, and (ii) the implementation of Binary Statistical Image Features (BSIF) with domain-specific filters trained on iris-specific data for improved biometric identification. By combining these advanced image preprocessing techniques, the proposed method addresses key challenges in iris recognition, such as occlusions, varying pigmentation, and textural diversity. Experimental results on the Human-inspired Domain-specific Binarized Image Features (HDBIF) Dataset, consisting of 1892 iris images, confirm the significant enhancements achieved. Moreover, this paper offers a comprehensive and reproducible research framework by providing source codes and access to the testing database through the Notre Dame University dataset website, thereby facilitating further application and study. Future research will focus on exploring adaptive algorithms and integrating machine learning techniques to improve performance across diverse and unpredictable real-world scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
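The circle-based Hough transform this abstract pairs with Canny edge detection can be illustrated with a minimal NumPy voting accumulator. This is a sketch, not the authors' implementation: the edge map is synthetic and the radius is fixed, whereas a real iris segmenter searches over a radius range.

```python
import numpy as np

def hough_circle_votes(edges, radius):
    """Accumulate circle-centre votes for one fixed radius.

    edges : 2-D boolean array (e.g. the output of a Canny detector).
    Returns an accumulator the same shape as `edges`; peaks mark
    likely circle centres.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Each edge pixel votes for all centres lying `radius` away.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic "edge map": a circle of radius 20 centred at (32, 32).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
edges = np.zeros((64, 64), dtype=bool)
edges[np.round(32 + 20 * np.sin(t)).astype(int),
      np.round(32 + 20 * np.cos(t)).astype(int)] = True

acc = hough_circle_votes(edges, radius=20)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # peak should land at or next to the true centre (32, 32)
```

In practice OpenCV's `cv2.HoughCircles` performs this accumulation over a range of radii with gradient information, which is what a production iris-extraction pipeline would typically use.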
3. Random Projection-Based Cancelable Iris Biometrics for Human Identification Using Deep Learning.
- Author
-
Rani, Rajneesh, Dhir, Renu, and Sonkar, Kirti
- Subjects
DEEP learning ,BIOMETRIC identification ,CONVOLUTIONAL neural networks ,IRIS recognition ,LITERATURE reviews ,FEATURE extraction - Abstract
Cancelable biometrics serves as an effective countermeasure against various template attacks launched by intruders, safeguarding the biometric system. This paper proposes a cancelable approach with a novel feature extraction technique for iris recognition: a hybrid architecture combining a convolutional neural network (CNN) with a gated recurrent unit (GRU). To provide cancelability, the system uses a random projection technique. The method is validated on two iris datasets, IITD and MMU, showing promising results on equal error rate (EER) and accuracy: EERs of 0.02 and 0.045 and accuracies of 0.98 and 0.933 for IITD and MMU, respectively, which is very high compared with other methodologies. Based on the literature review, this is the first time the proposed hybrid architecture has been used for a cancelable biometric system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
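The random projection step that gives this system its cancelability can be sketched in a few lines. The feature vector below is a random stand-in for the paper's CNN-GRU features, and the key-seeded Gaussian matrix is one common way to realize random projection:

```python
import numpy as np

def cancelable_template(feature, user_key, out_dim=64):
    """Project a feature vector through a key-seeded random matrix.

    Revoking a compromised template amounts to issuing a new key;
    the original feature vector is never stored.
    """
    rng = np.random.default_rng(user_key)
    proj = rng.standard_normal((out_dim, feature.size)) / np.sqrt(out_dim)
    return proj @ feature

feat = np.random.default_rng(0).standard_normal(256)  # stand-in for CNN-GRU features

t1 = cancelable_template(feat, user_key=1234)
t1_again = cancelable_template(feat, user_key=1234)
t2 = cancelable_template(feat, user_key=9999)  # re-issued after compromise

assert np.allclose(t1, t1_again)   # same key -> same template
assert not np.allclose(t1, t2)     # new key -> unlinkable template
```

Matching is done between projected templates, so the biometric data itself never leaves the enrollment device in raw form.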
4. A Novel Multimodal Biometric Authentication System based on Fusion of Face, Finger Knuckle and Iris Traits.
- Author
-
Neware, Shubhangi, Jain, Siddhant, Singh, Suhani, Badri, Ummesalma, Jain, Yatharth, and Jadhao, Ayush
- Subjects
BIOMETRIC identification ,HUMAN fingerprints ,RELIABILITY in engineering ,FEATURE extraction ,DECISION trees ,BORDER security - Abstract
Multimodal Biometric Authentication, which leverages multiple biometric traits for verifying the identity of users, holds growing significance in current society, owing to its inherent benefits in bolstering security, providing convenience, and ensuring accuracy across a multitude of applications, including border control and smartphone unlocking. This system addresses the limitations of single-modal biometric systems, such as lack of reliability and accuracy. In this paper, we present the integration of three distinct modalities - face, finger knuckle, and iris - to form our multimodal system; these modalities make the system contactless, improving the user experience and convenience. Preprocessing techniques and feature extraction are applied to the datasets, and the model is generated to leverage the unique strengths of each modality. A decision tree is used for fusion, combining the individual scores to generate the final authentication decision. To evaluate the effectiveness of the proposed system, the final model is tested, and the results highlight the superiority of the multimodal system in comparison to individual modalities. The practical implications can be significant, as the multimodal system provides increased security and accuracy, especially in scenarios where single modalities might be susceptible to environmental noise or spoofing attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
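Decision-tree fusion of per-modality matcher scores, as described above, can be sketched with scikit-learn. The scores here are synthetic (Gaussian genuine/impostor distributions), not the paper's face, knuckle, and iris matchers:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Toy matcher scores for 200 attempts; columns = face, knuckle, iris.
# Genuine attempts (label 1) score higher on average than impostors (0).
genuine = rng.normal(0.75, 0.1, size=(100, 3))
impostor = rng.normal(0.45, 0.1, size=(100, 3))
scores = np.vstack([genuine, impostor]).clip(0, 1)
labels = np.array([1] * 100 + [0] * 100)

# The tree learns thresholds over the three scores jointly,
# producing the final accept/reject decision.
fusion = DecisionTreeClassifier(max_depth=3, random_state=0)
fusion.fit(scores, labels)

probe = np.array([[0.8, 0.75, 0.7]])  # one multimodal probe
print(fusion.predict(probe))          # fused authentication decision
```

Compared with fixed sum or max rules, a learned fusion rule can weight a reliable modality more heavily when another is degraded by noise.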
5. ARF-Net: a multi-modal aesthetic attention-based fusion.
- Author
-
Iffath, Fariha and Gavrilova, Marina
- Subjects
BIOMETRIC identification ,AESTHETICS ,FEATURE extraction ,IDENTIFICATION ,SYSTEM identification ,SOCIAL media - Abstract
Over the last decade, Online Social Media platforms have witnessed a dramatic expansion due to the substantial reliance of individuals on these communication channels. These platforms are widely utilized to convey emotions, share opinions, and express preferences through various means such as artworks, multimedia contents, and blogs. Researchers are exploring these individual-specific traits for biometric identification. Aesthetic biometric systems utilize users' unique preferences across various subjective forms such as images, music, and textual contents. This study introduces a novel multi-modal aesthetic system, with a primary contribution to the development of an attention-based fusion method for person identification. The proposed identification system leverages a deep pre-trained model for high-level feature extraction from visual and auditory modalities. The paper introduces a novel fusion architecture named attention-based residual fusion network (ARF-Net) to incorporate two heterogeneous aesthetic feature vectors. The proposed model yielded a 99.38% identification accuracy on the Aesthetic Image Audio 32 (AIA32) dataset and 98.02% identification accuracy on Aesthetic Image Audio 52 (AIA52) dataset, outperforming other aesthetic biometric systems. The proposed architecture stands out for its efficiency, showcasing a lightweight architecture with minimal parameters, ensuring optimal performance in different modalities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
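The attention-based residual fusion idea can be illustrated loosely: score each modality embedding, weight the embeddings by a softmax over the scores, and add a residual path. ARF-Net itself is a trained deep network; the weight vectors and embeddings below are random stand-ins:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_fusion(visual, audio, w_v, w_a):
    """Weight two modality embeddings by attention scores, then fuse
    with a residual connection (a loose sketch of the ARF idea)."""
    scores = softmax(np.array([visual @ w_v, audio @ w_a]))
    fused = scores[0] * visual + scores[1] * audio
    return fused + 0.5 * (visual + audio)  # residual path

rng = np.random.default_rng(0)
visual, audio = rng.standard_normal(64), rng.standard_normal(64)
w_v, w_a = rng.standard_normal(64), rng.standard_normal(64)

fused = attention_fusion(visual, audio, w_v, w_a)
print(fused.shape)  # same dimensionality as the inputs: (64,)
```

The residual path lets the fusion layer fall back on the raw embeddings when the attention scores are uninformative, which is one reason such architectures stay lightweight.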
6. Biometric Identification Advances: Unimodal to Multimodal Fusion of Face, Palm, and Iris Features.
- Author
-
KADHIM, Ola Najah and ABDULAMEER, Mohammed Hasan
- Subjects
MULTIMODAL user interfaces ,BIOMETRIC identification ,PALMS ,MACHINE learning ,CONVOLUTIONAL neural networks ,DEEP learning ,INFORMATION technology security - Abstract
Due to increased information security concerns, biometric recognition technology has become more important. Unimodal biometrics still work effectively, but they struggle with noise sensitivity and spoof-attack susceptibility since they rely on a single data source. This paper uses advances in deep learning and machine learning to propose new unimodal systems for the palm, face, and iris. These models use deep wavelet transform networks (WTN) for face and iris identification and deep convolutional neural networks (CNNs) for palmprint identification. In addition, we introduce a novel multimodal biometric system built on these unimodal systems. Using the MULB dataset, which contains multiple biometric traits, the individual unimodal systems with Support Vector Machine (SVM) classifiers achieve 98.29% for the face, 98.86% for the palmprint, and 95.59% for the iris. The multimodal system achieves 99.88% accuracy and a 0.0186 equal error rate, underscoring the relevance of combining several biometric features and the superior performance of the identification system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Speaker recognition system using different feature extraction techniques using autoencoder.
- Author
-
Niwatkar, Arundhati, Kanse, Yuvraj, and Pandey, Akhilesh Kumar
- Subjects
FEATURE extraction ,EXTRACTION techniques ,WAVELET transforms ,PROSODIC analysis (Linguistics) ,BIOMETRIC identification - Abstract
A speaker recognition system is a technology designed to identify and verify the identity of an individual based on their unique voice characteristics. It falls under the broader category of biometric authentication systems, which use various physical and behavioral traits to identify or authenticate individuals. Feature extraction plays a crucial role in a speaker recognition system as it is the process of converting raw speech signals into a compact and representative set of features that can effectively capture the unique characteristics of a person's voice. This step is vital because raw speech signals are complex and high-dimensional, containing a vast amount of redundant and irrelevant information. By extracting relevant features, the system can focus on the essential aspects of the speaker's voice, enhancing accuracy and efficiency. Effective feature extraction is essential for dealing with variations in speech due to different accents, speaking styles, or emotional states. By capturing the distinctive aspects of a person's voice, regardless of these variations, the system can achieve robust and reliable performance. Additionally, feature extraction significantly reduces the computational complexity of the recognition process, making it feasible for real-time applications. In this paper, the speaker recognition system utilizes an autoencoder for modeling purposes. Additionally, the study explores various feature extraction techniques to enhance the system's performance. These techniques include MFCC (Mel-Frequency Cepstral Coefficients), Pitch, Jitter, Wavelet Transform, Wavelet Packet Transform, and Shimmer feature extraction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
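Among the feature extraction techniques this abstract lists, jitter and shimmer have particularly compact definitions: cycle-to-cycle variation of pitch periods and peak amplitudes, respectively. The sketch below uses the standard local definitions; the paper's exact formulations may differ, and the period/amplitude sequences are toy data:

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Local jitter and shimmer as used in prosodic voice analysis.

    jitter  : mean absolute difference of consecutive pitch periods,
              normalised by the mean period.
    shimmer : the same statistic computed on peak amplitudes.
    """
    periods = np.asarray(periods, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
    return jitter, shimmer

# A perfectly periodic voice has zero jitter/shimmer; small cycle-to-cycle
# perturbations raise both, which is what makes them speaker-discriminative.
steady = jitter_shimmer([5.0, 5.0, 5.0, 5.0], [1.0, 1.0, 1.0, 1.0])
shaky  = jitter_shimmer([5.0, 5.2, 4.9, 5.1], [1.0, 0.9, 1.1, 0.95])
print(steady)  # (0.0, 0.0)
```

Features like these would then be concatenated with MFCC and wavelet coefficients before being fed to the autoencoder model.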
8. Enhanced user verification in IoT applications: a fusion-based multimodal cancelable biometric system with ECG and PPG signals.
- Author
-
Siam, Ali I., El-Shafai, Walid, Abou Elazm, Lamiaa A., El-Bahnasawy, Nirmeen A., Abd El-Samie, Fathi E., Abou Elazm, Atef, and El-Banby, Ghada M.
- Subjects
MULTIMODAL user interfaces ,PULSE wave analysis ,BIOMETRIC identification ,BIOMETRY ,INTERNET of things ,FEATURE extraction - Abstract
The core premise of cancelable biometrics lies in the creation of a distinct biometric template for every individual, which can be either canceled or regenerated as needed. This process requires the use of a uniquely defined key during the generation of such a template. The generated templates are tailored to be key-specific: each distinct key will generate a unique template, while the integrity and security of the original biometric data are preserved and remain uncompromised. In this paper, a cancelable biometric system based on electrocardiography (ECG) and photoplethysmography (PPG) signals is introduced. A signal fusion process is implemented for the two traits to generate a single template per user. To enhance the security of generated templates, a well-designed permutation stage is implemented according to a user-specific key. The permutation key is obtained through a well-designed look-up table created by the authors. User verification is conducted on the cancelable template, without the need for any inversion processes. The user verification scheme depends on a two-pronged approach: robust feature extraction followed by the application of a machine learning (ML) classifier. The mel-frequency cepstral coefficients (MFCC) extraction algorithm is employed for feature extraction due to the low frequency range of the adopted biometric signals and the nonlinearity of the filter bank used for MFCC extraction. Several ML classifiers are adopted to validate the system with cancelable templates without any inversion process. Simulation results with multilayer perceptron (MLP) and logistic regression (LR) classifiers demonstrated the superior effectiveness of the proposed authentication framework, with accuracy rates up to 100% and 99.7% on the Pulse Transit Time (PTT) PPG and BIDMC datasets, respectively. Hence, the proposed system provides effective access control and user verification in Internet-of-Things (IoT) applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
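The key-specific permutation stage at the heart of this cancelable design is easy to sketch. Two liberties are taken: concatenation stands in for the paper's signal-fusion process, and a seeded RNG stands in for the authors' look-up-table key derivation:

```python
import numpy as np

def permute_template(fused_features, key):
    """Key-specific permutation of a fused ECG/PPG feature vector.

    The stored template is the permuted vector; matching is done
    directly in the permuted domain, so no inversion is ever needed.
    """
    rng = np.random.default_rng(key)
    order = rng.permutation(fused_features.size)
    return fused_features[order]

ecg = np.random.default_rng(1).standard_normal(128)
ppg = np.random.default_rng(2).standard_normal(128)
fused = np.concatenate([ecg, ppg])          # stand-in for signal-level fusion

enrolled = permute_template(fused, key=42)
probe    = permute_template(fused, key=42)  # same user, same key
reissued = permute_template(fused, key=7)   # new key after a breach

assert np.array_equal(enrolled, probe)      # verification still works
assert not np.array_equal(enrolled, reissued)  # old template revoked
```

Because permutation is a bijection of indices, distances between templates under the same key mirror distances between the underlying feature vectors, which is why a classifier can operate on the cancelable templates directly.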
9. Face, Fingerprint, and Signature based Multimodal Biometric System using Score Level and Decision Level Fusion Approaches.
- Author
-
Kazi, Majharoddin, Kale, Karbhari, Mehsen, Raddam Sami, Mane, Arjun, Humbe, Vikas, Rode, Yogesh, Dabhade, Siddharth, Bansod, Nagsen, Razvi, Arshad, and Deshmukh, Prapti
- Subjects
PEARSON correlation (Statistics) ,RANK correlation (Statistics) ,EUCLIDEAN distance ,HUMAN fingerprints ,FEATURE extraction ,BIOMETRIC identification ,HAMMING distance - Abstract
Principal Component Analysis (PCA) is a well-established face recognition method. This research applies PCA to fingerprint and signature recognition as well. Simple image-processing transforms such as DCT, 2D-DCT, DWT, SWT, 2D-SWT, SVD (Singular Value Decomposition), Entropy, and Rank can be used for feature extraction. These transforms and measures are utilized with PCA as a feature extraction module to construct unimodal and multimodal biometric systems using face, fingerprint, and signature modalities. Most PCA biometric systems compare the stored template with the claimed identity using Euclidean distance. This paper proposes matching modules using similarity and dissimilarity measures, viz. Absolute Pearson's Correlation Coefficient (APCC), Absolute Uncentered Pearson's Correlation Coefficient (AUPCC), Bray Curtis Distance (BC), Canberra distance (CB), Chebyshev Distance (CBS), Chessboard Distance (CSB), City block or Manhattan distance (CTB), Cross Correlation (CC), Dot product (DP), Euclidean distance (EUC), Extended Jaccard Distance (EJ), Hamming Distance (HM), Harmonically Summed Euclidean distance (HSEUC), Kendall Correlation Coefficient (KCC), Mahalanobis Distance (MH), Minimum Coordinate Difference (MCD), Minkowski distance (MNK), Multivariate Kurtosis Coefficient (MVK), Multivariate Skew (MVS), Normalized City Block or Manhattan distance (NCTB), Normalized Cross-correlation (NCC), Normalized Euclidean distance (NEUC), Pearson's Cosine Distance (PCOS), Pearson's Correlation Coefficient (PCC), Pearson's Absolute Value Dissimilarity (PAVD), Pearson's Linear Dissimilarity (PLDISS), Spearman Correlation Coefficient (SCC), Standardized Euclidean Distance (SEUC), Uncentered Pearson's Correlation Coefficient (UPCC), and Wave-Hedges Distance (WVH). This study further discusses score-level fusion of face, fingerprint, and signature using sum and max rules with z-score normalization, and decision-level fusion using the AND rule. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
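The PCA-plus-distance matching pipeline described above can be sketched with three of the listed measures (Euclidean, city block/CTB, and a Pearson-based dissimilarity). The gallery is random synthetic data; in the paper the rows would be transform-domain face, fingerprint, or signature features:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD: returns the data mean and the top principal axes."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, axes):
    return axes @ (x - mean)

# Three of the matching measures listed in the abstract.
euclidean  = lambda a, b: np.sqrt(np.sum((a - b) ** 2))     # EUC
city_block = lambda a, b: np.sum(np.abs(a - b))             # CTB
pearson_d  = lambda a, b: 1.0 - np.corrcoef(a, b)[0, 1]     # 1 - PCC

rng = np.random.default_rng(0)
gallery = rng.standard_normal((50, 100))   # 50 enrolled feature vectors
mean, axes = pca_fit(gallery, n_components=10)

enrolled = project(gallery[0], mean, axes)
claimed  = project(gallery[0] + 0.01 * rng.standard_normal(100), mean, axes)
imposter = project(gallery[1], mean, axes)

# A genuine claim should be closer than an imposter under every measure.
for d in (euclidean, city_block, pearson_d):
    assert d(enrolled, claimed) < d(enrolled, imposter)
```

Swapping the distance function is the only change needed to evaluate all thirty measures, which is presumably how such a comparative study is organized.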
10. New Face Recognition System Based on DCT Pyramid and Backpropagation Neural Network.
- Author
-
Alane, Badreddine, Terchi, Younes, and Bougueze, Saad
- Subjects
HUMAN facial recognition software ,DISCRETE cosine transforms ,PYRAMIDS ,FEATURE extraction ,BIOMETRIC identification ,HUMAN-computer interaction - Abstract
Face recognition has emerged as a prominent biometric identification technique with applications ranging from security to human-computer interaction. This paper proposes a new face recognition system by appropriately combining techniques for improved accuracy. Specifically, it incorporates a discrete cosine transform (DCT) pyramid for feature extraction, statistical measures for dimensionality reduction of the features, and a two-layer backpropagation neural network for classification. The DCT pyramid is used to effectively capture both low- and high-frequency information from face images to improve the ability of the system to recognise faces accurately. Meanwhile, the introduction of statistical measures for dimensionality reduction helps in decreasing the computational complexity and provides better discrimination, leading to more efficient processing. Moreover, the two-layer neural network introduced, which plays a vital role in efficiently handling complex patterns, further enhances the recognition capabilities of the system. As a result of these advancements, the system achieves an outstanding 99% recognition rate on the Olivetti Research Laboratory (ORL) data set, 98.88% on YALE, and 99.16% on AR. This performance demonstrates the robustness and potential of the proposed system for real-world applications in face recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
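The combination of a DCT pyramid with statistical dimensionality reduction can be sketched as follows. The statistics kept (mean, standard deviation, energy) and the 2x2 average-pooling pyramid are illustrative assumptions; the paper does not fix these details in the abstract:

```python
import numpy as np
from scipy.fft import dctn

def dct_pyramid_features(image, levels=3):
    """Statistical summary of a DCT pyramid.

    Each level applies a 2-D DCT and keeps simple statistics
    (mean, std, energy) instead of the full coefficient matrix --
    the dimensionality-reduction step described in the abstract.
    Coarser levels are built by 2x2 average pooling.
    """
    feats = []
    img = image.astype(float)
    for _ in range(levels):
        coeffs = dctn(img, norm="ortho")
        feats += [coeffs.mean(), coeffs.std(), np.sum(coeffs ** 2)]
        h, w = img.shape
        img = img[: h // 2 * 2, : w // 2 * 2]
        img = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.array(feats)

face = np.random.default_rng(0).random((64, 64))  # stand-in for a face crop
fv = dct_pyramid_features(face)
print(fv.shape)  # (9,): 3 statistics x 3 pyramid levels
```

The resulting short feature vector is what the two-layer backpropagation network would classify, keeping the input layer small.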
11. User Identification and Verification based on Auditory Evoked Potentials Using CNN.
- Author
-
Ghalami, Vida, Rezaii, Tohid Yousefi, Tinati, Mohammad Ali, Farzamnia, Ali, Khalili, Azam, Rastegarnia, Amir, and Moung, Ervin Gubin
- Subjects
ACOUSTIC stimulation ,AUDITORY perception ,BIOMETRIC identification ,FEATURE extraction ,CONVOLUTIONAL neural networks - Abstract
In recent years, researchers have focused on the biometric applications of bioelectrical signals, particularly electroencephalograms (EEG), to enhance information security. Using EEG as a biometric offers advantages that cannot be forgotten or forged. One approach to utilizing EEG signals for biometric purposes involves recording auditory evoked potentials (AEP). AEPs are electrical potentials that arise in response to auditory stimulation in the cerebral cortex. These signals are stimulus-dependent and can vary with the auditory stimulus, allowing these signals to be employed even if the registered signal was compromised. In this paper, discriminative features are extracted and classified using convolutional neural networks. A dataset recorded from 20 users using auditory stimulation is analyzed. The reported results demonstrate a classification accuracy of 98.99% in identification mode and an equal error rate of 1.18% in verification mode. These outcomes showcase the proposed method’s high accuracy, marking an improvement over existing methods. Furthermore, the system’s practicality is enhanced by utilizing fewer channels, and its performance is assessed by reducing the number of channels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Enhancing internet of things security using entropy-informed RF-DNA fingerprint learning from Gabor-based images.
- Author
-
Taha, Mohamed A., Fadul, Mohamed M. K., Tyler, Joshua H., Reising, Donald R., and Loveless, T. Daniel
- Subjects
PHYSICAL layer security ,INTERNET security ,DEEP learning ,ACCESS control ,INTERNET of things ,BIOMETRIC identification - Abstract
Internet of Things (IoT) deployments are anticipated to reach 29.42 billion by the end of 2030 at an average growth rate of 16% over the next 6 years. These deployments represent an overall growth of 201.4% in operational IoT devices from 2020 to 2030. This growth is alarming because IoT devices have permeated all aspects of our daily lives, and most lack adequate security. IoT-connected systems and infrastructures can be secured using device identification and authentication, two effective identity-based access control mechanisms. Physical Layer Security (PLS) is an alternative or augmentation to cryptographic and other higher-layer security schemes often used for device identification and authentication. PLS does not compromise spectral and energy efficiency or reduce throughput. Specific Emitter Identification (SEI) is a PLS scheme capable of uniquely identifying senders by passively learning emitter-specific features unintentionally imparted on the signals during their formation and transmission by the sender's radio frequency (RF) front end. This work focuses on image-based SEI because it produces deep learning (DL) models that are less sensitive to external factors and better generalize to different operating conditions. More specifically, this work focuses on reducing the computational cost and memory requirements of image-based SEI with little to no reduction in performance by selecting the most informative portions of each image using entropy. These image portions or tiles reduce memory storage requirements by 92.8% and the DL training time by 81% while achieving an average percent correct classification performance of 91% and higher for SNR values of 15 dB and higher with individual emitter performance no lower than 87.7% at the same SNR. 
Compared with another state-of-the-art time-frequency (TF)-based SEI approach, our approach results in superior performance for all investigated signal-to-noise ratio conditions, the largest improvement being 21.7% at 9 dB and requires 43% less data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
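The entropy-based tile selection that drives this work's memory and training-time savings can be sketched directly: split each Gabor-based image into tiles, score each tile by the Shannon entropy of its intensity histogram, and keep only the most informative tiles. The image, tile size, and bin count below are illustrative, not the paper's settings:

```python
import numpy as np

def tile_entropy(tile, bins=32):
    """Shannon entropy of a tile's intensity histogram (bits)."""
    hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_informative_tiles(image, tile=8, keep=4):
    """Split an image into tiles and keep the `keep` highest-entropy ones."""
    h, w = image.shape
    tiles = [image[r:r + tile, c:c + tile]
             for r in range(0, h, tile) for c in range(0, w, tile)]
    return sorted(tiles, key=tile_entropy, reverse=True)[:keep]

rng = np.random.default_rng(0)
img = np.zeros((32, 32))              # mostly flat, low-entropy background...
img[8:16, 8:16] = rng.random((8, 8))  # ...with one textured, high-entropy tile

kept = select_informative_tiles(img, tile=8, keep=1)
assert kept[0].std() > 0              # the textured tile was selected
```

Flat tiles carry little emitter-specific information, so discarding them shrinks the DL training input with only a small cost in classification performance, which is the trade-off the abstract quantifies.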
13. Authentication of multiple transaction using enhanced Elman spike neural network optimized with glowworm swarm optimization.
- Author
-
Joans, S. Mary, Jasmine, J. S. Leena, and Ponsudha, P.
- Subjects
FEATURE extraction ,BIOMETRIC identification ,DEEP learning ,HUMAN fingerprints ,WAVELET transforms ,CREDIT cards ,FINGER joint - Abstract
Secure user authentication has grown in importance in today's society. It is significant to authenticate user identity in numerous consumer applications, particularly financial transactions. Traditional authentication methods rely on easy-to-guess passwords, PIN numbers, or tokens with several security flaws, such as PIN numbers printed on the back of credit cards; the majority of them are also inconvenient and demand complicated user interactions. Biometric authentication techniques based on physical and behavioral characteristics have been proposed as an alternative to current systems. Multibiometric systems, which combine several biometrics, were developed in response to the difficulties single-biometric authentication systems encounter in real-world applications, including lack of precision and noisy data. This paper proposes an Enhanced Elman Spike Neural Network with Glowworm Swarm Optimization (EESNN-GSO-AMT) for multiple-transaction authentication; the proposed system provides better performance and greater accuracy compared with other authentication techniques. The images are collected from the SDUMLA-HMT and CASIA V5 datasets. They are pre-processed to enhance image quality using a Learnable Edge Collaborative Filter (LECF). The preprocessed images are fed to feature extraction using the adaptive and concise empirical wavelet transform (ACEWT), extracting features such as entropy, homogeneity, energy, and contrast. The extracted features are provided to the EESNN classifier to categorize authorized and unauthorized persons. Since the EESNN classifier does not itself adapt its parameters, Glowworm Swarm Optimization is employed to tune EESNN so that it accurately categorizes authorized and unauthorized persons. The efficiency of the proposed approach is assessed using several metrics. 
The proposed EESNN-GSO-AMT method attains 20.54%, 21.76%, and 23.89% higher accuracy; 20.12%, 20.34%, and 21.43% greater sensitivity; and 23.34%, 22.68%, and 24.34% higher precision compared with existing methods, namely optimal feature-level fusion for safe human authentication in a multimodal biometric scheme (OptGWO-AMT-FV), a joint attention network for finger-vein authentication (JAnet-AMT-FV), and finger-vein recognition using a deep learning technique (DCNN-AMT-FV), respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
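The four texture features this abstract extracts (entropy, homogeneity, energy, contrast) are classically computed from a grey-level co-occurrence matrix (GLCM). The abstract does not state how its features are computed after the wavelet transform, so the GLCM below is an assumption; the input image is synthetic:

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Contrast, energy, homogeneity, and entropy from a simple
    horizontal-neighbour grey-level co-occurrence matrix."""
    # Quantize intensities in [0, 1) to `levels` grey levels.
    q = np.floor(gray * levels).clip(0, levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()                        # joint probability
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return contrast, energy, homogeneity, entropy

img = np.random.default_rng(0).random((64, 64))  # stand-in for a vein image
contrast, energy, homogeneity, entropy = glcm_features(img)
assert 0 < energy <= 1 and 0 < homogeneity <= 1
```

These four scalars per region form a compact descriptor that a classifier such as the EESNN can consume directly.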
14. Multimodal biometric identification based on overlapped fingerprints, palm prints, and finger knuckles using BM-KMA and CS-RBFNN techniques in forensic applications.
- Author
-
Johnson, Jyothi and Chitra, R.
- Subjects
BIOMETRIC identification ,HUMAN fingerprints ,FEATURE extraction ,RADIAL basis functions ,K-means clustering ,BROWNIAN motion - Abstract
In several scenarios like forensic and civilian applications, biometric has emerged as a powerful technology for person authentication. Information extracted from different biometric traits is combined by the Multimodal Biometric (MB) solutions, hence showing a high resilience against presentation attacks. Additionally, they offer enhanced biometric performance and increased population coverage that is required for executing larger-scale recognition. By employing Brownian Motion enabled K-Means Algorithm (BM-KMA) and Cosine Swish activation-based Radial Basis Function Neural Network (RBFNN) (CS-RBFNN) methodologies, an MB authentication system centered on overlapped Fingerprints (FPs), Palm Prints (PPs), and finger knuckles (FKs) is proposed here. Primarily, from the publically available datasets, the overlapped FP images and hand images are taken. Next, to separate the PPs and FKs, the Region of Interest (ROI) is estimated for the hand image. Then, pre-processing, feature extraction, and feature reduction are carried out. From the overlapped FP, the noises are removed using BF; after that, the FP's contrast is enriched using SMF-CLAHE for improving the clarity of the minutiae structure of the ridges. Following this, normalization is performed using the Min–Max operation. Minute features are extracted by separating the overlapped FP using BM-KMA, which makes the system from avoidance of system complexity by separating the overlapping. From this, interest features are selected using KRC-PCA. Next, feature fusion is conducted. Finally, CS-RBFNN is wielded to categorize genuine biometrics from imposter ones. Via performance metrics, the proposed system is further affirmed. The outcomes exhibited that the proposed technique surpasses the other prevailing methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Privacy-preserving face recognition method based on extensible feature extraction.
- Author
-
Hu, Weitong, Zhou, Di, Zhu, Zhenxin, Qiao, Tong, Yao, Ye, and Hassaballah, Mahmoud
- Subjects
HUMAN facial recognition software ,IMAGE recognition (Computer vision) ,SMARTPHONES ,FEATURE extraction ,BIOMETRIC identification ,DATA security ,IMAGE encryption - Abstract
Face recognition (FR) technology has become a pervasive and ubiquitous part of daily life, from unlocking our smartphones with a glance to being scanned by surveillance cameras in various outdoor locations. When people's face photos are uploaded to the cloud for face recognition processing, they often have legitimate concerns about the privacy and security of their biometric data. A number of privacy-preserving face recognition (PPFR) frameworks have been proposed to address these issues by enabling the cloud to perform face recognition without revealing the identity or features of the face photos. However, these frameworks suffer from several limitations. They rely on computationally intensive operations that increase the cost and time of face recognition, limiting their applicability in real-world scenarios. Many current frameworks support only one face recognition method and cannot be extended to different models. To overcome these challenges, in this paper, we propose a PPFR framework with high recognition accuracy based on extensible feature extraction for different application scenarios. In particular, features are extracted by a selectable model, such as MobileFaceNet, ResNet-18 or ResNet-50, and encrypted by a randomness-based encryption algorithm on both the face owner's and the user's side. The cloud service provider (SP) performs face recognition by comparing the Euclidean distances between the features received from these two entities. Extensive experiments verify that the proposed framework has significant advantages in terms of accuracy and efficiency. • In the proposed PPFR framework, face recognition accuracy remains high in both the plaintext and encrypted domains. • The feature extraction component can be customized to meet specific requirements by selecting different baseline models, such as MobileFaceNet, ResNet-18, and ResNet-50. • The proposed PPFR solution outperforms the baseline frameworks in terms of data transmission volume, computational efficiency, and recognition accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
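The abstract does not specify its randomness-based encryption algorithm, but the key property it needs (the cloud can compare Euclidean distances without seeing the raw features) is satisfied by any distance-preserving transform. One standard construction, a key-seeded orthogonal matrix shared by the face owner and the user, can be sketched as:

```python
import numpy as np

def random_orthogonal(dim, key):
    """Key-seeded orthogonal matrix (QR decomposition of a Gaussian matrix)."""
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

# Owner and user share the key; the cloud only sees transformed features.
key = 2024
Q = random_orthogonal(128, key)

rng = np.random.default_rng(0)
enrolled = rng.standard_normal(128)               # e.g. a MobileFaceNet embedding
probe    = enrolled + 0.05 * rng.standard_normal(128)

d_plain = np.linalg.norm(enrolled - probe)
d_enc   = np.linalg.norm(Q @ enrolled - Q @ probe)

assert np.isclose(d_plain, d_enc)  # orthogonal transforms preserve distances
```

Because the feature extractor only has to emit a fixed-length vector, any of the named backbones can be swapped in without changing the matching protocol, which is the extensibility the paper emphasizes.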
16. A new face presentation attack detection method based on face-weighted multi-color multi-level texture features.
- Author
-
Turhal, Uğur, Günay Yılmaz, Asuman, and Nabiyev, Vasif
- Subjects
COLOR space ,FEATURE extraction ,HUMAN fingerprints ,HUMAN facial recognition software ,TECHNOLOGICAL innovations ,BIOMETRIC identification - Abstract
Biometric data (facial, voice, fingerprint, and retinal scans, for example) are widely used in identification due to their unique and irreversible nature. Facial recognition technologies are employed in a wide range of applications due to their contactless nature and convenience. However, technological advancements and the availability of access to personal information have rendered these biometric systems susceptible to attacks utilizing fake faces. As a result, the issue of anti-spoofing has emerged as a critical one in the field of facial recognition. This study proposes a joint face presentation attack (FPA) detection method based on face-weighted multi-color multi-level LBP features extracted from the combination of device-dependent HSV and device-independent L*a*b* color spaces. The facial images were converted to HSV and L*a*b* color spaces. Three levels of regional LBP features were extracted from each color channel and then concatenated. Finally, a Multi-Color Multi-Level LBP (MCML_LBP) feature vector was obtained. In addition, the Face Weighted MCML_LBP feature vector was produced (FW_MCML_LBP) by adding the LBP histogram extracted from the central region of the normalized image. The feature vectors are used to train an SVM classifier after reducing their size using PCA. Twenty-five different test scenarios were subjected to experimentation on the CASIA and Replay-Attack databases. 2.11% EER and 0.19% HTER were achieved on CASIA (Overall) and Replay-Attack (Grandtest) databases, respectively, using the L*a*b color space and the proposed feature extraction method. The results of the study showed that the proposed method was successful in FPA detection compared to the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
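The multi-level regional LBP descriptor described above can be sketched in plain NumPy: compute an 8-neighbour LBP code image for one color channel, histogram it over successively finer grids, and concatenate. The grid levels (1x1, 2x2, 4x4) and the random single-channel input are illustrative stand-ins for the paper's three-level HSV/L*a*b* pipeline:

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        code += (nb >= c).astype(np.int32) << bit
    return code

def multilevel_lbp_hist(channel, levels=(1, 2, 4)):
    """Concatenate regional LBP histograms over a grid pyramid."""
    codes = lbp_codes(channel)
    h, w = codes.shape
    feats = []
    for g in levels:
        for r in range(g):
            for c in range(g):
                block = codes[r * h // g:(r + 1) * h // g,
                              c * w // g:(c + 1) * w // g]
                hist, _ = np.histogram(block, bins=256, range=(0, 256))
                feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)

chan = np.random.default_rng(0).random((34, 34))  # one HSV/L*a*b* channel
fv = multilevel_lbp_hist(chan)
print(fv.shape)  # (5376,): (1 + 4 + 16) regions x 256 bins
```

Running this over all six channels of the two color spaces and concatenating yields the MCML_LBP vector, which is then shrunk with PCA before the SVM, per the abstract.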
17. Feature extraction and learning approaches for cancellable biometrics: A survey.
- Author
-
Yang, Wencheng, Wang, Song, Hu, Jiankun, Tao, Xiaohui, and Li, Yan
- Subjects
BIOMETRIC identification ,FEATURE extraction ,BIOMETRY ,COMPUTER vision ,DATA privacy ,DEEP learning ,HUMAN fingerprints ,RESEARCH personnel - Abstract
Biometric recognition is a widely used technology for user authentication. In the application of this technology, biometric security and recognition accuracy are two important issues that should be considered. In terms of biometric security, cancellable biometrics is an effective technique for protecting biometric data. Regarding recognition accuracy, feature representation plays a significant role in the performance and reliability of cancellable biometric systems. How to design good feature representations for cancellable biometrics is a challenging topic that has attracted a great deal of attention from the computer vision community, especially from researchers of cancellable biometrics. Feature extraction and learning in cancellable biometrics is to find suitable feature representations with a view to achieving satisfactory recognition performance, while the privacy of biometric data is protected. This survey informs the progress, trend and challenges of feature extraction and learning for cancellable biometrics, thus shedding light on the latest developments and future research of this area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Data from University of Science and Technology Beijing Update Knowledge in Information and Data Preprocessing (Digital Dental Biometrics for Human Identification Based on Automated 3D Point Cloud Feature Extraction and Registration).
- Subjects
INFORMATION technology ,FEATURE extraction ,DENTAL crowns ,BIOMETRIC identification ,POINT cloud - Abstract
A recent report from the University of Science and Technology Beijing explores the use of intraoral scans (IOS) in human identification. The researchers propose a dental biometrics framework that utilizes 3D dental point clouds and machine learning algorithms for identification. The framework consists of three stages: data preprocessing, feature extraction, and registration-based identification. Experimental results demonstrate that the method achieves a high recognition rate, even in cases of partial tooth loss. This research provides valuable insights into the potential application of digital dental biometrics in human identification. [Extracted from the article]
- Published
- 2024