
Vision Based Segmentation and Classification of Cracks Using Deep Neural Networks.

Authors :
Reghukumar, Arathi
Anbarasi, L. Jani
Prassanna, J.
Manikandan, R.
Al-Turjman, Fadi
Source :
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems. 2021 Supplement 1, Vol. 29, p141-156. 16p.
Publication Year :
2021

Abstract

Deep learning, a branch of artificial intelligence (AI), is a booming research area. It allows end-to-end models to be built that predict outcomes from input data without manual feature extraction. This paper evaluates an automatic crack detection process for identifying cracks in building structures such as bridges, foundations, and other large structures from images. A hybrid approach combining image processing and deep learning algorithms is proposed to detect cracks in structures automatically. Cracks detected in the images are then segmented using a segmentation process. The proposed deep learning models use a hybrid architecture combining Mask R-CNN with a single-layer CNN, a 3-layer CNN, and an 8-layer CNN. These models use depthwise convolution with varying dilation rates to efficiently extract diverse features from the crack images. Performance evaluation shows that Mask R-CNN with a single-layer CNN achieves an accuracy of 97.5% on the normal dataset and 97.8% on the segmented dataset. Mask R-CNN with 2-layer convolution yields an accuracy of 98.32% on the normal dataset and 98.39% on the segmented dataset. Mask R-CNN with 8-layer convolution achieves an accuracy of 98.4% on the normal dataset and 98.75% on the segmented dataset. The proposed Mask R-CNN models have proved feasible for detecting cracks in large buildings and structures. [ABSTRACT FROM AUTHOR]
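The abstract mentions depthwise convolutions with varying dilation rates but gives no implementation details. The sketch below is a minimal, hypothetical PyTorch illustration of that kind of building block (the class name, channel counts, and dilation rates are assumptions, not the authors' architecture): several depthwise 3x3 convolutions with different dilation rates view the same feature map at different receptive fields, and a 1x1 convolution fuses the branches.

```python
import torch
import torch.nn as nn

class DilatedDepthwiseBlock(nn.Module):
    """Hypothetical sketch: depthwise 3x3 convolutions at several dilation
    rates, fused by a pointwise (1x1) convolution. Illustrates the idea of
    extracting diversified features from crack images at multiple scales."""

    def __init__(self, channels: int, dilation_rates=(1, 2, 4)):
        super().__init__()
        # groups=channels makes each convolution depthwise (one filter per channel);
        # padding=d with kernel size 3 keeps the spatial size unchanged.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, groups=channels, bias=False)
            for d in dilation_rates
        ])
        self.fuse = nn.Conv2d(channels * len(dilation_rates), channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees the same input with a different receptive field.
        feats = [branch(x) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

if __name__ == "__main__":
    block = DilatedDepthwiseBlock(channels=32)
    dummy_feature_map = torch.randn(1, 32, 224, 224)  # placeholder input
    print(block(dummy_feature_map).shape)             # torch.Size([1, 32, 224, 224])
```

In the paper's pipeline such a block would sit inside the CNN stage that follows Mask R-CNN's region proposals; how the authors actually wire the stages together is not specified in the abstract.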

Details

Language :
English
ISSN :
02184885
Volume :
29
Database :
Academic Search Index
Journal :
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems
Publication Type :
Academic Journal
Accession number :
149577045
Full Text :
https://doi.org/10.1142/S0218488521400080