623 results on '"googlenet"'
Search Results
102. Recognition of Handwritten Gujarati Conjuncts Using the Convolutional Neural Network Architectures: AlexNet, GoogLeNet, Inception V3, and ResNet50
- Author
-
Parikh, Megha, Desai, Apurva, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Singh, Mayank, editor, Tyagi, Vipin, editor, Gupta, P. K., editor, Flusser, Jan, editor, and Ören, Tuncer, editor
- Published
- 2022
- Full Text
- View/download PDF
103. Deep Learning Models for Tomato Plant Disease Detection
- Author
-
Kathole, Vishakha, Munot, Mousami, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Gupta, Deepak, editor, Sambyo, Koj, editor, Prasad, Mukesh, editor, and Agarwal, Sonali, editor
- Published
- 2022
- Full Text
- View/download PDF
104. Optimized Linguistic Feature Selection Through Deep Learning Approach for Forensic Applications
- Author
-
Sajini, G., Kallimani, Jagadish S., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Saini, H. S., editor, Sayal, Rishi, editor, Govardhan, A., editor, and Buyya, Rajkumar, editor
- Published
- 2022
- Full Text
- View/download PDF
105. Rice Crop Diseases and Pest Detection Using Edge Detection Techniques and Convolution Neural Network
- Author
-
Muruganandam, Preethi, Tandon, Varun, Baranidharan, B., Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, Engelbrecht, Andries, editor, and Shukla, Praveen Kumar, editor
- Published
- 2022
- Full Text
- View/download PDF
106. Automatic Classification of Medicinal Plants of Leaf Images Based on Convolutional Neural Network
- Author
-
Berihu, Mengisti, Fang, Juan, Lu, Shuaibing, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Liao, Xiangke, editor, Zhao, Wei, editor, Chen, Enhong, editor, Xiao, Nong, editor, Wang, Li, editor, Gao, Yang, editor, Shi, Yinghuan, editor, Wang, Changdong, editor, and Huang, Dan, editor
- Published
- 2022
- Full Text
- View/download PDF
107. Malaria Detection Using Convolutional Neural Network
- Author
-
Almezhghwi, Khaled, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Aliev, Rafik A., editor, Jamshidi, Mo, editor, Babanli, Mustafa, editor, and Sadikoglu, Fahreddin M., editor
- Published
- 2022
- Full Text
- View/download PDF
108. A Comparative Study of Deep Learning Algorithms for Identification of COVID-19 Disease Using Chest X-Ray Images
- Author
-
Hammadah, Nour Haj, Das, Nilima R., Nayak, Mamata, Swarnkar, Tripti, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Mishra, Manohar, editor, Sharma, Renu, editor, Kumar Rathore, Akshay, editor, Nayak, Janmenjoy, editor, and Naik, Bighnaraj, editor
- Published
- 2022
- Full Text
- View/download PDF
109. Utilization of Computer Vision Technique for Automated Crack Detection Based on UAV-Taken Images
- Author
-
Mirzazade, Ali, Nodeh, Maryam Pahlavan, Popescu, Cosmin, Blanksvärd, Thomas, Täljsten, Björn, di Prisco, Marco, Series Editor, Chen, Sheng-Hong, Series Editor, Vayas, Ioannis, Series Editor, Kumar Shukla, Sanjay, Series Editor, Sharma, Anuj, Series Editor, Kumar, Nagesh, Series Editor, Wang, Chien Ming, Series Editor, Pellegrino, Carlo, editor, Faleschini, Flora, editor, Zanini, Mariano Angelo, editor, Matos, José C., editor, Casas, Joan R., editor, and Strauss, Alfred, editor
- Published
- 2022
- Full Text
- View/download PDF
110. Application of Deep Convolutional Neural Networks VGG-16 and GoogLeNet for Level Diabetic Retinopathy Detection
- Author
-
Suedumrong, Chaichana, Leksakul, Komgrit, Wattana, Pranprach, Chaopaisarn, Poti, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, and Arai, Kohei, editor
- Published
- 2022
- Full Text
- View/download PDF
111. Benchmarking Analysis of CNN Architectures for Artificial Intelligence Platforms
- Author
-
Jha, Nishi, Rawat, Pooja, Tiwari, Abhishek, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Noor, Arti, editor, Sen, Abhijit, editor, and Trivedi, Gaurav, editor
- Published
- 2022
- Full Text
- View/download PDF
112. CNN-Based Vehicle Classification Using Transfer Learning
- Author
-
Rajathi, G. M., Kovilpillai, J. Judeson Antony, Sankar, Harini, Divya, S., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Raj, Jennifer S., editor, Palanisamy, Ram, editor, Perikos, Isidoros, editor, and Shi, Yong, editor
- Published
- 2022
- Full Text
- View/download PDF
113. Musical Instrument Sound Classification Using GoogleNet with SVM and kNN Model
- Author
-
Prabavathy, S., Rathikarani, V., Dhanalakshmi, P., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Chen, Joy Iong-Zong, editor, Tavares, João Manuel R. S., editor, Iliyasu, Abdullah M., editor, and Du, Ke-Lin, editor
- Published
- 2022
- Full Text
- View/download PDF
114. Preliminary study of artificial intelligence-based fuel-rod pattern analysis of low-quality tomographic image of fuel assembly
- Author
-
Saerom Seong, Sehwan Choi, Jae Joon Ahn, Hyung-joo Choi, Yong Hyun Chung, Sei Hwan You, Yeon Soo Yeom, Hyun Joon Choi, and Chul Hee Min
- Subjects
Single-photon emission computed tomography, Monte Carlo, Nuclear fuel assembly, Artificial intelligence, VGG, GoogLeNet, Nuclear engineering. Atomic power, TK9001-9401
- Abstract
Single-photon emission computed tomography is one of the reliable pin-by-pin verification techniques for spent-fuel assemblies. One of the challenges with this technique is to increase the total fuel assembly verification speed while maintaining high verification accuracy. The aim of the present study, therefore, was to develop an artificial intelligence (AI) algorithm-based tomographic image analysis technique for partial-defect verification of fuel assemblies. With the Monte Carlo (MC) simulation technique, a tomographic image dataset consisting of 511 fuel-rod patterns of a 3 × 3 fuel assembly was generated, and with these images, the VGG16, GoogLeNet, and ResNet models were trained. When these models were evaluated for different training dataset sizes, the ResNet model showed 100% pattern estimation accuracy, and across the different tomographic image qualities, all of the models showed almost 100% pattern estimation accuracy, even for low-quality images with unrecognizable fuel patterns. This study verified that an AI model can be effectively employed for accurate and fast partial-defect verification of fuel assemblies.
- Published
- 2022
- Full Text
- View/download PDF
115. Weather and surface condition detection based on road-side webcams: Application of pre-trained Convolutional Neural Network
- Author
-
Md Nasim Khan and Mohamed M. Ahmed
- Subjects
Weather detection, Winter maintenance, Roadside webcams, Convolutional Neural Network, GoogLeNet, Transportation engineering, TA1001-1280
- Abstract
Adverse weather has long been recognized as one of the major causes of motor vehicle crashes due to its negative impact on visibility and road surface. Providing drivers with real-time weather information is therefore extremely important to ensure safe driving in adverse weather. However, identification of road weather and surface conditions is a challenging task because it requires the deployment of expensive weather stations and often needs manual identification and/or verification. Most Departments of Transportation (DOTs) in the U.S. have installed roadside webcams, mostly for operational awareness. This study leveraged these easily accessible data sources to develop affordable automatic road weather and surface condition detection systems. The developed detection models focus on three weather conditions (clear, light snow, and heavy snow) as well as three surface conditions (dry, snowy, and wet/slushy). Several pre-trained Convolutional Neural Network (CNN) models, including AlexNet, GoogLeNet, and ResNet18, were applied with proper modification via transfer learning to achieve the classification tasks. The best performance was achieved using the ResNet18 architecture, with an unprecedented overall detection accuracy of 97% for weather detection and 99% for surface condition detection. The proposed study has the potential to provide more accurate and consistent weather information in real time that can be made readily available to road users and other transportation agencies. The proposed models could also be used to generate temporal and spatial variations of adverse weather for proper optimization of maintenance vehicles' routes and timing.
- Published
- 2022
- Full Text
- View/download PDF
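A minimal sketch of the transfer-learning setup described in result 115: torchvision's pretrained GoogLeNet with its final layer replaced for the three weather classes. The folder layout, hyperparameters, and training loop are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder layout: webcams/train/{clear,light_snow,heavy_snow}/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),                     # GoogLeNet expects 224x224 RGB inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_dl = DataLoader(datasets.ImageFolder("webcams/train", transform=tfm),
                      batch_size=32, shuffle=True)

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)  # torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, 3)          # 3 weather classes instead of 1000 ImageNet classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:
    out = model(images)
    logits = out.logits if hasattr(out, "logits") else out  # train-mode GoogLeNet also returns aux logits
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```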
116. SpikeGoogle: Spiking Neural Networks with GoogLeNet‐like inception module
- Author
-
Xuan Wang, Minghong Zhong, Hoiyuen Cheng, Junjie Xie, Yingchu Zhou, Jun Ren, and Mengyuan Liu
- Subjects
GoogLeNet, inception, Spiking Neural Networks, Computational linguistics. Natural language processing, P98-98.5, Computer software, QA76.75-76.765
- Abstract
The Spiking Neural Network is known as the third-generation artificial neural network, whose development has great potential. With the help of Spike Layer Error Reassignment in Time for error back-propagation, this work presents a new network called SpikeGoogle, which is implemented with a GoogLeNet-like inception module. In this inception module, different convolution kernels and a max-pooling layer are included to capture deep features across diverse scales. Experimental results on the small NMNIST dataset verify the authors' proposed SpikeGoogle, which outperforms the previous Spiking Convolutional Neural Network method by a large margin.
- Published
- 2022
- Full Text
- View/download PDF
117. Cyber-Physical System Security Based on Human Activity Recognition through IoT Cloud Computing.
- Author
-
Achar, Sandesh, Faruqui, Nuruzzaman, Whaiduzzaman, Md, Awajan, Albara, and Alazab, Moutaz
- Subjects
HUMAN activity recognition, RANSOMWARE, CYBER physical systems, SECURITY systems, COMPUTER vision, CLOUD computing
- Abstract
Cyber-physical security is vital for protecting key computing infrastructure against cyber attacks. Individuals, corporations, and society can all suffer considerable digital asset losses due to cyber attacks, including data loss, theft, financial loss, reputation harm, company interruption, infrastructure damage, ransomware attacks, and espionage. A cyber-physical attack harms both digital and physical assets. Cyber-physical system security is more challenging than software-level cyber security because it requires physical inspection and monitoring. This paper proposes an innovative and effective algorithm to strengthen cyber-physical security (CPS) with minimal human intervention. It is an approach based on human activity recognition (HAR), where a GoogleNet–BiLSTM network hybrid has been used to recognize suspicious activities in the cyber-physical infrastructure perimeter. The proposed HAR-CPS algorithm classifies suspicious activities from real-time video surveillance with an average accuracy of 73.15%. It incorporates the machine vision at the IoT edge (Mez) technology to make the system latency-tolerant. Dual-layer security is ensured by operating the proposed algorithm and the GoogleNet–BiLSTM hybrid network from a cloud server, which secures the proposed security system itself. The innovative optimization scheme makes it possible to strengthen cyber-physical security at only USD 4.29 ± 0.29 per month. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
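As a rough illustration of the GoogleNet–BiLSTM hybrid idea in result 117, the sketch below feeds per-frame GoogLeNet features into a bidirectional LSTM and classifies a clip from the last time step; the hidden size, number of classes, and clip length are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class GoogLeNetBiLSTM(nn.Module):
    """Per-frame GoogLeNet features -> bidirectional LSTM -> activity logits."""
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        backbone = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
        backbone.aux_logits = False                   # drop auxiliary heads so forward() returns a plain tensor
        backbone.aux1 = backbone.aux2 = None
        backbone.fc = nn.Identity()                   # keep the 1024-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(1024, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, clip):                          # clip: (batch, frames, 3, 224, 224)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))     # (batch*frames, 1024)
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])                  # classify from the last time step

model = GoogLeNetBiLSTM(num_classes=5)                # number of activity classes is an assumption
logits = model(torch.randn(2, 16, 3, 224, 224))       # two clips of 16 frames each
```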
118. DEVELOPING THE GOOGLENET NEURAL NETWORK FOR THE DETECTION AND RECOGNITION OF UNMANNED AERIAL VEHICLES IN THE DATA FUSION SYSTEM.
- Author
-
Semenyuk, Vladislav, Kurmashev, Ildar, Lupidi, Alberto, and Cantelli-Forti, Alessandro
- Subjects
DRONE aircraft, RECOGNITION (Psychology), TECHNOLOGY transfer, MULTISENSOR data fusion, KALMAN filtering, OPTOELECTRONIC devices, DATABASES
- Abstract
This work reports a study into the possibility of using the GoogleNet neural network in the optoelectronic channel of the Data Fusion system. The search for the most accurate algorithms for detecting and recognizing unmanned aerial vehicles (UAVs) in Data Fusion systems has been carried out. The data processing scheme was selected (merging SVF state vectors and merging MF measurements), as well as the sensors and recognition models on each channel of the system. The Data Fusion model based on the Kalman Filter was chosen, integrating radar and optoelectronic channels. Mini-radars LPI-FMCW were used as a radar channel. Evaluation of the effectiveness of the selected Data Fusion channel model in UAV detection is based on the recognition accuracy. The main study is aimed at determining the possibility of using the GoogleNet neural network in the optoelectronic channel for UAV recognition under conditions of different range classes. The neural network for the recognition of drones was developed using transfer training technology. For training, validation, and testing of the GoogleNet neural network, a database has been built, and a special application has been developed in the MATLAB environment. The capabilities of the developed neural network were studied for 5 variants of the distance to the object. The detection objects were the Inspire 2, DJI Phantom 4 Pro, DJI F450, DU 1911 UAVs, not included in the training database. The UAV recognition accuracy by the neural network was 98.13 % at a distance of up to 5 m, 94.65 % at a distance of up to 20 m, 92.47 % at a distance of up to 20 m, 90.28 % at a distance of up to 100 m, and 88.76 % at a distance of up to 200 m. The average speed of UAV recognition by this method was 0.81 s. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
119. Face Mask Detection Using GoogLeNet CNN-Based SVM Classifiers.
- Author
-
SUNNETCI, Kubilay Muhammed, AKBEN, Selahaddin Batuhan, KARA, Mevlude Merve, and ALKAN, Ahmet
- Subjects
MEDICAL masks, ARTIFICIAL intelligence, SUPPORT vector machines, MASK laws, COVID-19 pandemic
- Abstract
The COVID-19 pandemic that broke out in 2019 has affected the whole world, and in late 2021 the number of cases was still increasing rapidly. In addition, due to this pandemic, all people must follow the mask and cleaning rules; it is now mandatory to wear a mask in the workplaces where millions of people work. Hence, artificial intelligence-based systems that can detect face masks are becoming very popular today. In this study, a system that can automatically detect whether people are masked or not is proposed. Here, we extract image features from each image using the GoogLeNet architecture. With the help of these image features, we train GoogLeNet-based Linear Support Vector Machine (SVM), Quadratic SVM, and Coarse Gaussian SVM classifiers. The results show that the accuracy (%), sensitivity (%), specificity (%), precision (%), F1 score (%), and Matthews Correlation Coefficient (MCC) values of the GoogLeNet-based Linear SVM are equal to 99.55-99.55-99.55-99.55-99.55-0.9909. When the results of the proposed system are examined, it is seen that it provides an advantage due to its high accuracy. In addition, it is very useful in practice that it can detect masks from any camera. Moreover, since classification models can be created in a shorter time than object detection models, model results can be examined in a shorter time. Therefore, the proposed system also provides an advantage in terms of complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
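A minimal sketch of the GoogLeNet-feature + SVM pipeline outlined in result 119: the pretrained network is used as a fixed feature extractor and a scikit-learn SVM is trained on the pooled 1024-d features. The folder name and the linear kernel are assumptions; the paper also reports quadratic and coarse Gaussian SVM variants.

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.svm import SVC

# Hypothetical folder layout: faces/{mask,no_mask}/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dl = DataLoader(datasets.ImageFolder("faces", transform=tfm), batch_size=64)

extractor = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
extractor.fc = nn.Identity()          # expose the 1024-d pooled features instead of ImageNet logits
extractor.eval()

features, labels = [], []
with torch.no_grad():
    for x, y in dl:
        features.append(extractor(x).numpy())
        labels.append(y.numpy())
X, y = np.concatenate(features), np.concatenate(labels)

svm = SVC(kernel="linear")            # linear kernel shown; other kernels are swapped in the same way
svm.fit(X, y)
print("train accuracy:", svm.score(X, y))
```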
120. Enhanced Tyre Pressure Monitoring System for Nitrogen Filled Tyres Using Deep Learning.
- Author
-
Muturatnam, Arun Balaji, Sridharan, Naveen Venkatesh, Sreelatha, Anoop Prabhakaranpillai, and Vaithiyanathan, Sugumaran
- Subjects
DEEP learning, TIRES, RADIO transmitters & transmission, PRESSURE sensors, NOISE control, ELECTRONIC equipment
- Abstract
Tyre pressure monitoring systems (TPMS) are electronic devices that monitor tyre pressure in vehicles. Existing systems rely on wheel speed sensors or pressure sensors. They rely on batteries and radio transmitters, which add to the expense and complexity. There are two basic types of tyres: non-pneumatic and pneumatic. Non-pneumatic tyres lack air and combine the tyre and wheel into a single unit. When it comes to noise reduction, durability, and shock absorption, pneumatic tyres are more valuable than non-pneumatic tyres. In this study, nitrogen-filled pneumatic tyres were considered due to their uniform pressure management property. Additionally, nitrogen has less of an effect on thermal expansion than regular air-filled tyres. This work aimed to offer a deep learning approach for TPMS. An accelerometer captured vertical vibrations from a moving vehicle's wheel hub, which were then converted into vibration plots and categorized using pretrained networks. The most popular pretrained networks, such as AlexNet, GoogLeNet, ResNet-50, and VGG-16, were employed in this study. From these pretrained networks, the best-performing network was determined and suggested for TPMS by varying hyperparameters such as learning rate (LR), batch size (BS), train-test split ratio (TR), and solver (SR). Findings: The highest classification accuracy, 97.20%, was obtained using ResNet-50. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
121. Comparative Study of CNN Structures for Arabic Speech Recognition.
- Author
-
Talai, Zoubir, Kherici, Nada, and Bahi, Halima
- Subjects
DEEP learning, CONVOLUTIONAL neural networks, ARABIC speeches, addresses, etc.
- Abstract
Speech recognition is an essential ability of human beings and is crucial for communication. Consequently, automatic speech recognition (ASR) is a major area of research that is increasingly using artificial intelligence techniques to replicate this human ability. Among these techniques, deep learning (DL) models attract much attention, in particular convolutional neural networks (CNN), which are known for their power to model spatial relationships. In this article, three CNN architectures that performed well in recognized competitions were implemented to compare their performance in Arabic speech recognition; these are the well-known models AlexNet, ResNet, and GoogLeNet. These models were compared based on a corpus composed of Arabic spoken digits collected from various sources, including messaging and social media applications, in addition to an online corpus. The AlexNet, ResNet, and GoogLeNet architectures achieved accuracies of 86.19%, 83.46%, and 89.61%, respectively. The results show the superiority of GoogLeNet and underline the potential of CNN architectures to model acoustic features of low-resource languages such as Arabic. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
122. Detection of Monkeypox Among Different Pox Diseases with Different Pre-Trained Deep Learning Models.
- Author
-
ÇELİK, Muhammed and İNİK, Özkan
- Subjects
DEEP learning, MONKEYPOX, ARTIFICIAL intelligence, VIRUS diseases, COMPUTER vision, VACCINIA
- Abstract
Monkeypox is a viral disease that has recently spread rapidly. Experts have trouble diagnosing the disease because it is similar to other pox diseases. For this reason, researchers are working on artificial intelligence-based computer vision systems for the diagnosis of monkeypox to make it easier for experts, but a professional dataset has not yet been created. Instead, studies have been carried out on datasets obtained by collecting informal images from the Internet. The accuracy of state-of-the-art deep learning models on these datasets is unknown. Therefore, in this study, monkeypox disease was detected among cowpox, smallpox, and chickenpox using the pre-trained deep learning models VGG-19, VGG-16, MobileNet V2, GoogLeNet, and EfficientNet-B0. In experimental studies on the original and augmented datasets, MobileNet V2 achieved the highest classification accuracy, 99.25%, on the augmented dataset. In contrast, on the original data, the VGG-19 model achieved the highest classification accuracy, 78.82%. Considering these results, the shallower model yielded better results for the datasets with fewer images. When the amount of data increased, the deeper networks performed better because the weights of the deep models were updated to the desired level. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
123. Histopathological Analysis for Detecting Lung and Colon Cancer Malignancies Using Hybrid Systems with Fused Features.
- Author
-
Al-Jabbar, Mohammed, Alshahrani, Mohammed, Senan, Ebrahim Mohammed, and Ahmed, Ibrahim Abdulrab
- Subjects
LUNG cancer, COLON cancer, MEDICAL personnel, HYBRID systems, DELAYED diagnosis, HISTOPATHOLOGY
- Abstract
Lung and colon cancer are among humanity's most common and deadly cancers. In 2020, there were 4.19 million people diagnosed with lung and colon cancer, and more than 2.7 million died worldwide. Some people develop lung and colon cancer simultaneously due to smoking, which causes lung cancer and leads to an abnormal diet, which also causes colon cancer. There are many techniques for diagnosing lung and colon cancer, most notably the biopsy technique and its analysis in laboratories; however, health centers and medical staff are scarce, especially in developing countries. Moreover, manual diagnosis takes a long time and is subject to differing opinions among doctors. Thus, artificial intelligence techniques address these challenges. In this study, three strategies were developed, each with two systems, for early diagnosis of histological images of the LC25000 dataset. Histological images have been improved, and the contrast of affected areas has been increased. The GoogLeNet and VGG-19 models of all systems produced high-dimensional features, so redundant and unnecessary features were removed by the PCA method to reduce the dimensionality while retaining essential features. The first strategy diagnoses the histological images of the LC25000 dataset by ANN using the crucial features of the GoogLeNet and VGG-19 models separately. The second strategy uses ANN with the combined features of GoogLeNet and VGG-19: one system reduced the dimensions and then combined the features, while the other combined the high-dimensional features and then reduced the dimensions. The third strategy uses ANN with fusion features of the CNN models (GoogLeNet and VGG-19) and handcrafted features. With the fusion features of VGG-19 and handcrafted features, the ANN reached a sensitivity of 99.85%, a precision of 100%, an accuracy of 99.64%, a specificity of 100%, and an AUC of 99.86%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
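A compact sketch of the fusion-plus-PCA idea from result 123: features from two CNNs are concatenated, reduced with PCA, and classified with a small ANN. The feature files, dimensions, and layer sizes are placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Assume deep features were already extracted per image (shapes are illustrative):
# googlenet_feats: (n_samples, 1024), vgg19_feats: (n_samples, 4096), y: class labels
googlenet_feats = np.load("googlenet_feats.npy")   # hypothetical files
vgg19_feats = np.load("vgg19_feats.npy")
y = np.load("labels.npy")

fused = np.hstack([googlenet_feats, vgg19_feats])   # feature fusion by concatenation

clf = make_pipeline(
    PCA(n_components=512),                          # drop redundant dimensions before the ANN
    MLPClassifier(hidden_layer_sizes=(256,), max_iter=300),
)
clf.fit(fused, y)
```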
124. Sound Recognition Method of Coal Mine Gas and Coal Dust Explosion Based on GoogLeNet.
- Author
-
Yu, Xingchen and Li, Xiaowei
- Subjects
COAL dust, DUST explosions, COAL mining, COAL gas, MOLECULAR recognition, WAVELETS (Mathematics), GAS explosions
- Abstract
To address the problems of outdated monitoring means, late reporting, and missed reports in coal mine gas and coal dust explosion monitoring, a sound recognition method for coal mine gas and coal dust explosions based on GoogLeNet was proposed. After installing mining pickups in key monitoring areas of coal mines to collect the sounds of the working equipment and the environment, the collected sound was analyzed by continuous wavelet transform to obtain its scale coefficient map. This was then imported into GoogLeNet to obtain the recognition model of coal mine gas and coal dust explosions. For testing, the scale coefficient map of the test sound was obtained by continuous wavelet analysis and fed into the trained recognition model to obtain the sound class, which was then verified by experiment. Firstly, the scale coefficient maps extracted from the sound signals by continuous wavelet analysis showed that the wavelet coefficient maps of the gas explosion sound and the coal dust explosion sound were highly similar in both subjective and objective indicators, while the difference between these and the rest of the coal mine sounds was clear, helping to effectively distinguish gas and coal dust explosion sounds from other sounds. Secondly, experiments on the GoogLeNet parameters showed that the model performed best with a dropout parameter of 0.5 and an initial learning rate of 0.001. With the selected parameters, the training loss, testing loss, training recognition rate, and testing recognition rate of the model are all in line with expectations. Finally, the experimental recognition results show that the recognition rate of the proposed method is 97.38%, the recall rate is 86.1%, and the accuracy rate is 100% for a 9:1 ratio of test data to training data. The overall recognition effect of the proposed GoogLeNet is significantly better than that of VGG and AlexNet, which can effectively solve the problem of under-sampling of coal mine gas and coal dust explosion sounds and can meet the need for the intelligent recognition of coal mine gas and dust explosions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
125. Implementation of Multiple CNN Architectures to Classify the Sea Coral Images.
- Author
-
Nemer, Zainab N., Jasim, Wala'a N., and Harfash, Esra'a J.
- Subjects
CORALS, IMAGE processing, IMAGE recognition (Computer vision), PROBLEM solving
- Abstract
Image processing and computer vision play a major role in addressing many problems, and the techniques applied to images contribute greatly to finding solutions across many topics and directions. Classification techniques have a large and important role in this field, through which it is possible to recognize and classify images in a way that helps in solving a specific problem. Among the most prominent models, distinguished by its ability and accuracy, is the CNN model. In this research, we introduce a system to classify sea coral images, because sea coral and its classes have many benefits in many aspects of our lives. The core of this work is to study four CNN architecture models (i.e., AlexNet, SqueezeNet, GoogLeNet/Inception-v1, and Google Inception-v3) to determine the accuracy and efficiency of these architectures and to identify the best of them on coral image data; the details are shown in the research sections. The results showed 83.33% accuracy for AlexNet, 80.85% for SqueezeNet, 90.5% for GoogLeNet, and 93.17% for Inception-v3. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
126. Turbulence Aberration Restoration Based on Light Intensity Image Using GoogLeNet.
- Author
-
Ma, Huimin, Zhang, Weiwei, Ning, Xiaomei, Liu, Haiqiu, Zhang, Pengfei, and Zhang, Jinghui
- Subjects
CONVOLUTIONAL neural networks, LIGHT intensity, ATMOSPHERIC turbulence, TURBULENCE, ADAPTIVE optics, BIG data
- Abstract
Adaptive optics (AO) is an effective method to compensate the wavefront distortion caused by atmospheric turbulence and system distortion. The accuracy and speed of aberration restoration are important factors affecting the performance of adaptive optics correction. In recent years, an AO correction method based on a convolutional neural network (CNN) has been proposed for the non-iterative extraction of light intensity image features and recovery of phase information. This method can directly predict the Zernike coefficient of the wavefront from the measured light intensity image and effectively improve the real-time correction ability of the AO system. In this paper, a turbulence aberration restoration method based on two frames of light intensity images using GoogLeNet is established. Three depth scales of GoogLeNet and different amounts of training data are tested to verify the accuracy of Zernike phase difference restoration at different turbulence intensities. The results show that training on small data sets easily overfits the data, while training on large data sets is more stable and requires a deeper network, which is conducive to improving the accuracy of turbulence aberration restoration. The restoration effect of third-order to seventh-order aberrations is significant under different turbulence intensities. With increasing Zernike coefficient order, the error increases gradually; however, there are valley points below this growth trend for the 10th-, 15th-, 16th-, 21st-, 28th- and 29th-order aberrations. For higher-order aberrations, the greater the turbulence intensity, the greater the restoration error. The research content of this paper can provide a network design reference for turbulence aberration restoration based on deep learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
127. An Effective Semantic Mathematical Model for Skin Cancer Classification Using a Saliency-based Level Set with Improved Boundary Indicator Function.
- Author
-
Aswathanarayana, Sukesh Hoskote and Kanipakapatnam, Sundeep Kumar
- Subjects
SKIN cancer, TUMOR classification, SUPPORT vector machines, MATHEMATICAL models, ERROR rates
- Abstract
Skin cancer is one of the most commonly occurring cancers, and it causes hundreds to thousands of yearly deaths worldwide. Early identification of skin cancer significantly increases the chances of recovery. However, precise skin cancer classification is a challenging task because of the ineffective segmentation of skin cancer. In this paper, the saliency-based level set with an improved boundary indicator function (SLSIBIF) is proposed for the effective segmentation of skin cancer. An improved boundary indicator function is used in the segmentation to detect skin cancer boundaries even under the constraints of low intensity and illumination. The features from the segmented images are extracted using GoogLeNet, which uses sparse connections to extract optimal features. Further, the classification is done using a multi-class support vector machine (MSVM). The performance of the proposed SLSIBIF-MSVM is evaluated using accuracy, sensitivity, specificity, positive predictive value (PPV), error rate, and the Jaccard and Dice coefficients. Existing approaches such as the deep-learning system (DLS), ResNet-50, K-means with the grasshopper optimization algorithm (GOA), and Region-based CNN (RCNN) with Fuzzy K-means (FKM) are used for comparison with the SLSIBIF-MSVM. The classification accuracy of SLSIBIF-MSVM on the ISIC-2017 dataset is 98.74%, which is higher than that of DLS, ResNet-50, K-means GOA, and RCNN-FKM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
128. A Novel ICESat-2 Signal Photon Extraction Method Based on Convolutional Neural Network
- Author
-
Wenjun Qin, Yan Song, Yarong Zou, Haitian Zhu, and Haiyan Guan
- Subjects
ICESat-2, signal photon extraction, photon data transformation, GoogLeNet, CBAM, Science
- Abstract
When it comes to the application of the photon data gathered by the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), accurately removing noise is crucial. In particular, conventional denoising algorithms based on local density are susceptible to missing some signal photons when there is uneven signal density distribution, as well as being susceptible to misclassifying noise photons near the signal photons; the application of deep learning remains untapped in this domain as well. To solve these problems, a method for extracting signal photons based on a GoogLeNet model fused with a Convolutional Block Attention Module (CBAM) is proposed. The network model can make good use of the distribution information of each photon’s neighborhood, and simultaneously extract signal photons with different photon densities to avoid misclassification of noise photons. The CBAM enhances the network to focus more on learning the crucial features and improves its discriminative ability. In the experiments, simulation photon data in different signal-to-noise ratios (SNR) levels are utilized to demonstrate the superiority and accuracy of the proposed method. The results from signal extraction using the proposed method in four experimental areas outperform the conventional methods, with overall accuracy exceeding 98%. In the real validation experiments, reference data from four experimental areas are collected, and the elevation of signal photons extracted by the proposed method is proven to be consistent with the reference elevation, with R2 exceeding 0.87. Both simulation and real validation experiments demonstrate that the proposed method is effective and accurate for extracting signal photons.
- Published
- 2024
- Full Text
- View/download PDF
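The CBAM block mentioned in result 128 is a generic attention module that can be inserted into a GoogLeNet-style backbone. Below is a standard minimal PyTorch implementation (channel attention followed by spatial attention); the reduction ratio, kernel size, and insertion point are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps
        sp = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sp))

attn = CBAM(channels=192)
out = attn(torch.randn(2, 192, 28, 28))   # e.g. applied to a feature map from an early GoogLeNet stage
```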
129. Person Re-identification Method Based on GoogLeNet-GMP Based on Vector Attention Mechanism
- Author
-
MENG Yue-bo, MU Si-rong, LIU Guang-hui, XU Sheng-jun, HAN Jiu-qiang
- Subjects
person re-identification, attention mechanism, googlenet, spatial pyramid pooling, loss function, Computer software, QA76.75-76.765, Technology (General), T1-995
- Abstract
In order to improve the accuracy and applicability of person re-identification (Re-ID), a Re-ID method based on a vector attention mechanism GoogLeNet is proposed. Firstly, three groups of images (anchor, positive, and negative) are input into the GoogLeNet-GMP network to obtain segmented feature vectors. Then, spatial pyramid pooling (SPP) is used to aggregate the features from different pyramid levels, and an attention mechanism is introduced. By integrating the multi-scale pooling regions that represent the visual information of the target, distinguishable features on multiple semantic levels are obtained. At the same time, the mixture of two different loss functions is taken as the final loss function. Experiments on the Market-1501 and DukeMTMC datasets show that the proposed method performs better on the Rank-1 and mAP indicators than other excellent methods.
- Published
- 2022
- Full Text
- View/download PDF
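Result 129 trains on anchor/positive/negative triplets with a mixed loss. The sketch below shows that general setup, combining a triplet margin loss on GoogLeNet embeddings with an identity cross-entropy loss; the margin, the embedding dimension, and the 751 Market-1501 training identities are assumptions about a typical Re-ID pipeline, not the paper's exact GoogLeNet-GMP/SPP design.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
backbone.aux_logits = False
backbone.aux1 = backbone.aux2 = None
backbone.fc = nn.Identity()                    # 1024-d embedding per image
id_head = nn.Linear(1024, 751)                 # 751 identities in the Market-1501 training split

triplet_loss = nn.TripletMarginLoss(margin=0.3)
ce_loss = nn.CrossEntropyLoss()

def combined_loss(anchor_img, positive_img, negative_img, anchor_id):
    a, p, n = backbone(anchor_img), backbone(positive_img), backbone(negative_img)
    # Mix a metric-learning loss on the embeddings with an identity-classification loss
    return triplet_loss(a, p, n) + ce_loss(id_head(a), anchor_id)

loss = combined_loss(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224),
                     torch.randn(4, 3, 224, 224), torch.randint(0, 751, (4,)))
loss.backward()
```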
130. Performance evaluation of gamma‐ray irradiated silicone rubber nano‐micro composites using electrical, thermal, physiochemical and deep learning techniques.
- Author
-
Paul, Moutusi, Mishra, Palash, Vinod, Pabbati, Sarathi, Ramanujam, and Mondal, Mithun
- Subjects
CONVOLUTIONAL neural networks, SILICONE rubber, NUCLEAR power plants, ATOMIC force microscopy, SURFACE potential, DEEP learning, THERMOGRAVIMETRY, FOOD emulsions
- Abstract
Silicone rubber insulation used in nuclear power stations is inevitably exposed to highly energetic gamma-ray irradiation, which may lead to premature failure. Therefore, it is mandatory to develop silicone rubber composites capable of withstanding the stress posed by gamma-ray irradiation. The present research involves the development of micro aluminum tri-hydrate (ATH) and nano silica (SiO2) doped/co-doped silicone rubber nanocomposites best suited for use in nuclear power stations. The samples were exposed to gamma-ray irradiation of 500 kGy. Surface changes incurred were assessed using atomic force microscopy (AFM), Fourier transform infrared (FTIR) spectroscopy, contact angle, surface potential/impedance, and water droplet-initiated corona inception voltage (CIV) measurements; thermogravimetric analysis (TGA) was adopted to characterize the thermal stability of the specimens. Further, the moisture absorption behavior of the test specimens in a strong electrolytic NaCl solution has been investigated. The erosion resistance of the virgin and gamma-ray irradiated specimens has been classified using a GoogLeNet convolutional neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
131. A Comparative Study of Different CNN Models and Transfer Learning Effect for Underwater Object Classification in Side-Scan Sonar Images.
- Author
-
Du, Xing, Sun, Yongfu, Song, Yupeng, Sun, Huifeng, and Yang, Lei
- Subjects
SONAR imaging, DEEP learning, CONVOLUTIONAL neural networks, AUTOMATIC target recognition, IMAGE recognition (Computer vision), SONAR
- Abstract
With the development of deep learning techniques, convolutional neural networks (CNN) are increasingly being used in image recognition for marine surveys and underwater object classification. Automatic recognition of targets in side-scan sonar (SSS) images using CNN can improve recognition accuracy and efficiency. However, the vast selection of CNN models makes it challenging to select models for target recognition in SSS images. Therefore, this paper aims to comprehensively compare the prediction accuracy and computational performance of different CNN models. First, four traditional CNN models were applied to train on and predict the same submarine SSS dataset, using both the original models and models with transfer learning methods. Then, we examined the prediction accuracy and computational performance of the four CNN models. Results showed that transfer learning enhances the accuracy of all CNN models, with smaller improvements for AlexNet and VGG-16 and greater improvements for GoogleNet and ResNet101. GoogleNet has the highest prediction accuracy (100% on the training dataset and 94.27% on the test dataset) and a reasonable computational cost. The findings of this work are useful for future model selection in target recognition in SSS images. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
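Result 131 (like the benchmarking study in result 111) compares CNN architectures on both accuracy and computational performance. The snippet below illustrates only the computational side: counting parameters and timing CPU inference for several torchvision models; the model list and the timing protocol are assumptions, not the papers' protocols.

```python
import time
import torch
from torchvision import models

candidates = {
    "AlexNet": models.alexnet(),
    "VGG-16": models.vgg16(),
    "GoogLeNet": models.googlenet(),
    "ResNet-101": models.resnet101(),
}

dummy = torch.randn(1, 3, 224, 224)
for name, net in candidates.items():
    net.eval()
    params = sum(p.numel() for p in net.parameters()) / 1e6
    with torch.no_grad():
        net(dummy)                                  # warm-up pass
        start = time.perf_counter()
        for _ in range(20):
            net(dummy)
        ms = (time.perf_counter() - start) / 20 * 1000
    print(f"{name:10s}  {params:6.1f} M params  {ms:6.1f} ms/image (CPU)")
```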
132. Using Wavelet Analysis and Deep Learning for EMG-Based Hand Movement Signal Classification.
- Author
-
GÜNEŞ, Harun and AKKAYA, Abdullah Erhan
- Subjects
WAVELETS (Mathematics), SIGNAL classification, HAND signals, DEEP learning, WAVELET transforms, SIGNAL processing, ELECTROMYOGRAPHY
- Abstract
In this study, time-series electromyography (EMG) data were classified according to hand movements using wavelet analysis and deep learning. A pre-trained deep CNN (Convolutional Neural Network, GoogLeNet) was used in the classification process together with signal processing; in this way, results were obtained by combining continuous wavelet transform and classification methods. The dataset used was taken from the Machine Learning Repository at the University of California. It contains EMG data from 5 healthy individuals, 2 males and 3 females, of the same age range (~20-22 years). The data cover six hand movements: grasping spherical objects (Spher), grasping small objects with the fingertips (Tip), grasping objects with the palm (Palm), grasping thin/flat objects (Lat), grasping cylindrical objects (Cyl), and holding heavy objects (Hook); the speed and force of each movement are left to the subject's will. Each person performs each movement for 6 seconds and repeats each movement (action) 30 times. The continuous wavelet transform (CWT) was used to transform the signals into images: the scalogram image of each signal was created using the CWT method, and the generated images were collected in a dataset folder. The collected scalogram images were classified using GoogLeNet, a deep learning network model. With GoogLeNet, accuracy rates of 97.22% and 88.89% were obtained by classifying the scalogram images of the signals received separately from channel 1 and channel 2 of the dataset. The applied model can be used to classify EMG signals with a high success rate. In this study, 80% of the data was used for training and 20% for validation, and the classification results were evaluated separately for the first- and second-channel data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
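A minimal sketch of the scalogram step described in result 132, using PyWavelets to turn a 6-second signal segment into a CWT scalogram image that a CNN such as GoogLeNet can later consume. The sampling rate, Morlet wavelet, scale range, and output path are assumptions; a real EMG segment would replace the random placeholder.

```python
import os
import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 500                                    # assumed sampling rate (Hz)
t = np.arange(0, 6, 1 / fs)                 # each movement lasts 6 seconds
emg = np.random.randn(t.size)               # placeholder for one EMG channel segment

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(emg, scales, "morl", sampling_period=1 / fs)

# Save the scalogram as an image; one folder per movement class feeds the CNN later
os.makedirs("scalograms/spher", exist_ok=True)
plt.imshow(np.abs(coeffs), aspect="auto", cmap="jet",
           extent=[t[0], t[-1], freqs[-1], freqs[0]])
plt.axis("off")
plt.savefig("scalograms/spher/sample_000.png", bbox_inches="tight", pad_inches=0)
plt.close()
```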
133. Detection of COVID-19 and Viral Pneumonia from Chest X-Ray Images Using Deep Learning.
- Author
-
TÜFEKCİ, Pınar and GEZİCİ, Burak
- Published
- 2023
- Full Text
- View/download PDF
134. Diagnosis of Autism Spectrum Disorder Using Convolutional Neural Networks.
- Author
-
Hendr, Amna, Ozgunalp, Umar, and Erbilek Kaya, Meryem
- Subjects
CONVOLUTIONAL neural networks, AUTISM spectrum disorders, MACHINE learning, DEEP learning, DIAGNOSIS
- Abstract
Autism spectrum disorder (ASD) as a condition has posed significant early diagnosis challenges to the medical and health community for a long time. The early diagnosis of ASD is crucial for early intervention and adequate management of the condition. Several studies have shown that children with ASD have varying degrees of difficulty with handwriting tasks; hence, this research has proposed the creation of a handwritten dataset of both ASD and non-ASD subjects for deep learning classification. The created dataset is based on a series of handwritten tasks given to subjects, such as drawing and writing. The dataset was used to propose a deep learning automated ASD diagnosis method. Using the GoogleNet transfer learning algorithm, each handwritten task in the dataset is trained and classified for each subject. This is done because in real-life scenarios an ASD subject may not comply with performing and finishing all handwritten tasks. Using a training and testing ratio of 80:20, a total of 104 subjects' handwritten tasks were used as input for training and classification, and it is shown that the proposed approach can correctly classify ASD with an accuracy of 90.48%, where sensitivity, specificity, and F1 score are calculated as 80%, 100%, and 100%, respectively. The results of our proposed method exhibit an impressive performance and indicate that the use of handwritten tasks has significant potential for the early diagnosis of ASD. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
135. Recognition and Classification of Handwritten Urdu Numerals Using Deep Learning Techniques.
- Author
-
Bhatti, Aamna, Arif, Ameera, Khalid, Waqar, Khan, Baber, Ali, Ahmad, Khalid, Shehzad, and Rehman, Atiq ur
- Subjects
OPTICAL character recognition, DEEP learning, NUMERALS, CONVOLUTIONAL neural networks, PATTERN recognition systems, SUPPORT vector machines, VECTOR valued functions
- Abstract
Urdu is a complex language, as it is an amalgam of many South Asian and East Asian languages; hence, its character recognition is a huge and difficult task. It is a bidirectional language whose numerals are written from left to right while the script is written in the opposite direction, which induces complexities in the recognition process. This paper presents the recognition and classification of a novel Urdu numeral dataset using convolutional neural networks (CNN) and their variants. We propose a custom CNN model to extract features, which are then classified by a Softmax activation function and a support vector machine (SVM) classifier. We compare it with GoogLeNet and the residual network (ResNet) in terms of performance. Our proposed CNN gives an accuracy of 98.41% with the Softmax classifier and 99.0% with the SVM classifier. We achieve an accuracy of 95.61% with GoogLeNet and 96.4% with ResNet. Moreover, we develop datasets for handwritten Urdu numbers and numbers on Pakistani currency to incorporate real-life problems. Our models achieve the best accuracies compared to previous models in the optical character recognition (OCR) literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
136. Inter-frame video forgery detection using UFS-MSRC algorithm and LSTM network.
- Author
-
Girish, N. and Nandini, C.
- Subjects
FORGERY, FEATURE selection, FEATURE extraction, VIDEO coding, VIDEOS, ALGORITHMS, VIDEO surveillance
- Abstract
Region-duplication forgery is a common type of video tampering, and the traditional techniques used to detect video tampering are ineffective and inefficient for forged videos with complex backgrounds. To overcome this issue, a novel video forgery detection model is introduced in this research paper. Initially, the input video sequences are collected from the Surrey University Library for Forensic Analysis (SULFA) and Sondos datasets. Further, a spatiotemporal averaging method is applied to the collected video sequences to obtain background information with faint traces of moving objects for effective video forgery detection. Next, feature extraction is performed using the GoogLeNet model to extract the feature vectors. Then, the Unsupervised Feature Selection with Multi-Subspace Randomization and Collaboration (UFS-MSRC) approach is used to choose discriminative feature vectors, which substantially reduces the training time and improves the detection accuracy. Finally, a long short-term memory (LSTM) network is applied for forgery detection in the different video sequences. The experimental evaluation illustrated that the UFS-MSRC with LSTM model attained 98.13% and 97.38% accuracy on the SULFA and Sondos datasets, results that are better than those of the existing video forgery detection models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
137. ULTRASOUND IMAGE SYNTHETIC GENERATING USING DEEP CONVOLUTION GENERATIVE ADVERSARIAL NETWORK FOR BREAST CANCER IDENTIFICATION.
- Author
-
Haq, Dina Zatusiva and Fatichah, Chastine
- Subjects
ULTRASONIC imaging, BREAST cancer, GENERATIVE adversarial networks, CONVOLUTIONAL neural networks, ACCURACY
- Abstract
Breast cancer is the leading cause of death among women worldwide; the risk of death from breast cancer can be reduced by early identification through ultrasound image analysis, classifying ultrasound images into three classes (Normal, Benign, and Malignant). The dataset used here is imbalanced, and imbalanced data cause the classification system to recognize only the majority class, so the imbalance must be handled. In this study, the imbalance is handled by implementing the Deep Convolutional Generative Adversarial Network (DCGAN) method to add synthetic images to the training data. The DCGAN method generates synthetic images with feature learning on a Convolutional Neural Network (CNN), making DCGAN more stable than the basic generative adversarial network method. Synthetic and original images were further classified using the GoogleNet CNN, which performs well in image classification at a reasonable computational cost. Synthetic ultrasound images were generated by tuning the DCGAN hyperparameters to match the GoogleNet input size for imbalanced data handling. The experimental results show that the DCGAN-GoogleNet implementation handles the imbalanced data with higher accuracy than conventional augmentation and other previous research, reaching 91.61%, which is 1% to 4% higher than the accuracy of the previous methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
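Result 137 balances the ultrasound classes with DCGAN-generated images. The sketch below shows only the generator half of a standard DCGAN producing 64x64 grayscale outputs; the latent size, feature widths, and image size are generic DCGAN defaults, not the hyperparameters tuned in the paper, and the discriminator and adversarial training loop are omitted.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: 100-d noise -> 64x64 grayscale ultrasound-like image."""
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),
            nn.Tanh(),                        # outputs scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
fake = g(torch.randn(16, 100, 1, 1))          # 16 synthetic 1x64x64 images
```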
138. A Deep Learning Approach for Diabetic Foot Ulcer Classification and Recognition.
- Author
-
Ahsan, Mehnoor, Naz, Saeeda, Ahmad, Riaz, Ehsan, Haleema, and Sikandar, Aisha
- Subjects
DEEP learning, DIABETIC foot, LEG amputation, DIABETES complications, DATA augmentation, CLASSIFICATION
- Abstract
Diabetic foot ulcer (DFU) is one of the major complications of diabetes and results in amputation of the lower limb if not treated in a timely and proper manner. Despite the traditional clinical approaches used in DFU classification, automatic methods based on a deep learning framework show promising results. In this paper, we present several end-to-end CNN-based deep learning architectures, i.e., AlexNet, VGG16/19, GoogLeNet, ResNet50/101, MobileNet, SqueezeNet, and DenseNet, for infection and ischemia categorization using the benchmark dataset DFU2020. We fine-tune the weights to overcome the lack of data and reduce the computational cost. Affine transform techniques are used for the augmentation of input data. The results indicate that ResNet50 achieves the highest accuracies of 99.49% and 84.76% for ischaemia and infection, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
139. PCG signal classification using a hybrid multi round transfer learning classifier.
- Author
-
Ismail, Shahid and Ismail, Basit
- Subjects
SIGNAL classification, FEATURE extraction, SIGNAL sampling
- Abstract
Diagnosis of cardiovascular diseases using phonocardiography (PCG) is a challenging task, as the signal itself is cyclo-stationary. Its spectral content overlaps with that of multiple sources that have similar spectral content but act as noise. Moreover, variation in signal length and sampling with different equipment also make the analysis of these signals a demanding task. In this research, the authors introduce a hybrid technique to counter the variations just mentioned. The technique is composed of high-resolution spectrum generation, conversion of the spectral content to a spectrogram, and multi-round training. Using fixed-length spectral content makes the system independent of signal length. From the spectrogram, deep features can be extracted and used as input to pre-trained networks (PTNs). Finally, transfer learning is applied with multiple rounds of training. The introduced methodology is validated using multiple datasets with different PCG signals, sampling frequencies, signal lengths, and signal quality. From the reported results, it is evident that Chirplet Z transform (CZT) based spectrograms can be utilized for multiclass classification. If CZT-based spectrograms are passed through multiple rounds of training, accuracy can be further increased. The reported results reach 99% accuracy in the best-case test scenarios, and even in the worst case the results do not fall below 85%. An important observation, however, is that they are consistent across the experimental protocols. The computational cost associated with the introduced technique is low, which makes it suitable for hardware implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
140. An efficient image dahazing using Googlenet based convolution neural networks.
- Author
-
G, Harish Babu and N, Venkatram
- Subjects
CONVOLUTIONAL neural networks, DIGITAL cameras, DEEP learning
- Abstract
Dehazing is an important colour image-processing technique for obtaining high-quality images from hazy images. Nowadays, digital cameras play a key role in many applications, such as scanning, HD image generation, traffic monitoring, tourism, and, especially in hilly areas, satellite and radar applications. Dehazing is a complex function for digital cameras, since a Bayer mosaic image must be converted into a final color image before the output image is estimated. The full colour image cannot be reconstructed from incomplete samples because of the haze problem, and hence appropriate dehazing models are implemented to overcome it. In this work, a dehazing algorithm based on the GoogleNet deep learning mechanism is proposed to improve image quality. In this investigation, the GoogleNet deep learning model is used to reconstruct the full color image without degrading sensitivity or resolution. In the proposed work, deep learning-based convolutional networks are realized using demosaicking as pre-processing to reproduce dehazed full-color images from the incomplete samples. In this task, the first step is demosaicking, which produces a rough image containing unwanted color artifacts. The second step is refinement, in which deep residual estimation is used to decrease the color artifacts, together with a multi-model fusion concept, to produce good-quality output images. The performance measures, viz. peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean square error (MSE), are evaluated and compared with existing models. The optimized algorithm attains a PSNR of 32.78, an SSIM of 0.9412, an MSE of 0.098, an F1-score of 0.989, a sensitivity of 0.972, and a CC of 0.978. The GoogleNet-based approach outperforms the existing methods; this deep learning mechanism processes the input hazy images and separates the smooth haze-free components from the texture components. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
141. A Deep Learning-Based Model for Tree Species Identification Using Pollen Grain Images.
- Author
-
Minowa, Yasushi, Shigematsu, Koharu, and Takahara, Hikaru
- Subjects
POLLEN, FOSSIL pollen, MACHINE learning, DATA augmentation, DEEP learning, SPECIES
- Abstract
The objective of this study was to develop a deep learning-based tree species identification model using pollen grain images taken with a camera mounted on an optical microscope. From five focal points, we took photographs of pollen collected from tree species widely distributed in the Japanese archipelago, and we used these to produce pollen images. We used Caffe as the deep learning framework and AlexNet and GoogLeNet as the deep learning algorithms. We constructed four learning models that combined two learning patterns, one for focal point images with data augmentation, for which the training and test data were the same, and the other without data augmentation, for which they were not the same. The performance of the proposed model was evaluated according to the MCC and F score. The most accurate classification model was based on the GoogLeNet algorithm, with data augmentation after 200 epochs. Tree species identification accuracy varied depending on the focal point, even for the same pollen grain, and images focusing on the pollen surface tended to be more accurately classified than those focusing on the pollen outline and membrane structure. Castanea crenata, Fraxinus sieboldiana, and Quercus crispula pollen grains were classified with the highest accuracy, whereas Gamblea innovans, Carpinus tschonoskii, Cornus controversa, Fagus japonica, Quercus serrata, and Quercus sessilifolia showed the lowest classification accuracy. Future studies should consider application to fossil pollen in sediments and state-of-the-art deep learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
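Entry 141 evaluates its models with the MCC and F score. As a hedged illustration rather than the study's own pipeline, the short sketch below computes both measures with scikit-learn on hypothetical label arrays.
```python
from sklearn.metrics import matthews_corrcoef, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]   # ground-truth species labels (hypothetical)
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]   # labels predicted by the CNN (hypothetical)

mcc = matthews_corrcoef(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="macro")   # macro-averaged F score over species

print(f"MCC={mcc:.3f}  macro-F1={f1:.3f}")
```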
142. Decimal Digits Recognition from Lip Movement Using GoogleNet network.
- Author
-
Naif, Kwakib Saadun and Hashim, Kadhim Mahdi
- Subjects
LIPREADING ,DECIMAL system ,ARTIFICIAL neural networks ,FEATURE extraction ,IMAGE recognition (Computer vision) - Abstract
Lip reading is a visual way of communicating through the movement of the lips, especially for the hearing impaired and for people in noisy environments such as stadiums and airports. Lip reading is not easy and faces many difficulties, especially when recording a video of the person, including lighting, rotation, the person's position, and different skin colors, so researchers are constantly looking for new lip-reading techniques. The main objective of this paper is to design and implement an effective system for identifying decimal digits from lip movement. Our proposed system consists of two stages. The first is preprocessing, in which the Viola-Jones algorithm detects the face and mouth area, and the lips are located and stored in a temporary folder. The second stage feeds the lip frames into a GoogleNet neural network, where features are extracted in the convolutional layers and then classified. The results were convincing: we obtained an accuracy of 87% using a database of 35 videos of seven males and two females, comprising 21,501 lip images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
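Entry 142 describes a preprocessing stage that uses Viola-Jones to detect the face and mouth before GoogleNet classification. The sketch below is one plausible reading of that stage using OpenCV's bundled Haar cascade; cropping the lower third of the detected face as the lip region and the 224 x 224 resize are our assumptions, not details given in the abstract.
```python
import cv2

# Viola-Jones face detector shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("frame_000.png")              # one video frame (placeholder name)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    # Assumption: take the lower third of the face box as the mouth/lip region.
    mouth_roi = frame[y + 2 * h // 3 : y + h, x : x + w]
    mouth_roi = cv2.resize(mouth_roi, (224, 224))   # GoogLeNet-style input size
    cv2.imwrite("lip_roi.png", mouth_roi)
```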
143. Design of Voice Recognition Acoustic Compression System Based on Neural Network.
- Author
-
Xiwen, Yuan
- Subjects
DEEP learning ,CONVOLUTIONAL neural networks ,MOBILE apps ,SAMPLING theorem ,ACOUSTIC models - Abstract
The high complexity of deep neural networks hinders their deployment on mobile devices, which motivates compressing and accelerating the deep neural network model. To improve the performance of the voice recognition acoustic compression system, this paper improves on the traditional neural network and stitches together the transfer features of multiple deep convolutional neural networks to obtain a more discriminative feature representation than the transfer-learning features of a single convolutional neural network. Based on practical requirements, a voice recognition acoustic compression system built on deep convolutional neural networks is then constructed. After the system framework is built, the performance and recognition accuracy of the voice acoustic system are studied from two perspectives. The experimental findings show that the voice recognition acoustic compression system constructed in this paper performs well: its voice data processing speed is 111 s and its average accuracy is 94.1%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
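Entry 143 builds its representation by stitching together transfer features from multiple convolutional networks. The PyTorch sketch below illustrates that general idea under stated assumptions: the choice of GoogLeNet and ResNet-18 as the two backbones, the 10-class output, and the random input batch are ours, not the paper's.
```python
import torch
import torch.nn as nn
from torchvision import models

# Two ImageNet-pretrained backbones (assumed choices for illustration).
googlenet = models.googlenet(weights="DEFAULT")
resnet = models.resnet18(weights="DEFAULT")

# Replace each classifier with identity so each backbone emits a feature vector.
googlenet.fc = nn.Identity()   # 1024-dimensional features
resnet.fc = nn.Identity()      # 512-dimensional features
googlenet.eval()
resnet.eval()

classifier = nn.Linear(1024 + 512, 10)   # 10 output classes is an assumption

x = torch.randn(4, 3, 224, 224)          # stand-in batch (e.g. spectrogram-like images)
with torch.no_grad():
    fused = torch.cat([googlenet(x), resnet(x)], dim=1)   # stitched feature vector
logits = classifier(fused)
print(logits.shape)   # torch.Size([4, 10])
```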
144. Modification of the GoogLeNet Convolutional Neural Network Architecture with Dull Razor Filtering for Skin Cancer Classification
- Author
-
Sofia Saidah, I Putu Yowan Nugraha Suparta, and Efri Suhartono
- Subjects
accuracy ,cnn ,f-1 score ,googlenet ,skin cancer ,loss ,precision ,recall ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
The skin is the largest external organ covering the human body. Because of its intense exposure to the outside environment, the skin can develop various health problems, one of which is skin cancer. Early detection is needed so that further treatment of the patient can be carried out promptly. Artificial intelligence (AI)-based solutions for skin cancer image processing can be used to detect potential skin cancer. In this paper, benign and malignant skin cancers are classified using a convolutional neural network (CNN) with the GoogLeNet architecture. The GoogLeNet architecture has the advantage of its inception module, which allows convolution and pooling to run in parallel, shortening computation time and speeding up classification without reducing system accuracy. The study consists of several stages, starting with the acquisition of six hundred skin cancer images obtained from Kaggle.com, followed by resizing the inputs to a uniform size and applying dull razor filtering to reduce input-image noise caused by fine hairs growing along the epidermis. After preprocessing, the GoogLeNet architecture processes the input images and classifies each one as benign or malignant skin cancer. System performance is then evaluated with metrics such as precision, recall, and F1-score and compared with similar methods. The system achieves satisfactory results, including an accuracy of 97.73% and a loss of 1.7063, while precision, recall, and F1-score each average 0.98. The proposed system obtains better accuracy than previous studies while using a much smaller dataset. These results show that the CNN method can detect and classify skin cancer accurately, and it is hoped that this method can help medical workers diagnose the wider community.
- Published
- 2022
- Full Text
- View/download PDF
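Entry 144 applies Dull Razor filtering to suppress hair artifacts before GoogLeNet classification. The sketch below shows a common OpenCV realisation of Dull Razor-style hair removal (black-hat filtering plus inpainting); the kernel size, threshold, and file names are assumptions, since the abstract does not give the exact settings.
```python
import cv2

image = cv2.imread("skin_lesion.jpg")                         # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Black-hat filtering highlights thin dark structures such as hairs.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))  # assumed kernel size
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)  # assumed threshold

# Inpaint the masked hair pixels to obtain a cleaner lesion image.
clean = cv2.inpaint(image, hair_mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("skin_lesion_clean.jpg", clean)
```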
145. Cataract Classification Based on Fundus Images Using Convolutional Neural Network
- Author
-
Richard Bina Jadi Simanjuntak, Yunendah Fu’adah, Rita Magdalena, Sofia Saidah, Abel Bima Wiratama, and Ibnu Da’wan Salim Ubaidah
- Subjects
cataract ,convolutional neural network ,googlenet ,mobilenet ,resnet. ,Computer software ,QA76.75-76.765 - Abstract
A cataract is a disease that attacks the lens of the eye and makes it difficult to see. Cataracts can occur due to hydration of the lens (addition of fluid) or denaturation of the proteins in the lens. Cataracts that are not treated properly can lead to blindness, so early detection is needed to provide treatment appropriate to the stage of the cataract. In this study, cataract classification based on fundus images was compared across GoogleNet, MobileNet, ResNet, and a proposed Convolutional Neural Network. We compared the four CNN architectures using the Adam optimizer with a learning rate of 0.001. The data comprise 399 images, augmented to 3200. The best and most stable results were obtained by the proposed CNN model with 92% accuracy, followed by MobileNet at 92%, ResNet at 93%, and GoogLeNet at 86%. We also compare with previous research: most earlier studies used only two or three class categories, whereas in this study the system classifies into four categories: Normal, Immature, Mature, and Hypermature. In addition, the accuracy obtained is also quite good compared with previous studies that used manual feature extraction. This study is expected to help medical staff carry out early detection of cataracts, prevent their dangerous effects, and provide appropriate medical treatment. In the future, we want to expand the number of datasets to improve the classification accuracy of the cataract detection system.
- Published
- 2022
- Full Text
- View/download PDF
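Entry 145 trains its networks with the Adam optimizer at a learning rate of 0.001 and classifies fundus images into four categories. A minimal PyTorch sketch of that setup for the GoogLeNet branch is given below; the random tensors stand in for real fundus images, and the input and batch sizes are assumptions.
```python
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights="DEFAULT")                  # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 4)                # Normal, Immature, Mature, Hypermature

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # Adam, lr 0.001, as in the abstract
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for fundus images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss={loss.item():.4f}")
```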
146. Revealing the Potential of Deep Learning for Detecting Submarine Pipelines in Side-Scan Sonar Images: An Investigation of Pre-Training Datasets
- Author
-
Xing Du, Yongfu Sun, Yupeng Song, Lifeng Dong, and Xiaolong Zhao
- Subjects
side-scan sonar ,convolutional neural networks ,transfer learning ,geological survey ,GoogleNet ,Yellow River Estuary ,Science - Abstract
This study introduces a novel approach to the critical task of submarine pipeline or cable (POC) detection by employing GoogleNet for the automatic recognition of side-scan sonar (SSS) images. The traditional interpretation methods, heavily reliant on human interpretation, are replaced with a more reliable deep-learning-based methodology. We explored the enhancement of model accuracy via transfer learning and scrutinized the influence of three distinct pre-training datasets on the model’s performance. The results indicate that GoogleNet facilitated effective identification, with accuracy and precision rates exceeding 90%. Furthermore, pre-training with the ImageNet dataset increased prediction accuracy by about 10% compared to the model without pre-training. The model’s prediction ability was best promoted by pre-training datasets in the following order: Marine-PULSE ≥ ImageNet > SeabedObjects-KLSG. Our study shows that pre-training dataset categories, dataset volume, and data consistency with predicted data are crucial factors affecting pre-training outcomes. These findings set the stage for future research on automatic pipeline detection using deep learning techniques and emphasize the significance of suitable pre-training dataset selection for CNN models.
- Published
- 2023
- Full Text
- View/download PDF
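Entry 146 compares GoogLeNet fine-tuned with and without pre-training. The sketch below shows only the two initialisations available in torchvision; the Marine-PULSE and SeabedObjects-KLSG weights are not publicly packaged there, so only the ImageNet and from-scratch cases are shown, and the two-class pipeline/background head is our assumption.
```python
import torch.nn as nn
from torchvision import models

def build_googlenet(pretrained: bool) -> nn.Module:
    """Return a GoogLeNet with a two-class head, with or without ImageNet weights."""
    weights = models.GoogLeNet_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.googlenet(weights=weights)
    # Assumption: binary output (pipeline vs. background); aux heads are left untouched here.
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

model_scratch = build_googlenet(pretrained=False)    # no pre-training (random init)
model_imagenet = build_googlenet(pretrained=True)    # ImageNet transfer learning
```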
147. Deep Learning in Quadratic Frequency Modulated Thermal Wave Imaging for Automatic Defect Detection
- Author
-
Vesala, G. T., Ghali, V. S., Naik, R. B., Vijaya Lakshmi, A., Suresh, B., Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Bajpai, Manish Kumar, editor, Kumar Singh, Koushlendra, editor, and Giakos, George, editor
- Published
- 2021
- Full Text
- View/download PDF
148. A Machine Learning Driven Android Based Mobile Application for Flower Identification
- Author
-
Islam, Towhidul, Absar, Nurul, Adamov, Abzetdin Z., Khandaker, Mayeen Uddin, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Mahmud, Mufti, editor, Kaiser, M. Shamim, editor, Kasabov, Nikola, editor, Iftekharuddin, Khan, editor, and Zhong, Ning, editor
- Published
- 2021
- Full Text
- View/download PDF
149. Research on Intrusion Detection Method Based on PGoogLeNet-IDS Model
- Author
-
Sun, Min, Hao, Xue, Li, Wenbin, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Wang, Wei, editor, Mu, Jiasong, editor, Liu, Xin, editor, Na, Zhenyu, editor, and Cai, Xiantao, editor
- Published
- 2021
- Full Text
- View/download PDF
150. Control of Bio-Inspired Multi-robots Through Gestures Using Convolutional Neural Networks in Simulated Environment
- Author
-
Saraiva, A. A., Santos, D. B. S., Fonseca Ferreira, Nuno M., Boaventura-Cunha, José, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Gonçalves, José Alexandre, editor, Braz-César, Manuel, editor, and Coelho, João Paulo, editor
- Published
- 2021
- Full Text
- View/download PDF