209 results for "YOLO V3"
Search Results
2. Copy-Move Image Multiple Forgery Detection Based on Transit Flow Regime Algorithm-Enabled ShuffleNet.
- Author
-
Chaitra, B. and Bhaskar Reddy, P. V.
- Subjects
- FEATURE extraction, FORGERY, PYRAMIDS
- Abstract
Copy-move forgery is considered one of the more difficult kinds of image forgery to detect. It occurs when parts or portions of an image are duplicated and pasted back into the same image at other locations. Forgery detection techniques are used to protect images when the original image is not available. Current forgery detection methods detect tampered areas with limited effectiveness owing to the large size and low contrast of images. Here, a transit flow regime algorithm-based ShuffleNet (TFRA-ShuffleNet) is presented for multiple forgery detection. In this work, the input image is first subjected to multiple object detection, which is carried out using YOLO V3. After that, features are extracted from the object-detected image, including the local vector pattern (LVP), local optimal-oriented pattern (LOOP), pyramid histogram of oriented gradients (PHoG), local Gabor XOR patterns (LGXP), local directional pattern (LDP), local directional ternary pattern (LDTP) and local binary pattern (LBP). Lastly, multiple forgery detection is conducted employing ShuffleNet, which is trained with TFRA, an integration of transit search (TS) and the flow regime algorithm (FRA). TFRA-ShuffleNet achieved maximal accuracy, TPR and TNR values of about 96.5%, 96.5% and 97.5%. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
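The abstract above names several hand-crafted texture descriptors; the simplest of them, the local binary pattern (LBP), can be sketched in a few lines. This is an illustrative NumPy version of the basic 3x3 LBP, not the authors' implementation:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern: compare the 8 neighbours of each
    interior pixel against the centre pixel and pack the comparison bits
    into a single 8-bit code."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    centre = gray[1:-1, 1:-1]
    # 8 neighbour offsets, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.int32) << bit
    return out.astype(np.uint8)

def lbp_histogram(gray):
    """Normalised 256-bin histogram of LBP codes, usable as a texture feature."""
    hist = np.bincount(lbp_image(gray).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The normalised histogram is the fixed-length feature vector that a classifier such as ShuffleNet could consume alongside the other descriptors.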
3. HFR-YOLO v3 model for multi-object detection and unscented Kalman filtering based object tracking in vehicle driving footage
- Author
-
Manickaraj, Dhanalakshmi and Mann, Palvinder Singh
- Published
- 2025
- Full Text
- View/download PDF
4. An efficient object detection by autonomous vehicle using deep learning.
- Author
-
Kolukula, Nitalaksheswara Rao, Kalapala, Rajendra Prasad, Ivaturi, Sundara Siva Rao, Tammineni, Ravi Kumar, Annavarapu, Mahalakshmi, and Pyla, Uma
- Subjects
- OBJECT recognition (Computer vision), MACHINE learning, CONVOLUTIONAL neural networks, COMPUTER vision, DEEP learning
- Abstract
The automation industry has been developing since the first demonstrations in the period 1980 to 2000, mainly for automated driving vehicles. Nowadays, automotive companies, technology companies, government bodies, research institutions, academia, investors and venture capitalists are all interested in autonomous vehicles. In this work, on-road object detection is proposed using the deep learning (DL) algorithms You Only Look Once (YOLO) V3, V4 and V5. A road object detection dataset is taken as input, where the objects are mainly on-road vehicles, traffic signals, cars, trucks and buses. These inputs are given to the models to predict and detect the objects. The performance of the proposed system is compared with that of a convolutional neural network (CNN). The proposed system's accuracy ranges from 76.5% to 93.3%, with a mean average precision (mAP) of 0.895 and a frame rate of 43.95 FPS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
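The mAP figure quoted above is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes; the core computation is short. A generic sketch, independent of any particular detector:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # overlap is zero when the boxes do not intersect
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, truth, threshold=0.5):
    """A prediction usually counts as a true positive when its IoU with a
    ground-truth box reaches the threshold (0.5 is the common default)."""
    return iou(pred, truth) >= threshold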
5. WHO-YOLO NET: soil prediction and classification based on YOLOV3 with whale optimization.
- Author
-
Subramani, Sangeetha and Suganthi, N.
- Abstract
Soil prediction techniques help determine whether a particular crop will grow in a given area. The use of deep learning with complex algorithms can result in highly accurate soil prediction and crop recommendation. Manual soil classification in the laboratory is both time-consuming and costly, yet inaccurate. However, deep learning structures are trained with a limited number of datasets for crop recommendation based on soil types. To address the aforementioned issues, a unique WHO-YOLO Net for predicting various kinds of soil and compatible crops has been created on the crop database. First, the input soil images are pre-processed using improved weighted thresholded histogram equalization (IWTHE), which is applied to improve the overall quality of the input images. The proposed YOLO V3 predicts and classifies soils as red soil, sandy soil, silt soil, peat soil, clay soil, black soil, chernozem soil, loam soil, alluvial soil, and yellow soil. To achieve improved categorization results, whale optimization is applied to the YOLO V3. The proposed WHO-YOLO Net achieves a high accuracy of 99.15% for predicting soil types, an improvement of 79%, 80.58%, 82.25%, and 97% over Bag-of-features with CSMO, HGSO, OSMO, and DCNN, respectively. The investigation results reveal that the model accurately predicts soil types when compared with other existing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
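The IWTHE pre-processing above is a refinement of histogram equalisation; the classical global version it builds on can be sketched as follows (illustrative NumPy code, not the paper's IWTHE):

```python
import numpy as np

def equalize(gray):
    """Classical global histogram equalisation for an 8-bit image:
    map each grey level through the normalised cumulative histogram
    so that intensities spread over the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF entry
    n = gray.size
    # standard equalisation transfer function as a lookup table
    lut = np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255).astype(np.uint8)
    return lut[gray]
```

Weighted and thresholded variants such as IWTHE modify the histogram before computing the CDF to avoid over-amplifying noise, but the lookup-table mapping stays the same.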
6. Yolo V3 for Market MBFVS Food Materials Detection
- Author
-
Kuan, Ta-Wen, Yu, Xiaodong, Wang, Qi, Wang, Yihan, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Lin, Jerry Chun-Wei, editor, Shieh, Chin-Shiuh, editor, Horng, Mong-Fong, editor, and Chu, Shu-Chuan, editor
- Published
- 2024
- Full Text
- View/download PDF
7. An advanced deep learning method to detect and classify diabetic retinopathy based on color fundus images.
- Author
-
Akella, Prasanna Lakshmi and Kumar, R.
- Subjects
- DIABETIC retinopathy, DEEP learning, PEOPLE with diabetes, RETINAL imaging, COLOR, CHRONIC diseases
- Abstract
Background: In this article, we present a computerized system for the analysis and assessment of diabetic retinopathy (DR) based on retinal fundus photographs. DR is a chronic ophthalmic disease and a major cause of blindness in people with diabetes. Consistent examination and prompt diagnosis are vital to controlling DR. Methods: With the aim of enhancing the reliability of DR diagnosis, we utilized the deep learning model You Only Look Once V3 (YOLO V3) to recognize and classify DR from retinal images. DR was classified into five major stages: normal, mild, moderate, severe, and proliferative. We evaluated the performance of the YOLO V3 algorithm on color fundus images. Results: We achieved high precision and sensitivity on the train and test data for DR classification, and mean average precision (mAP) was calculated for DR lesion detection. Conclusions: The results indicate that the suggested model distinguishes all phases of DR and performs better than existing models in terms of accuracy and implementation time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Hot Spot Detection of Thermal Infrared Image of Photovoltaic Power Station Based on Multi-Task Fusion.
- Author
-
Xu Han, Xianhao Wang, Chong Chen, Gong Li, and Changhao Piao
- Abstract
Manually inspecting photovoltaic (PV) panels to meet the inspection requirements of large-scale PV power plants is challenging. We present a hot spot detection and positioning method to detect hot spots in batches and locate their latitudes and longitudes. First, a network based on the YOLOv3 architecture was utilized to identify hot spots. The innovation is to modify the RU_1 unit in the YOLOv3 model for hot spot detection in the far field of view and to add a neural network residual unit for fusion. In addition, because of the misidentification problem in infrared images of solar PV panels, the DeepLab v3+ model was adopted to segment the PV panels and filter out misidentifications caused by bright spots on the ground. Finally, the latitude and longitude of each hot spot are calculated by a geometric positioning method using known information such as the drone's yaw angle, shooting height, and lens field of view. The experimental results indicate that the hot spot recognition accuracy is above 98%. With the drone 25 m above the ground, the hot spot positioning error is at the decimeter level. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
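The geometric positioning step described above (yaw, height, field of view) reduces, under a flat-ground and nadir-pointing-camera simplification, to projecting a pixel offset onto the ground and rotating it by the yaw. A sketch with assumed conventions (camera x right, y forward; yaw measured from north), not the paper's exact formulation:

```python
import math

def hotspot_ground_offset(px, py, img_w, img_h, fov_deg, height_m, yaw_deg):
    """Project a pixel offset from the image centre onto the ground for a
    nadir-pointing camera, then rotate by the drone's yaw.  Returns the
    (east, north) offset in metres from the point directly below the drone."""
    # metres on the ground covered by the full image width at this height
    ground_w = 2 * height_m * math.tan(math.radians(fov_deg) / 2)
    m_per_px = ground_w / img_w
    # offsets in the camera frame (x right, y forward)
    dx = (px - img_w / 2) * m_per_px
    dy = (img_h / 2 - py) * m_per_px
    # rotate camera-frame offsets into east/north by the yaw angle
    yaw = math.radians(yaw_deg)
    east = dx * math.cos(yaw) + dy * math.sin(yaw)
    north = -dx * math.sin(yaw) + dy * math.cos(yaw)
    return east, north
```

Adding this offset to the drone's GPS fix (with a metres-to-degrees conversion) gives the hot spot's latitude and longitude; at a 25 m altitude each pixel covers only centimetres, consistent with the decimeter-level error reported.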
9. Research on Target Detection Algorithm of Unmanned Surface Vehicle Based on Deep Learning
- Author
-
Huang, Fan, Chen, Yong, Pan, Xinlong, Wang, Haipeng, Fang, Heng, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Pan, Linqiang, editor, Zhao, Dongming, editor, Li, Lianghao, editor, and Lin, Jianqing, editor
- Published
- 2023
- Full Text
- View/download PDF
10. Face Mask Detection Using YOLOv3
- Author
-
Singh, Devesh, Kumar, Himanshu, Meena, Shweta, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Saraswat, Mukesh, editor, Chowdhury, Chandreyee, editor, Kumar Mandal, Chintan, editor, and Gandomi, Amir H., editor
- Published
- 2023
- Full Text
- View/download PDF
11. Two Wheeler Rider Support System
- Author
-
Mahore, Gunendra, Solanki, Sonam, Barua, Ritik, Mahore, Rupesh, di Prisco, Marco, Series Editor, Chen, Sheng-Hong, Series Editor, Vayas, Ioannis, Series Editor, Kumar Shukla, Sanjay, Series Editor, Sharma, Anuj, Series Editor, Kumar, Nagesh, Series Editor, Wang, Chien Ming, Series Editor, Anjaneyulu, M. V. L. R., editor, Harikrishna, M., editor, Arkatkar, Shriniwas S., editor, and Veeraragavan, A., editor
- Published
- 2023
- Full Text
- View/download PDF
12. Real-Time Object Detection System with Voice Feedback for the Blind People
- Author
-
Shah, Harshal, Amin, Meet, Dadwani, Krish, Desai, Nishant, Chatiwala, Aliasgar, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Zhang, Yu-Dong, editor, Senjyu, Tomonobu, editor, So-In, Chakchai, editor, and Joshi, Amit, editor
- Published
- 2023
- Full Text
- View/download PDF
13. Vehicle-camel collisions in Saudi Arabia: Application of single and multi-stage deep learning object detectors
- Author
-
Saleh Alghamdi, Abdullah Algethami, and Ting Tan
- Subjects
- Object detection, Vehicle-camel collision, Yolo v3, Yolo v4, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
Vehicle-camel collisions are a persistent issue in countries where the camel population is high, such as Saudi Arabia. The purpose of this research is to introduce a new solution to this issue. Previous solutions, such as fencing the sides of roads, designing better camel warning signs and fining camel owners when camels cross high-traffic roads, are either expensive, ineffective, or hard to implement. Therefore, in this work, we harness the power of deep learning to tackle this problem. In particular, we use state-of-the-art deep learning object detectors to detect camels on roads with high accuracy. Results show that all implemented models were capable of detecting camels on or near roads. Moreover, the single-stage detector Yolo v3 was found to be the most accurate and is as fast as its successor Yolo v4. The findings of this work helped select the deep learning model needed for a reliable and automatic vehicle-camel collision avoidance system.
- Published
- 2024
- Full Text
- View/download PDF
14. Vehicle target detection method based on improved YOLO V3 network model.
- Author
-
Qirong Zhang, Zhong Han, and Yu Zhang
- Subjects
- AERIAL photography, VEHICLE models, PROBLEM solving, TRACKING radar
- Abstract
To address the insufficient small-target detection ability of existing network models, a vehicle target detection method based on an improved YOLO V3 network model is proposed in this article. The improved algorithm model can effectively raise the detection ability for small target vehicles in aerial photography. Optimization and adjustment of the anchor box and improvement of the network residual module improve the algorithm's small-target detection. Furthermore, introducing a rectangular prediction frame with orientation angles into the model improves the vehicle positioning efficiency of the algorithm, greatly reduces wrong and missed vehicle detections, and provides ideas for solving related problems. Experiments show that the accuracy of the improved algorithm model is 89.3%, an improvement of 15.9% over the YOLO V3 algorithm. The recall rate is improved by 16% and the F1 value by 15.9%, which greatly increases the detection efficiency for aerial vehicles. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Real-time biodiversity analysis using deep-learning algorithms on mobile robotic platforms.
- Author
-
Panigrahi, Siddhant, Maski, Prajwal, and Thondiyath, Asokan
- Subjects
- MOBILE operating systems, MOBILE learning, MARK & recapture (Population biology), ENVIRONMENTAL monitoring, ALGORITHMS, ANIMAL populations, BIODIVERSITY monitoring, BIODIVERSITY
- Abstract
Ecological biodiversity is declining at an unprecedented rate. To combat such irreversible changes in natural ecosystems, biodiversity conservation initiatives are being conducted globally. However, the lack of a feasible methodology to quantify biodiversity in real time and investigate population dynamics at spatiotemporal scales prevents the use of ecological data in environmental planning. Traditionally, ecological studies rely on a census of an animal population by the "capture, mark and recapture" technique, in which human field workers manually count, tag and observe tagged individuals, making it time-consuming, expensive, and cumbersome to patrol the entire area. Recent research has also demonstrated the potential of inexpensive and accessible sensors for ecological data monitoring; however, stationary sensors collect localised data that is highly dependent on the placement of the setup. In this research, we propose a methodology for biodiversity monitoring utilising state-of-the-art deep learning (DL) methods operating in real time on sample payloads of mobile robots. Such trained DL algorithms demonstrate a mean average precision (mAP) of 90.51% at an average inference time of 67.62 milliseconds within 6,000 training epochs. We claim that the use of such mobile platform setups inferring real-time ecological data can help achieve the goal of quick and effective biodiversity surveys. An experimental test payload is fabricated, and online as well as offline field surveys are conducted, validating the proposed methodology for species identification, which can be further extended to geo-localisation of flora and fauna in any ecosystem. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Identification of traffic signs for advanced driving assistance systems in smart cities using deep learning.
- Author
-
Dhawan, Kshitij, R, Srinivasa Perumal, and R. K., Nadesh
- Subjects
- TRAFFIC signs & signals, DEEP learning, CONVOLUTIONAL neural networks, SMART cities, TRAFFIC safety, CITY traffic, SPEED limits
- Abstract
The ability of Advanced Driving Assistance Systems (ADAS) to identify and understand all objects around the vehicle under varying driving conditions and environmental factors is critical. Today's vehicles are equipped with advanced driving assistance systems that make driving safer and more comfortable. A camera mounted on the car helps the system recognise and detect traffic signs and alerts the driver about various road conditions, such as construction work ahead or changed speed limits. The goal is to identify the traffic sign and process the image in minimal processing time. A custom convolutional neural network model is used to classify the traffic signs with higher accuracy than existing models. Image augmentation techniques are used to expand the dataset artificially, allowing the model to learn how an image looks from different perspectives, such as when viewed from different angles or when it is blurry due to poor weather conditions. The algorithms used to detect traffic signs are YOLO v3 and YOLO v4-tiny. The proposed solution for detecting a specific set of traffic signs performed well, with an accuracy rate of 95.85%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
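Dataset expansion of the kind described above is typically a pipeline of random geometric and photometric transforms. A minimal NumPy sketch of such a pipeline (illustrative only, not the authors' augmentation code):

```python
import numpy as np

def augment(img, rng):
    """Return a randomly augmented copy of an image array (H x W [x C]):
    horizontal flip, 90-degree rotation, and a brightness shift, mirroring
    the kind of geometric / photometric expansion described above."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                 # horizontal flip
    k = rng.integers(0, 4)
    out = np.rot90(out, k)                 # view from a different angle
    shift = rng.integers(-30, 31)          # photometric jitter (blur-like degradation
                                           # would be a convolution instead)
    out = np.clip(out.astype(int) + shift, 0, 255).astype(np.uint8)
    return out
```

In practice a detection pipeline must also transform the bounding-box labels consistently with each geometric transform; libraries built for this handle that bookkeeping.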
17. Research on a gesture recognition method fusing YOLO v3 with an improved ReXNet.
- Author
-
魏小玉, 焦良葆, 刘子恒, 汤博宇, and 孟琳
- Abstract
Copyright of Computer Measurement & Control is the property of Magazine Agency of Computer Measurement & Control and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
18. A surface defect detection method for cigarette corrugated boxes based on YOLO v3.
- Author
-
贾伟萍, 褚 玮, 刘文婷, 黄 轲, 李陈巧, and 吴 飞
- Subjects
- SURFACE defects, CIGARETTES, ALGORITHMS
- Abstract
Copyright of China Pulp & Paper is the property of China Pulp & Paper Magazines Publisher and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
19. Automatic Medical Face Mask Recognition for COVID-19 Mitigation: Utilizing YOLO V5 Object Detection.
- Author
-
Dewi, Christine and Christanto, Henoch Juli
- Abstract
The ongoing COVID-19 pandemic has significantly affected global public health, necessitating protective measures such as wearing face masks to reduce the spread of the disease. Recent advances in deep learning-based object detection have shown promise in accurately recognizing objects within images and videos. In this study, the state-of-the-art You Only Look Once (YOLO) V5 object detection model was employed to classify individuals based on their mask-wearing status into three categories: none, poor, and adequate. YOLO V5 is known for its high efficiency and precision in object recognition tasks. Two datasets, the Face Mask Dataset (FMD) and the Medical Mask Dataset (MMD), were combined for simultaneous evaluation. The performance of the models was assessed based on crucial metrics such as Giga-Floating Point Operations (GFLOPS), workspace area, detection time, and mean average precision (mAP). Results indicated that the YOLO V5m model achieved the highest mAP (97.2%) for the "adequate" class, demonstrating its effectiveness in detecting proper mask usage for COVID-19 mitigation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Comparative Study of Various Algorithms for Vehicle Detection and Counting in Traffic
- Author
-
John, Anand, Meva, Divyakant, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Rajagopal, Sridaran, editor, Faruki, Parvez, editor, and Popat, Kalpesh, editor
- Published
- 2022
- Full Text
- View/download PDF
21. Improved Yolo V3 for Steel Surface Defect Detection
- Author
-
Zheng, Jiexin, Zhuang, Zeyang, Liao, Tao, Chen, Lihong, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Liu, Qi, editor, Liu, Xiaodong, editor, Cheng, Jieren, editor, Shen, Tao, editor, and Tian, Yuan, editor
- Published
- 2022
- Full Text
- View/download PDF
22. Design and Implementation of a Monitoring System for COVID-19-Free Working Environment
- Author
-
Tarannum, Attar, Safrulla, Pathan, Kishore, Lalith, Kalaivani, S., Xhafa, Fatos, Series Editor, Raj, Jennifer S., editor, Kamel, Khaled, editor, and Lafata, Pavel, editor
- Published
- 2022
- Full Text
- View/download PDF
23. Ship Recognition Technology of High Precision Remote Sensing Images Based on YOLO
- Author
-
Ni, Rui, Hua, Bing, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Yan, Liang, editor, and Yu, Xiang, editor
- Published
- 2022
- Full Text
- View/download PDF
24. A Traffic Sign Recognition Method Based on Improved YOLOv3
- Author
-
Fan, Wenshuo, Yi, Nanqiao, Hu, Yongzhong, Xhafa, Fatos, Series Editor, and Li, Xiaolong, editor
- Published
- 2022
- Full Text
- View/download PDF
25. Real-time biodiversity analysis using deep-learning algorithms on mobile robotic platforms
- Author
-
Siddhant Panigrahi, Prajwal Maski, and Asokan Thondiyath
- Subjects
- UAV, YOLO v3, Deep learning, Object detection, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Ecological biodiversity is declining at an unprecedented rate. To combat such irreversible changes in natural ecosystems, biodiversity conservation initiatives are being conducted globally. However, the lack of a feasible methodology to quantify biodiversity in real time and investigate population dynamics at spatiotemporal scales prevents the use of ecological data in environmental planning. Traditionally, ecological studies rely on a census of an animal population by the "capture, mark and recapture" technique, in which human field workers manually count, tag and observe tagged individuals, making it time-consuming, expensive, and cumbersome to patrol the entire area. Recent research has also demonstrated the potential of inexpensive and accessible sensors for ecological data monitoring; however, stationary sensors collect localised data that is highly dependent on the placement of the setup. In this research, we propose a methodology for biodiversity monitoring utilising state-of-the-art deep learning (DL) methods operating in real time on sample payloads of mobile robots. Such trained DL algorithms demonstrate a mean average precision (mAP) of 90.51% at an average inference time of 67.62 milliseconds within 6,000 training epochs. We claim that the use of such mobile platform setups inferring real-time ecological data can help achieve the goal of quick and effective biodiversity surveys. An experimental test payload is fabricated, and online as well as offline field surveys are conducted, validating the proposed methodology for species identification, which can be further extended to geo-localisation of flora and fauna in any ecosystem.
- Published
- 2023
- Full Text
- View/download PDF
26. Bolt Installation Defect Detection Based on a Multi-Sensor Method.
- Author
-
An, Shizhao, Xiao, Muzheng, Wang, Da, Qin, Yan, and Fu, Bo
- Subjects
- INDUSTRIAL robots, TORQUEMETERS, INDUSTRIALIZATION, PROBLEM solving, TORQUE
- Abstract
With the development of industrial automation, articulated robots have gradually replaced manual labor in the field of bolt installation. Although installation efficiency has improved, installation defects may still occur. Bolt installation defects can considerably affect the mechanical properties of structures and even lead to safety accidents. Therefore, to ensure the success rate of bolt assembly, an efficient and timely method of detecting incorrect or missing assembly is needed. At present, the automatic detection of bolt installation defects mainly depends on a single type of sensor, which is prone to misdetection. Visual sensors can identify the incorrect or missing installation of bolts but cannot detect torque defects; torque sensors can judge only from torque and angle information and cannot accurately identify incorrect or missing bolts. To solve this problem, a detection method for bolt installation defects based on multiple sensors is proposed. A trained YOLO (You Only Look Once) v3 network judges the images collected by the visual sensor; the recognition rate reaches 99.75%, with an average output confidence of 0.947, and the detection speed of 48 FPS meets the real-time requirement. At the same time, torque and angle sensors are used to judge torque defects and whether bolts have slipped. Combining the multi-sensor judgment results, this method can effectively identify defects such as missing bolts and slipped threads. Finally, experiments were carried out to identify bolt installation defects such as incorrect or missing bolts, torque defects, and bolt slips; the traditional single-sensor detection method could not identify these effectively, whereas the multi-sensor method identified them accurately. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
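The fusion rule described in the abstract, where vision confirms bolt presence while torque and angle readings confirm the tightening, can be sketched as a small decision function. The thresholds and labels here are hypothetical placeholders, not the paper's values:

```python
def bolt_ok(vision_label, torque_nm, angle_deg,
            torque_range=(8.0, 12.0), min_angle=30.0):
    """Fuse the vision result with torque/angle readings: a bolt passes only
    if the detector saw a correctly seated bolt AND the driver reached the
    target torque without the slipped-thread signature (large rotation angle
    at low torque).  Returns (pass, reason)."""
    if vision_label != "bolt_present":
        return False, "missing or misplaced bolt (vision)"
    lo, hi = torque_range
    if torque_nm < lo:
        if angle_deg > min_angle:
            return False, "slipped thread (high angle, low torque)"
        return False, "under-torqued"
    if torque_nm > hi:
        return False, "over-torqued"
    return True, "ok"
```

The point of the fusion is that each branch catches a defect the other sensor cannot: the vision check catches missing bolts, the torque/angle checks catch tightening defects.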
27. OBJECT DETECTION USING SEMI SUPERVISED LEARNING METHODS
- Author
-
Shymala Gowri Selvaganapathy, N. Hema Priya, P.D. Rathika, and K. Venkatachalam
- Subjects
- deep learning algorithms, yolo v3, faster rcnn, tensorflow data apis, Computer engineering. Computer hardware, TK7885-7895
- Abstract
Object detection identifies objects in real time using deep learning algorithms. In this work, a worldwide wheat plant dataset is collected to study wheat heads. Using the global data, a common solution for measuring the number and size of wheat heads is formulated. YOLO V3 (You Only Look Once, version 3) and Faster RCNN are object detection algorithms used to identify objects in videos and images. The global wheat detection dataset is used for prediction; it contains 3000+ training images and a few test images, with CSV files that hold the ground-truth box labels of the images. To build a data pipeline for the model, the TensorFlow Data API or Keras data generators are used.
- Published
- 2022
- Full Text
- View/download PDF
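The CSV ground-box files mentioned above are typically parsed into per-image box lists before being handed to a tf.data or Keras generator pipeline. A sketch with hypothetical column names (`image_id`, `x`, `y`, `w`, `h`; the actual dataset's schema may differ):

```python
import csv
import io

def load_ground_boxes(csv_text):
    """Parse a ground-truth CSV into a dict mapping image_id to a list of
    [x, y, w, h] boxes -- the per-image structure a data pipeline consumes."""
    boxes = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        bb = [float(row[k]) for k in ("x", "y", "w", "h")]
        boxes.setdefault(row["image_id"], []).append(bb)
    return boxes
```

From such a dict, `tf.data.Dataset` or a Keras generator can pair each image file with its box list and batch them for training.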
28. A Recognition Method of Ewe Estrus Crawling Behavior Based on Multi-Target Detection Layer Neural Network.
- Author
-
Yu, Longhui, Guo, Jianjun, Pu, Yuhai, Cen, Honglei, Li, Jingbin, Liu, Shuangyin, Nie, Jing, Ge, Jianbing, Yang, Shuo, Zhao, Hangxing, Xu, Yalei, Wu, Jianglin, and Wang, Kang
- Subjects
- ESTRUS, ASPECT ratio (Images), EWES, SHEEP breeds, SHEEP breeding, ANIMAL culture
- Abstract
Simple Summary: The timely and accurate detection of ewe estrus behavior is an important research topic in precision animal husbandry. Timely detection of estrus ewes in mutton sheep breeding not only protects the welfare of the ewes themselves, but also serves the interests of breeding enterprises. With the continuous increase in the scale of mutton sheep breeding and the gradual intensification of breeding methods, traditional manual detection methods require high labor intensity, and contact-sensor detection methods cause stress reactions in ewes. In recent years, the rapid development of deep learning technology has brought new possibilities. We propose a method for recognising ewe estrus based on a multi-target detection layer neural network. The results show that the method can meet the requirements for timely and accurate detection of ewe estrus behavior in large-scale mutton sheep breeding. There are problems with estrus detection in ewes in large-scale meat sheep farming: mainly, the manual detection method is labor-intensive and the contact-sensor detection method causes stress reactions in ewes. To solve these problems, we propose a multi-target detection layer neural network-based method for recognising ewe estrus crawling behavior. Our approach has four main parts. Firstly, to address the mismatch between our constructed ewe estrus dataset and the YOLO v3 anchor box sizes, we obtain new anchor box sizes by clustering the ewe estrus dataset with the K-means++ algorithm. Secondly, to address the low recognition precision caused by the small image size of distant ewes in the dataset, we add a 104 × 104 target detection layer, bringing the total number of detection layers to four, strengthening the model's ability to learn shallow information and improving its ability to detect small targets. Then, we add residual units to the residual structure of the model, so that deep feature information is not easily lost and is further fused with shallow feature information, speeding up the training of the model. Finally, we maintain the aspect ratio of the images in the data-loading module of the model to reduce distortion of the image information and increase the precision of the model. The experimental results show that our proposed model achieves 98.56% recognition precision, 98.04% recall, an F1 value of 98%, an mAP of 99.78%, and 41 f/s at a model size of 276 M, which meets the requirements for accurate, real-time recognition of ewe estrus behavior in large-scale meat sheep farming. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Improved Neural Network with Spatial Pyramid Pooling and Online Datasets Preprocessing for Underwater Target Detection Based on Side Scan Sonar Imagery.
- Author
-
Li, Jinrui, Chen, Libin, Shen, Jian, Xiao, Xiongwu, Liu, Xiaosong, Sun, Xin, Wang, Xiao, and Li, Deren
- Subjects
- *
DEEP learning , *MACHINE learning , *PYRAMIDS , *SONAR , *SONAR imaging , *SUBMARINE topography - Abstract
Fast and high-accuracy detection of underwater targets based on side scan sonar images has great potential for marine fisheries, underwater security, marine mapping, underwater engineering and other applications. The following problems, however, must be addressed when using low-resolution side scan sonar images for underwater target detection: (1) the detection performance is limited due to the restriction on the input of multi-scale images; (2) the widely used deep learning algorithms have a low detection effect due to their complex convolution layer structures; (3) the detection performance is limited due to insufficient model complexity in the training process; and (4) the number of samples is not enough because of poor dataset preprocessing methods. To solve these problems, an improved neural network for underwater target detection, based on side scan sonar images and fully utilizing spatial pyramid pooling and online dataset preprocessing on top of the You Only Look Once version three (YOLO V3) algorithm, is proposed. The methodology of the proposed approach is as follows: (1) the AlexNet, GoogleNet, VGGNet and ResNet networks and an adapted YOLO V3 algorithm served as the backbone networks; the structure of the YOLO V3 model is more mature and compact and has higher target detection accuracy and better detection efficiency than the other models; (2) spatial pyramid pooling was added at the end of the convolution layers to improve detection performance; spatial pyramid pooling removes the scale restrictions on input images and improves feature extraction, since it enables the backbone network to learn faster at high accuracy; and (3) online dataset preprocessing based on YOLO V3 with spatial pyramid pooling increases the number of samples and improves the complexity of the model to further improve detection performance. Three side scan imagery datasets were used for training and testing in the experiments.
The quantitative evaluation using Accuracy, Recall, Precision, mAP and F1-Score metrics indicates that, for the AlexNet, GoogleNet, VGGNet and ResNet algorithms, adding spatial pyramid pooling to their backbone networks improved the average detection accuracy on the three sets of data by 2%, 4%, 2% and 2%, respectively, compared to their original formulations. Compared with the original YOLO V3 model, the proposed ODP+YOLO V3+SPP underwater target detection algorithm has improved detection performance: the mAP quantitative evaluation index increased by 6%, the Precision quantitative evaluation index increased by 13%, and the detection efficiency increased by 9.34%. These results demonstrate that adding spatial pyramid pooling and online dataset preprocessing can improve the target detection accuracy of these commonly used algorithms. The proposed improved neural network with spatial pyramid pooling and online dataset preprocessing based on the YOLO V3 method achieves the highest scores for underwater target detection of sunken ships, fish flocks and seafloor topography, with mAP scores of 98%, 91% and 96% on the above three kinds of datasets, respectively. [ABSTRACT FROM AUTHOR]
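The spatial pyramid pooling idea this abstract relies on can be sketched in a few lines. The pyramid levels (1, 2, 4) and the pure-Python layout below are assumptions for illustration, not the paper's implementation; the point is that max-pooling one feature map over several fixed grids yields a fixed-length vector regardless of input size.

```python
def spp_max_pool(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling sketch: max-pool one 2-D feature map over
    a 1x1, 2x2 and 4x4 grid and concatenate the results, so any input
    size yields a fixed-length vector (1 + 4 + 16 = 21 values here)."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # Integer cell bounds; the max(...) guard keeps every
                # cell at least one row/column wide for small maps.
                r0, r1 = i * h // n, max((i + 1) * h // n, i * h // n + 1)
                c0, c1 = j * w // n, max((j + 1) * w // n, j * w // n + 1)
                out.append(max(
                    feature_map[r][c]
                    for r in range(r0, min(r1, h))
                    for c in range(c0, min(c1, w))
                ))
    return out
```

Because the output length depends only on the pyramid levels, the layers after the pooling stage see the same vector size whether the sonar image entering the backbone was large or small.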
- Published
- 2023
- Full Text
- View/download PDF
30. Occlusion aware underwater object tracking using hybrid adaptive deep SORT -YOLOv3 approach.
- Author
-
Mathias, Ajisha, Dhanalakshmi, Samiappan, and Kumar, R.
- Subjects
OBJECT recognition (Computer vision) ,OBJECT tracking (Computer vision) ,DEEP learning ,DIFFRACTIVE scattering ,TRACKING algorithms ,LIGHT scattering ,OPTICAL diffraction - Abstract
Underwater object tracking and recognition are challenging due to the distinctive characteristics of underwater environments. The water medium exhibits diffraction and scattering of light as it travels deep into the water. This results in unclear, occluded videos and images, further causing challenges in interpretation. Tracking the object of interest across consecutive frames in underwater scenarios often leads to occlusion of objects. Addressing these issues, an improved hybrid adaptive deep SORT-YOLOv3 (HADSYv3) detection and tracking scheme for occluded underwater objects is proposed. The neural network-based training model YOLOv3 is applied to extract and categorize the underwater objects. An adaptive deep SORT algorithm with a long short-term memory (LSTM)-based deep learning approach is used to determine the position of the objects in the underwater sequences. The proposed Hybrid Adaptive DeepSORT-YOLOv3 (HADSYv3) method incorporates both the YOLOv3 algorithm for object detection and the adaptive deep SORT algorithm for tracking. Although the deep SORT algorithm can track objects in real-time applications on its own, combining it with a suitable detection scheme improves the overall efficiency by confirming all possible detections. The proposed method is compared with other state-of-the-art underwater object recognition schemes, and occluded object detection at various view angles is evaluated quantitatively. [ABSTRACT FROM AUTHOR]
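The track-to-detection association at the heart of SORT-style trackers can be approximated with a greedy IoU matcher. Deep SORT proper uses Hungarian assignment plus appearance embeddings and a Kalman-predicted state, so the sketch below is only a simplified stand-in for the association step, not the paper's method.

```python
def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_tracks(tracks, detections, iou_thresh=0.3):
    """Greedy association: repeatedly match the (track, detection) pair
    with the highest IoU above the threshold, each used at most once."""
    pairs = sorted(
        ((box_iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for iou, ti, di in pairs:
        if iou < iou_thresh:
            break  # remaining pairs are all below threshold
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

Unmatched tracks (e.g., due to occlusion) would then be propagated by the motion model, which is where the LSTM-based position estimation described in the abstract would slot in.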
- Published
- 2022
- Full Text
- View/download PDF
31. Deep Learning Based Stabbing Action Detection in ATM Kiosks for Intelligent Video Surveillance Applications
- Author
-
Yogameena, B., Menaka, K., Perumaal, S. Saravana, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Singh, Satish Kumar, editor, Roy, Partha, editor, Raman, Balasubramanian, editor, and Nagabhushan, P., editor
- Published
- 2021
- Full Text
- View/download PDF
32. Pig Target Detection from Image Based on Improved YOLO V3
- Author
-
Yin, Dan, Tang, Wengsheng, Chen, Peng, Yang, Bo, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Sun, Xingming, editor, Zhang, Xiaorui, editor, Xia, Zhihua, editor, and Bertino, Elisa, editor
- Published
- 2021
- Full Text
- View/download PDF
33. Multiple Object Detection and Tracking Using Deep Learning
- Author
-
Burde, Shreyas, Budihal, Suneeta V., Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Sabut, Sukanta Kumar, editor, Ray, Arun Kumar, editor, Pati, Bibudhendu, editor, and Acharya, U Rajendra, editor
- Published
- 2021
- Full Text
- View/download PDF
34. Rapid Earthquake Assessment from Satellite Imagery Using RPN and Yolo v3
- Author
-
Panday, Sanjeeb Prasad, Karn, Saurav Lal, Joshi, Basanta, Shakya, Aman, Pandey, Rom Kant, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Jo, Kang-Hyun, editor, Li, Jianqiang, editor, Gribova, Valeriya, editor, and Bevilacqua, Vitoantonio, editor
- Published
- 2021
- Full Text
- View/download PDF
35. Stroke risk study based on deep learning-based magnetic resonance imaging carotid plaque automatic segmentation algorithm
- Author
-
Ya-Fang Chen, Zhen-Jie Chen, You-Yu Lin, Zhi-Qiang Lin, Chun-Nuan Chen, Mei-Li Yang, Jin-Yin Zhang, Yuan-zhe Li, Yi Wang, and Yin-Hui Huang
- Subjects
stroke risk ,MRI carotid plaque ,deep learning ,transfer learning ,YOLO V3 ,Diseases of the circulatory (Cardiovascular) system ,RC666-701 - Abstract
Introduction: The primary factor for cardiovascular disease and upcoming cardiovascular events is atherosclerosis. Carotid plaque texture, as observed on ultrasonography, is varied and difficult to classify with the human eye due to substantial inter-observer variability. High-resolution magnetic resonance (MR) plaque imaging offers naturally superior soft tissue contrast to computed tomography (CT) and ultrasonography, and combining different contrast weightings may provide more useful information. Radiation freeness and operator independence are two additional benefits of MRI. However, other than preliminary research on MR texture analysis of basilar artery plaque, there is currently no information addressing MR radiomics of the carotid plaque. Methods: For the automatic segmentation of MRI scans to detect carotid plaque for stroke risk assessment, a computer-aided autonomous framework is needed to classify MRI scans automatically. We used pre-trained models to detect carotid plaque from MRI scans for stroke risk assessment, fine-tuned them, and adjusted hyperparameters according to our problem. Results: Our trained YOLO V3 model achieved 94.81% accuracy, RCNN achieved 92.53% accuracy, and MobileNet achieved 90.23% in identifying carotid plaque from MRI scans for stroke risk assessment. Our approach will prevent incorrect diagnoses brought on by poor image quality and personal experience. Conclusion: The evaluations in this work have demonstrated that this methodology produces acceptable results for classifying magnetic resonance imaging (MRI) data.
- Published
- 2023
- Full Text
- View/download PDF
36. Computer Vision Based Pothole Detection under Challenging Conditions.
- Author
-
Bučko, Boris, Lieskovská, Eva, Zábovská, Katarína, and Zábovský, Michal
- Subjects
- *
POTHOLES (Roads) , *COMPUTER vision , *ROAD maintenance , *ELECTRONIC data processing , *COMPUTER simulation , *DETECTORS - Abstract
Road discrepancies such as potholes and road cracks are often present in our day-to-day commuting and travel. The cost of repairing damage caused by potholes has always been a concern for owners of any type of vehicle. Thus, an early detection process can contribute to the swift response of road maintenance services and the prevention of pothole-related accidents. In this paper, automatic detection of potholes is performed using the computer vision model You Only Look Once version 3, also known as Yolo v3. Light and weather during driving naturally affect our ability to observe road damage. Such adverse conditions also negatively influence the performance of visual object detectors. The aim of this work was to examine the effect adverse conditions have on pothole detection. The basic design of this study is therefore composed of two main parts: (1) dataset creation and data processing, and (2) dataset experiments using Yolo v3. Additionally, Sparse R-CNN was incorporated into our experiments. For this purpose, a dataset consisting of subsets of images recorded under different light and weather conditions was developed. To the best of our knowledge, there exists no detailed analysis of pothole detection performance under adverse conditions. Despite the existence of newer libraries, Yolo v3 is still a competitive architecture that provides good results with lower hardware requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Deep convolutional neural network for enhancing traffic sign recognition developed on Yolo V4.
- Author
-
Dewi, Christine, Chen, Rung-Ching, Jiang, Xiaoyi, and Yu, Hui
- Subjects
ARTIFICIAL neural networks ,TRAFFIC signs & signals ,CONVOLUTIONAL neural networks ,DRIVER assistance systems ,INTELLIGENT transportation systems - Abstract
Traffic sign detection (TSD) is a key issue for smart vehicles. Traffic sign recognition (TSR) contributes beneficial information, including directions and alerts, for advanced driver assistance systems (ADAS) and Cooperative Intelligent Transport Systems (CITS). Traffic signs are tough to detect with an extremely accurate real-time approach in practical autonomous driving scenes. Object detection methods such as Yolo V4 and Yolo V4-tiny consolidated with Spatial Pyramid Pooling (SPP) are analyzed in this paper. This work evaluates the importance of the SPP principle in boosting the performance of the Yolo V4 and Yolo V4-tiny backbone networks in extracting features and learning object features more effectively. Both models are measured and compared on crucial measurement parameters, including mean average precision (mAP), working area size, detection time, and billions of floating-point operations (BFLOPS). Experiments show that Yolo V4_1 (with SPP) outperforms the state-of-the-art schemes, achieving 99.4% accuracy in our experiments, along with the best total BFLOPS (127.26) and mAP (99.32%). In contrast with earlier studies, the Yolo V3 SPP training process only reaches 98.99% accuracy for mAP with IoU 90.09. The training mAP rises by 0.44% with Yolo V4_1 (mAP 99.32%) in our experiment. Further, SPP can enhance the achievement of all models in the experiment. [ABSTRACT FROM AUTHOR]
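The mAP figures quoted throughout these records are averages of per-class average precision. One common way to compute AP, shown here as a generic PASCAL VOC-style 11-point interpolation sketch rather than any one paper's exact protocol, is:

```python
def average_precision(scores, labels, recall_points=11):
    """11-point interpolated AP. `scores` are detection confidences and
    `labels` marks each detection as true positive (1) or false
    positive (0) against the ground truth."""
    total_pos = sum(labels)
    if total_pos == 0:
        return 0.0
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    curve = []  # (recall, precision) after each detection in rank order
    for i in order:
        tp += labels[i]
        fp += 1 - labels[i]
        curve.append((tp / total_pos, tp / (tp + fp)))
    # Average the maximum precision achievable at 11 evenly spaced
    # recall levels (0.0, 0.1, ..., 1.0).
    ap = 0.0
    for k in range(recall_points):
        r = k / (recall_points - 1)
        vals = [p for rec, p in curve if rec >= r]
        ap += max(vals) if vals else 0.0
    return ap / recall_points
```

mAP is then the mean of this value over all object classes; note that modern evaluations often integrate the full precision-recall curve instead of sampling 11 points.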
- Published
- 2022
- Full Text
- View/download PDF
38. Comparison of RetinaNet, SSD, and YOLO v3 for real-time pill identification
- Author
-
Lu Tan, Tianran Huangfu, Liyao Wu, and Wenying Chen
- Subjects
Convolutional neural network ,RetinaNet ,SSD ,YOLO v3 ,Pill identification ,Computer applications to medicine. Medical informatics ,R858-859.7 - Abstract
Abstract Background: The correct identification of pills is very important to ensure the safe administration of drugs to patients. Here, we use three current mainstream object detection models, namely RetinaNet, Single Shot Multi-Box Detector (SSD), and You Only Look Once v3 (YOLO v3), to identify pills and compare their performance. Methods: In this paper, we introduce the basic principles of the three object detection models. We trained each algorithm on a pill image dataset and analyzed the performance of the three models to determine the best pill recognition model. The models were then used to detect difficult samples and we compared the results. Results: The mean average precision (mAP) of RetinaNet reached 82.89%, but its frames per second (FPS) is only one third that of YOLO v3, which makes it difficult to achieve real-time performance. SSD does not perform as well on the indicators of mAP and FPS. Although the mAP of YOLO v3 is slightly lower than the others (80.69%), it has a significant advantage in terms of detection speed. YOLO v3 also performed better when tasked with hard sample detection, and therefore the model is more suitable for deployment in hospital equipment. Conclusion: Our study reveals that object detection can be applied for real-time pill identification in a hospital pharmacy, and YOLO v3 exhibits an advantage in detection speed while maintaining a satisfactory mAP.
- Published
- 2021
- Full Text
- View/download PDF
39. Airport and Ship Target Detection on Satellite Images Based on YOLO V3 Network
- Author
-
Ying, Ren, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Wang, Liheng, editor, Wu, Yirong, editor, and Gong, Jianya, editor
- Published
- 2020
- Full Text
- View/download PDF
40. Bird Detection on Transmission Lines Based on DC-YOLO Model
- Author
-
Zou, Cong, Liang, Yong-quan, Rannenberg, Kai, Editor-in-Chief, Soares Barbosa, Luís, Editorial Board Member, Goedicke, Michael, Editorial Board Member, Tatnall, Arthur, Editorial Board Member, Neuhold, Erich J., Editorial Board Member, Stiller, Burkhard, Editorial Board Member, Tröltzsch, Fredi, Editorial Board Member, Pries-Heje, Jan, Editorial Board Member, Kreps, David, Editorial Board Member, Reis, Ricardo, Editorial Board Member, Furnell, Steven, Editorial Board Member, Mercier-Laurent, Eunika, Editorial Board Member, Winckler, Marco, Editorial Board Member, Malaka, Rainer, Editorial Board Member, Shi, Zhongzhi, editor, Vadera, Sunil, editor, and Chang, Elizabeth, editor
- Published
- 2020
- Full Text
- View/download PDF
41. Communication Base-Station Antenna Detection Algorithm Based on YOLOv3-Darknet Network
- Author
-
Xu, Ying, Li, Ruiming, Zhou, Jihua, Zheng, Yu, Ke, Qirui, Zhi, Yihang, Guan, Huixin, Wu, Xi, Zhai, Yikui, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Xhafa, Fatos, editor, Patnaik, Srikanta, editor, and Tavana, Madjid, editor
- Published
- 2020
- Full Text
- View/download PDF
42. Deep Learning-Based Segmentation of Key Objects of Transmission Lines
- Author
-
Liu, Mingjie, Li, Yongteng, Wang, Xiao, Tu, Renwei, Zhu, Zhongjie, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nunes, Nuno J., editor, Ma, Lizhuang, editor, Wang, Meili, editor, Correia, Nuno, editor, and Pan, Zhigeng, editor
- Published
- 2020
- Full Text
- View/download PDF
43. A Lightweight Neural Network-Based Method for Detecting Estrus Behavior in Ewes.
- Author
-
Yu, Longhui, Pu, Yuhai, Cen, Honglei, Li, Jingbin, Liu, Shuangyin, Nie, Jing, Ge, Jianbing, Lv, Linze, Li, Yali, Xu, Yalei, Guo, Jianjun, Zhao, Hangxing, and Wang, Kang
- Subjects
ESTRUS ,EWES ,ARTIFICIAL neural networks ,SHEEP breeds ,SHEEP breeding ,SHEEP ranches - Abstract
We propose a lightweight neural network-based method to detect the estrus behavior of ewes. The suggested method is mainly proposed to solve the problem of being unable to detect ewe estrus behavior in a timely and accurate manner on large-scale meat sheep farms. The three main steps of our proposed methodology are constructing the dataset, improving the network structure, and detecting the ewe estrus behavior based on the lightweight network. First, the dataset was constructed by capturing images from videos with estrus crawling behavior, and data enhancement was performed to improve the generalization ability of the model. Second, the original Darknet-53 was replaced with EfficientNet-B0 for feature extraction in the YOLO V3 neural network to make the model lightweight and easier to deploy, thus shortening the detection time. In order to further increase the accuracy of detecting the ewe estrus behavior, we connected the feature layers to an SENet attention module. Finally, the comparative results demonstrated that the proposed method had higher detection accuracy and FPS, as well as a smaller model size, than YOLO V3. The precision of the proposed scheme was 99.44%, recall was 95.54%, F1 value was 97%, AP was 99.78%, FPS was 48.39 f/s, and model size was 40.6 MB. This study thus provides an accurate, efficient, and lightweight detection method for ewe estrus behavior in large-scale mutton sheep breeding. [ABSTRACT FROM AUTHOR]
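The reported F1 value follows directly from the stated precision and recall as their harmonic mean, which can be checked in one line:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the abstract's figures (precision 99.44%, recall 95.54%),
# the harmonic mean is about 0.9745, i.e. the reported 97%.
f1 = f1_score(0.9944, 0.9554)
```

The harmonic mean penalizes imbalance, so the F1 (97%) sits closer to the lower of the two inputs (recall, 95.54%) than a simple average would.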
- Published
- 2022
- Full Text
- View/download PDF
44. A Combined Approach for Accurate and Accelerated Teeth Detection on Cone Beam CT Images.
- Author
-
Du, Mingjun, Wu, Xueying, Ye, Ye, Fang, Shuobo, Zhang, Hengwei, and Chen, Ming
- Subjects
- *
CONE beam computed tomography , *COMPUTED tomography , *CONVOLUTIONAL neural networks , *TEETH - Abstract
Teeth detection and tooth segmentation are essential for processing Cone Beam Computed Tomography (CBCT) images. Their accuracy determines the credibility of subsequent applications, such as diagnosis, treatment planning in clinical practice, or other research dependent on automatic dental identification. The main problems are complex noise and metal artefacts, which affect the accuracy of teeth detection and segmentation with traditional algorithms. In this study, we proposed a teeth-detection method to avoid the problems above and to accelerate the operation speed. In our method, (1) a Convolutional Neural Network (CNN) was employed to classify layer classes; (2) images were chosen to perform Region of Interest (ROI) cropping; (3) in ROI regions, we used YOLO v3 and a multi-level combined teeth detection method to locate each tooth bounding box; and (4) we obtained tooth bounding boxes on all layers. We compared our method with a Faster R-CNN method commonly used in previous studies. Training and prediction time were shortened by 80% and 62% in our method, respectively. The Object Inclusion Ratio (OIR) metric of our method was 96.27%, while for the Faster R-CNN method it was 91.40%. When testing images with severe noise or different missing teeth, our method produces a stable result. In conclusion, our method of teeth detection on dental CBCT is practical and reliable due to its high prediction speed and robust detection. [ABSTRACT FROM AUTHOR]
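The Object Inclusion Ratio (OIR) metric is not defined in this abstract; assuming it measures the fraction of ground-truth teeth fully enclosed by at least one predicted bounding box, a sketch of that reading would be:

```python
def contains(outer, inner, tol=0.0):
    """True if `inner` box lies inside `outer`, both (x1, y1, x2, y2)."""
    return (outer[0] - tol <= inner[0] and outer[1] - tol <= inner[1]
            and inner[2] <= outer[2] + tol and inner[3] <= outer[3] + tol)

def object_inclusion_ratio(pred_boxes, gt_boxes):
    """Assumed OIR reading: fraction of ground-truth objects fully
    enclosed by some predicted box. Returns 1.0 for empty ground truth."""
    if not gt_boxes:
        return 1.0
    hit = sum(any(contains(p, g) for p in pred_boxes) for g in gt_boxes)
    return hit / len(gt_boxes)
```

Under this reading, an OIR of 96.27% would mean roughly 24 of every 25 ground-truth teeth fall entirely inside a predicted box; the paper itself should be consulted for the exact definition.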
- Published
- 2022
- Full Text
- View/download PDF
45. OBJECT DETECTION USING SEMI SUPERVISED LEARNING METHODS.
- Author
-
Selvaganapathy, Shymala Gowri, Hema Priya, N., Rathika, P. D., and Venkatachalam, K.
- Subjects
OBJECT recognition (Computer vision) ,MACHINE learning ,DEEP learning ,WHEAT ,DATA modeling ,APPLICATION program interfaces - Abstract
Object detection is used to identify objects in real time using deep learning algorithms. In this work, a wheat plant dataset collected from around the world is used to study wheat heads. Using global data, a common solution for measuring the number and size of wheat heads is formulated. YOLO v3 (You Only Look Once version 3) and Faster RCNN are real-time object detection algorithms used to identify objects in videos and images. The global wheat detection dataset is used for the prediction; it contains 3000+ training images and a few test images, with CSV files containing information about the ground-truth box labels of the images. To build a data pipeline for the model, the TensorFlow Data API or Keras data generators are used. [ABSTRACT FROM AUTHOR]
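A data pipeline along the lines described (a Keras-style generator yielding batches of image IDs and their CSV-derived ground-truth boxes) might be sketched as follows; all names here are hypothetical placeholders, not the authors' code.

```python
import random

def batch_generator(image_ids, box_labels, batch_size, shuffle=True, seed=0):
    """Minimal data-pipeline sketch in the spirit of a Keras data
    generator: yields (ids, labels) batches drawn from the ground-truth
    boxes, reshuffling the order at the start of every epoch."""
    rng = random.Random(seed)
    order = list(range(len(image_ids)))
    while True:  # generators passed to model.fit loop over epochs forever
        if shuffle:
            rng.shuffle(order)
        for start in range(0, len(order), batch_size):
            idx = order[start:start + batch_size]
            yield ([image_ids[i] for i in idx],
                   [box_labels[i] for i in idx])
```

In a real pipeline each ID would be replaced by a decoded, resized image tensor; the batching and per-epoch shuffle shown here are the part a `tf.data` pipeline or Keras `Sequence` would provide.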
- Published
- 2022
- Full Text
- View/download PDF
46. Low-Cost Smart Glasses for Blind Individuals using Raspberry Pi 2.
- Author
-
Cortez, Gracie Dan V., Valenton, Jan Cesar D., and Ibarra, Joseph Bryan G.
- Subjects
OPTICAL head-mounted displays ,BLIND people ,PEOPLE with visual disabilities ,APPARATUS for the blind ,WEARABLE technology - Abstract
Blindness is the inability of an individual to perceive light. Due to the lack of visual sense, blind individuals use guiding tools and human assistance to compensate for their visual impairment. This study developed smart glasses that can detect objects and text signage and give audio output as a guiding tool for blind individuals. In the prototype, a Raspberry Pi 2 Model B is used as the microprocessor, with a camera module as the detection sensor. The algorithms used for object detection and text detection are YOLOv3 and OCR, respectively. In text detection, OCR helps recognize both handwritten and digitized texts. MATLAB is the software used for the application of OCR, which is composed of three (3) parts: image capturing, extraction of text, and conversion of text to speech. In object detection, YOLOv3 is the algorithm used in the process, which comprises four (4) parts: data collection, data preparation, model training, and inference. Then the conversion of text to speech takes place. The objects that the prototype can detect are limited to 15 objects only. The prototype can function at both 150 lux and 107,527 lux luminance in object detection. However, there are discrepancies in the detection of some objects due to distance; the prototype cannot detect specific objects in certain trials. In text detection, the detection of text signage has 100% reliability. In addition, text detection used five font styles. In testing, the font style Calibri has a 30% error rate (using the word ENTRANCE) and a 20% error rate (using the word EXIT) due to its structure. The processing time of the prototype averages 1.916 s at a maximal walking pace and 1.673 s at a slow walking pace. [ABSTRACT FROM AUTHOR]
- Published
- 2022
47. Infusion port level detection for intravenous infusion based on Yolo v3 neural network
- Author
-
Zeyong Huang, Yuhong Li, Tingting Zhao, Peng Ying, Ying Fan, and Jun Li
- Subjects
deep learning ,liquid level detection ,yolo v3 ,image processing ,intravenous infusion ,Biotechnology ,TP248.13-248.65 ,Mathematics ,QA1-939 - Abstract
Purpose: In order to improve the accuracy of liquid level detection in intravenous left auxiliary vein infusion and reduce the pain of patients caused by blood returning during intravenous infusion, we propose a deep learning-based liquid level detection model of infusion levels to facilitate this operation. Method: We implemented a Yolo v3-based detection model for infusion level images in intravenous infusion and compared it with the SURF image processing technique and the RCNN and Fast-RCNN methods. Results: The model in this paper outperforms the comparison algorithms in Intersection over Union (IoU), precision, recall and test time. The liquid level detection model based on Yolo v3 has a precision of 0.9768, a recall rate of 0.9688, an IoU of 0.8943, and a test time of 2.9 s. Conclusion: The experimental results prove that the liquid level detection method based on deep learning has high accuracy and good real-time performance. This method can play an auxiliary role in the hospital environment and improve the work efficiency of medical workers.
- Published
- 2021
- Full Text
- View/download PDF
48. Intelligent recognition method of borehole image fractures for coal and rock based on YOLO v3
- Author
-
SU Yutong, YANG Weiyi, LI Junlin
- Subjects
borehole image ,fracture recognition ,yolo v3 ,deep learning ,residual network ,Mining engineering. Metallurgy ,TN1-997 - Abstract
Based on the detection and recognition technology of the YOLO v3 deep convolutional neural network, an automatic recognition method for fractures in digital borehole images is proposed. Firstly, the target detection principle of the new version of YOLO v3 is described in detail; then, coal mine borehole images are selected to build datasets in VOC 2007 format, and the Darknet-53 network structure is used for training. The experimental results show that the borehole image fracture detection method based on YOLO v3 can identify feature information quickly and accurately, which provides new technical support for the visual recognition of surrounding rock fractures.
- Published
- 2021
- Full Text
- View/download PDF
49. Railway Track Monitoring Using Deep Learning Techniques: A Preview
- Author
-
Bhat, Shreetha and Karegowda, Asha Gowda
- Published
- 2020
- Full Text
- View/download PDF
50. Yolo V4 for Advanced Traffic Sign Recognition With Synthetic Training Data Generated by Various GAN
- Author
-
Christine Dewi, Rung-Ching Chen, Yan-Ting Liu, Xiaoyi Jiang, and Kristoko Dwi Hartomo
- Subjects
DCGAN ,LSGAN ,synthetic images ,traffic sign ,WGAN ,Yolo V3 ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Convolutional Neural Networks (CNN) achieve excellent traffic sign identification given enough annotated training data. The dataset determines the quality of the complete CNN-based visual system. Unfortunately, traffic sign databases exist for only a minority of the world's nations. In this scenario, Generative Adversarial Networks (GAN) may be employed to produce more realistic and varied training pictures to supplement the actual set of images. The purpose of this research is to describe how the quality of synthetic pictures created by DCGAN, LSGAN, and WGAN is determined. Our work combines synthetic images with original images to enhance datasets and verifies the effectiveness of the synthetic datasets. We use different numbers and sizes of images for training. Likewise, the Structural Similarity Index (SSIM) and Mean Square Error (MSE) were employed to assess picture quality. Our study quantifies the SSIM difference between the synthetic and actual images. When additional images are used for training, the synthetic image exhibits a high degree of resemblance to the genuine image. The highest SSIM value was achieved when using 200 total images as input and a 32 × 32 image size. Further, we augment the original picture dataset with synthetic pictures and compare the original image model to the synthetic image model. For this experiment, we use recent iterations of Yolo: Yolo V3 and Yolo V4. After mixing the real images with the synthetic images produced by LSGAN, the recognition performance improved, achieving an accuracy of 84.9% on Yolo V3 and an accuracy of 89.33% on Yolo V4.
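The MSE and SSIM image-quality measures used in this record can be sketched for grayscale inputs. The single-window (global) SSIM below uses the standard constants but skips the usual sliding-window averaging, so it is a simplification of the full metric, not the paper's exact computation.

```python
def mse(img_a, img_b):
    """Mean squared error between two equally sized grayscale images
    given as lists of pixel rows; 0 means identical."""
    n = sum(len(row) for row in img_a)
    return sum((a - b) ** 2
               for ra, rb in zip(img_a, img_b)
               for a, b in zip(ra, rb)) / n

def ssim_global(a_pixels, b_pixels, L=255):
    """Single-window SSIM over flat pixel lists with the standard
    constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2; 1.0 means identical."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    n = len(a_pixels)
    mu_a, mu_b = sum(a_pixels) / n, sum(b_pixels) / n
    var_a = sum((x - mu_a) ** 2 for x in a_pixels) / n
    var_b = sum((x - mu_b) ** 2 for x in b_pixels) / n
    cov = sum((x - mu_a) * (y - mu_b)
              for x, y in zip(a_pixels, b_pixels)) / n
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))
```

MSE falls as synthetic images approach the originals, while SSIM rises toward 1.0, which is the behavior the study relies on when ranking GAN outputs.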
- Published
- 2021
- Full Text
- View/download PDF