110 results for "real-time recognition"
Search Results
2. Transparent triboelectric nanogenerators with high flexibility for human-interactive sensing and real-time monitoring
- Author
- Wan, Jiajia, Zeng, Xiaoxue, Chen, Wenlong, Zong, Yuting, Li, Peng, Chen, Zhenming, Yin, Xianze, and Huang, Junjun
- Published
- 2025
- Full Text
- View/download PDF
3. Triboelectric signal enhancement via interface structural design and integrated with deep learning for real-time online material recognition
- Author
- Shen, Cheng, Chen, Jingyi, Liu, Yue, Chen, Zhenming, and Huang, Junjun
- Published
- 2024
- Full Text
- View/download PDF
4. SLFCNet: an ultra-lightweight and efficient strawberry feature classification network
- Author
- Wenchao Xu, Yangxu Wang, and Jiahao Yang
- Subjects
- Strawberry, Lightweight, Detection and classification, Real-time recognition, Automated management, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Background: As modern agricultural technology advances, the automated detection, classification, and harvesting of strawberries have become an inevitable trend. Among these tasks, the classification of strawberries is a pivotal step. Nevertheless, existing object detection methods struggle with substantial computational demands, high resource utilization, and reduced detection efficiency. These challenges make deployment on edge devices difficult and lead to suboptimal user experiences. Methods: In this study, we developed a lightweight model capable of real-time detection and classification of strawberry fruit, named the Strawberry Lightweight Feature Classify Network (SLFCNet). This system incorporates a lightweight encoder and a self-designed feature extraction module called the Combined Convolutional Concatenation and Sequential Convolutional (C3SC) module. While maintaining model compactness, this architecture significantly enhances its feature decoding capabilities. To evaluate the model's generalization potential, we utilized a high-resolution strawberry dataset collected directly in the field. Using image augmentation techniques, we experimentally compared manually counted data against the model's inference-based detection and classification results. Results: The SLFCNet model achieves an average precision of 98.9% on the mAP@0.5 metric, with a precision of 94.7% and a recall of 93.2%. Notably, SLFCNet's streamlined design results in a compact model size of only 3.57 MB, and on an economical GTX 1080 Ti GPU the processing time per image is a mere 4.1 ms. The model can therefore run smoothly on edge devices with real-time performance, offering a novel solution for the automated management of strawberry harvesting and picking.
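The real-time claim in this abstract can be sanity-checked with simple arithmetic: a per-image latency of 4.1 ms corresponds to roughly 244 frames per second, far above a typical 30 fps video budget. A minimal sketch (the function name and the 30 fps target are illustrative, not from the paper):

```python
def real_time_budget(latency_ms, target_fps=30.0):
    """Convert a per-image latency into throughput (fps) and
    check it against a frame-rate target."""
    fps = 1000.0 / latency_ms
    return fps, fps >= target_fps

# SLFCNet's reported 4.1 ms per image on a GTX 1080 Ti:
fps, meets_target = real_time_budget(4.1)
```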
- Published
- 2025
- Full Text
- View/download PDF
5. Automatic Quality Assessment of Pork Belly via Deep Learning and Ultrasound Imaging.
- Author
- Wang, Tianshuo, Yang, Huan, Zhang, Chunlei, Chao, Xiaohuan, Liu, Mingzheng, Chen, Jiahao, Liu, Shuhan, and Zhou, Bo
- Subjects
- IMAGE recognition (Computer vision), ADIPOSE tissues, LEAN management, ULTRASONIC imaging, VIDEO processing, DEEP learning
- Abstract
Simple Summary: This study presents an automated intelligent technique for real-time identification and assessment of pork belly layers in B-ultrasound images. This non-invasive method can boost the efficiency of breeders in evaluating the layer count within pork belly. By integrating the imaging features of B-ultrasound with a deep learning architecture tailored for image classification, this approach delivers high-precision recognition and categorization of pork belly strata. The findings indicated that the deep learning model adeptly delineated the boundaries between adipose and lean tissues, precisely discerning various layer counts. The system was successfully implemented in a local setting and is now primed for practical deployment. Pork belly, prized for its unique flavor and texture, is often overlooked in breeding programs that prioritize lean meat production. The quality of pork belly is determined by the number and distribution of muscle and fat layers. This study aimed to assess the number of pork belly layers using deep learning techniques. Initially, semantic segmentation was considered, but the intersection over union (IoU) scores for the segmented parts were below 70%, which is insufficient for practical application. Consequently, the focus shifted to image classification methods. Based on the number of fat and muscle layers, a dataset was categorized into three groups: three layers (n = 1811), five layers (n = 1294), and seven layers (n = 879). Drawing upon established model architectures, the initial model was refined for the task of learning and predicting layer traits from B-ultrasound images of pork belly. After a thorough evaluation of various performance metrics, the ResNet18 model emerged as the most effective, achieving a remarkable training set accuracy of 99.99% and a validation set accuracy of 96.22%, with corresponding loss values of 0.1478 and 0.1976. 
The robustness of the model was confirmed through three interpretable analysis methods, including grad-CAM, ensuring its reliability. Furthermore, the model was successfully deployed in a local setting to process B-ultrasound video frames in real time, consistently identifying the pork belly layer count with a confidence level exceeding 70%. By employing a scoring system with 100 points as the threshold, the number of pork belly layers in vivo was categorized into superior and inferior grades. This innovative system offers immediate decision-making support for breeding determinations and presents a highly efficient and precise method for assessment of pork belly layers. [ABSTRACT FROM AUTHOR]
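The abstract rejects semantic segmentation because intersection over union (IoU) fell below 70%. For reference, IoU compares a predicted segmentation mask against the ground truth; a minimal sketch over boolean masks (not the authors' code):

```python
def iou(pred, truth):
    """Intersection-over-union of two equal-length binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 0.0

# A prediction covering half of a 2-pixel ground-truth region:
score = iou([1, 1, 0, 0], [1, 0, 0, 0])  # -> 0.5
```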
- Published
- 2024
- Full Text
- View/download PDF
6. Eco-Watch Guardian AI-Enhanced Drone Patrols against Poaching.
- Author
- D., Naren, S., Deepak, R., Abhisheik, and R., Subhashini
- Subjects
- MACHINE learning, POACHING, WILDLIFE monitoring, DRONE aircraft, AUTOMOTIVE navigation systems, ANIMAL populations
- Abstract
Conservationists are searching for cutting-edge technical solutions in response to the growing problem of poaching and its catastrophic effects on animal populations. Unmanned aerial vehicles, commonly known as drones, have shown promise as a tool for wildlife monitoring and anti-poaching operations in recent years. Drone data can be analyzed to provide important insights into animal behavior, migration patterns, and habitat conditions, assisting in the development of more informed conservation strategies. The device is technologically sophisticated in that it does not harm the targeted animals but instead causes discomfort that prompts a spontaneous retreat. A graph regularizer is applied to successive frames to ensure that their volume patterns remain coherent across time. Equipped with navigation systems such as GPS and optical flow, drones can navigate themselves practically using today's fly-by-wire techniques. Possible uses for drones in tandem with other technologies include soil inspections and satellite surveillance. Additionally, the integration of artificial intelligence and machine learning algorithms improves the drone's ability to identify and differentiate between poachers and legitimate visitors or researchers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
7. An Efficient Real-Time Recognition of Static Kannada Sign Language
- Author
- Mohan Murali, K. S., Shet, Prasad, Chaitanya Madhav, R., Likhith, R., Mamatha, H. R., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Roy, Satyabrata, editor, Sinwar, Deepak, editor, Dey, Nilanjan, editor, Perumal, Thinagaran, editor, and R. S. Tavares, João Manuel, editor
- Published
- 2024
- Full Text
- View/download PDF
8. Real-Time Hand Gesture Recognition for American Sign Language Using CNN, Mediapipe and Convexity Approach
- Author
- Bhatt, Vikas, Dash, Ratnakar, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Chauhan, Naveen, editor, Yadav, Divakar, editor, Verma, Gyanendra K., editor, Soni, Badal, editor, and Lara, Jorge Morato, editor
- Published
- 2024
- Full Text
- View/download PDF
9. Real-time emotion identification system using voice information
- Author
- Riki FUKUYOSHI and Masashi NAKAYAMA
- Subjects
- speech analysis, machine learning, acoustic feature, emotion estimation, real-time recognition, Mechanical engineering and machinery, TJ1-1570, Engineering machinery, tools, and implements, TA213-215
- Abstract
Conventional speech emotion identification generally uses sentences as the unit of analysis. However, human emotions frequently change instantaneously when a speaker hears a specific word or keyword, so capturing more fine-grained emotional expressions is important for emotion recognition. We therefore propose a real-time emotion identification system that uses frames, which are shorter than conventional linguistic units such as sentences and phrases, as the unit of analysis for acoustic features, enabling emotion recognition at the level of words and morphemes.
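Frame-level analysis of this kind typically slices the waveform into short overlapping windows (for speech, commonly around 25 ms frames with a 10 ms hop) and extracts acoustic features per frame. A minimal sketch of the framing step (the parameter values are common defaults, not taken from the paper):

```python
def frame_signal(samples, frame_len, hop):
    """Split a 1-D list of samples into overlapping fixed-length frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

# 1000 samples at 16 kHz with 400-sample (25 ms) frames and a
# 160-sample (10 ms) hop yield 4 frames.
frames = frame_signal(list(range(1000)), frame_len=400, hop=160)
```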
- Published
- 2024
- Full Text
- View/download PDF
10. Improved Recognition of Kurdish Sign Language Using Modified CNN.
- Author
- Hama Rawf, Karwan Mahdi, Abdulrahman, Ayub Othman, and Mohammed, Aree Ali
- Subjects
- SIGN language, CONVOLUTIONAL neural networks, HUFFMAN codes
- Abstract
The deaf community benefits from Sign Language Recognition (SLR), which is used to support communication, education, and socialization. In this study, the results of using a modified Convolutional Neural Network (CNN) technique to develop a model for real-time Kurdish sign recognition are presented. Recognizing the Kurdish alphabet is the primary focus of this investigation. Using a variety of activation functions over several iterations, the model was trained and then used to make predictions on the KuSL2023 dataset. The dataset contains a total of 71,400 pictures, drawn from two separate sources, representing the 34 signs of the Kurdish alphabet. A large collection of real user images is used to evaluate the accuracy of the suggested strategy. A novel Kurdish Sign Language (KuSL) classification model is presented in this research. Furthermore, the hand region must be identified in pictures with complex backdrops, including lighting, ambience, and image color changes of varying intensities. By using a genuine public dataset, real-time classification, and person independence while maintaining high classification accuracy, the proposed technique improves over previous research on KuSL detection. The collected findings demonstrate that the proposed system offers improved performance, with an average training accuracy of 99.05% for both classification and prediction models. Compared to earlier research on KuSL, these outcomes indicate very strong performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Real-time continuous handwritten trajectories recognition based on a regression-based temporal pyramid network.
- Author
- Jian, Chengfeng, Wang, Mengqi, Ye, Min, and Zhang, Meiyu
- Abstract
In the field of dynamic gesture trajectory recognition, it is difficult to recognize the semantics of continuous handwritten trajectories in real time because accurate trajectory segmentation is hard. This paper focuses on semantic recognition of the handwritten trajectories of continuous numeric characters and proposes a real-time recognition method based on a regression-based temporal pyramid network. Firstly, we use corner detection algorithms to obtain the corner points of the fingers and then construct suitable convex functions to obtain the unique fingertip point. Next, we hierarchically organize the extracted fingertip trajectory features with a temporal pyramid and aggregate the features after spatial semantic modulation and temporal rate modulation. Finally, using the idea of regression detection, we predict and classify the extracted trajectory features in a specialized fully connected layer with N neural nodes. According to the experimental results, our method achieves a recognition accuracy of up to 78.87% at a recognition speed of 32.69 fps. The method strikes a good balance between recognition accuracy and speed, indicating that our approach has significant advantages for real-time recognition of continuous handwritten trajectories. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. AI in the Sky: Developing Real-Time UAV Recognition Systems to Enhance Military Security.
- Author
- Alzboon, Mowafaq Salem, Alqaraleh, Muhyeeddin, and Al-Batah, Mohammad Subhi
- Subjects
- MACHINE learning, SUPPORT vector machines, MILITARY surveillance, IMAGE recognition (Computer vision), RANDOM forest algorithms
- Abstract
In an era where Unmanned Aerial Vehicles (UAVs) have become crucial in military surveillance and operations, the need for real-time and accurate UAV recognition is increasingly critical. The widespread use of UAVs presents various security threats, requiring systems that can differentiate between UAVs and benign objects, such as birds. This study conducts a comparative analysis of advanced machine learning models to address the challenge of aerial classification in diverse environmental conditions without system redesign. Large datasets were used to train and validate models, including Neural Networks, Support Vector Machines, and ensemble methods such as Random Forests and Gradient Boosting Machines. These models were evaluated based on accuracy and computational efficiency, key factors for real-time application. The results indicate that Neural Networks provide the best performance, demonstrating high accuracy in distinguishing UAVs from birds. The findings emphasize that Neural Networks have significant potential to enhance operational security and improve the allocation of defense resources. Overall, this research highlights the effectiveness of machine learning in real-time UAV recognition and advocates for the integration of Neural Networks into military defense systems to strengthen decision-making and security operations. Regular updates to these models are recommended to keep pace with advancements in UAV technology, including more agile and stealthier designs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. CityUPlaces: a new dataset for efficient vision-based recognition.
- Author
- Wu, Haowei, Wu, Gengshen, Hu, Jinming, Xu, Shuaixin, Zhang, Songhao, and Liu, Yi
- Abstract
In this paper, we present a new dataset named CityUPlaces, comprising 17,771 images of various campus buildings, organized into 9 major categories and 18 minor categories derived from the internal and external scenes of these identities. The categories are imbalanced, ranging from 344 to 1,539 images, with diverse variations in angle, attractions, views, illumination, etc. Compared to existing large-scale datasets, the proposed dataset shows its strengths in two aspects: (1) it contains a moderate number of both indoor and outdoor images under different conditions for each identity, enabling diverse real-time recognition tasks by featuring hierarchical categorization with a reasonable dataset size; (2) the issue of label noise is significantly alleviated for each identity through dedicated annotation and filtering stages that facilitate subsequent tasks. This provides great flexibility to perform these vision-based tasks with different learning objectives in real time. Moreover, we propose a novel lightweight classification framework that outperforms state-of-the-art baselines on the dataset with relatively low computational complexity (fewer training parameters and floating-point operations per second), by taking advantage of a coarse-to-fine learning strategy in a self-transfer manner. This further confirms the applicability of the new dataset. We also conduct experiments on the MIT Indoors and Paris datasets, where the proposed method still achieves superior performance, validating its efficacy. The dataset and code will be publicly available in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. A multi-microcontroller-based hardware for deploying Tiny machine learning model.
- Author
- Van-Khanh Nguyen, Vy-Khang Tran, Hai Pham, Van-Muot Nguyen, Hoang-Dung Nguyen, and Chi-Ngon Nguyen
- Subjects
- MACHINE learning, MICROCONTROLLERS, HARDWARE
- Abstract
Tiny machine learning (TinyML) is intended to run on edge devices built around resource-constrained microcontroller units (MCUs), so finding a platform that deploys TinyML models effectively is crucial. This paper proposes a multi-microcontroller hardware platform for efficiently running TinyML models. The proposed hardware consists of two dual-core MCUs: the first acquires and processes input data, while the second executes the trained TinyML network. The two MCUs communicate using the universal asynchronous receiver-transmitter (UART) protocol. A multitasking programming technique is applied on the first MCU to optimize the pre-processing of new data. A TinyML model for three-phase motor fault classification was deployed on the proposed system to evaluate its effectiveness. The experimental results show that the proposed hardware platform reduced the total inference time of the TinyML model, including data pre-processing, by 34.8% compared with a single-microcontroller platform. [ABSTRACT FROM AUTHOR]
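The reported 34.8% saving is consistent with pipelining: while the second MCU runs inference on frame n, the first MCU pre-processes frame n+1, so in steady state only the slower stage limits throughput. A rough model of the effect (the stage durations below are hypothetical, chosen only to illustrate the idea, not taken from the paper):

```python
def serial_time(n, pre, infer):
    """Total time when a single MCU pre-processes then infers, item by item."""
    return n * (pre + infer)

def pipelined_time(n, pre, infer):
    """Two MCUs overlap stages: after the first pre-processing fill,
    each item costs only the slower of the two stages."""
    return pre + n * max(pre, infer)

# Hypothetical 5 ms pre-processing + 8 ms inference over 100 items:
saving = 1 - pipelined_time(100, 5, 8) / serial_time(100, 5, 8)  # ~38%
```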
- Published
- 2023
- Full Text
- View/download PDF
15. Pepper leaf disease recognition based on enhanced lightweight convolutional neural networks.
- Author
- Min Dai, Wenjing Sun, Lixing Wang, Dorjoy, Md Mehedi Hassan, Shanwen Zhang, Hong Miao, Liangxiu Han, Xin Zhang, and Mingyou Wang
- Subjects
- CONVOLUTIONAL neural networks, RECOGNITION (Psychology)
- Abstract
Pepper leaf disease identification based on convolutional neural networks (CNNs) is an interesting research area. However, most existing CNN-based pepper leaf disease detection models are suboptimal in terms of accuracy and computing performance. In particular, it is challenging to apply CNNs on embedded portable devices due to the large amount of computation and memory consumption required for leaf disease recognition in large fields. Therefore, this paper introduces an enhanced lightweight model based on the GoogLeNet architecture. The initial step involves compressing the Inception structure to reduce model parameters, leading to a remarkable enhancement in recognition speed. Furthermore, the network incorporates a spatial pyramid pooling structure to seamlessly integrate local and global features. The improved model was then trained on a real dataset of 9,183 images covering 6 types of pepper diseases. The cross-validation results show that the model accuracy is 97.87%, which is 6% higher than that of GoogLeNet based on Inception-V1 and Inception-V3. The memory requirement of the model is only 10.3 MB, a reduction of 52.31%-86.69% compared to GoogLeNet. We also compared the model with existing CNN-based models including AlexNet, ResNet-50, and MobileNet-V2; the average inference time of the proposed model decreases by 61.49%, 41.78%, and 23.81%, respectively. These results show that the proposed enhanced model significantly improves accuracy and computing efficiency, with the potential to improve productivity in the pepper farming industry. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. A Systematic Procedure for Comparing Template-Based Gesture Recognizers
- Author
- Ousmer, Mehdi, Sluÿters, Arthur, Magrofuoco, Nathan, Roselli, Paolo, Vanderdonckt, Jean, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Kurosu, Masaaki, editor, Yamamoto, Sakae, editor, Mori, Hirohiko, editor, Schmorrow, Dylan D., editor, Fidopiastis, Cali M., editor, Streitz, Norbert A., editor, and Konomi, Shin'ichi, editor
- Published
- 2022
- Full Text
- View/download PDF
17. Automatic vessel plate number recognition for surface unmanned vehicles with marine applications.
- Author
- Renran Zhang, Lei Zhang, Yumin Su, Qingze Yu, and Gaoyi Bai
- Subjects
- AUTOMOBILE license plates, IMAGE recognition (Computer vision), RECOGNITION (Psychology), SHIPS, K-means clustering
- Abstract
In the practical application scenarios of unmanned surface vehicles (USVs), it is necessary to identify a vessel in order to accomplish tasks. Considering the sensors equipped on a USV, visible images provide the fastest and most efficient way of determining the hull number. Current studies divide the task of recognizing a vessel plate number into two independent subtasks, text localization in the image and text recognition, and researchers focus on improving the accuracy of localization and recognition separately. However, these methods cannot be directly applied to USVs due to the difference between the two application scenarios. In addition, because the two independent models run serially, error inevitably propagates between them and time costs increase, resulting in less satisfactory performance. In view of the above, we propose a method based on an object detection model for recognizing vessel plate numbers in complicated sea environments, applied to USVs. The accuracy and stability of the model are improved by a recursive gated convolution structure, a decoupled head, a reconstructed loss function, and redesigned anchor box sizes. To facilitate this research, a vessel plate number dataset is established in this paper. Furthermore, we conducted an experiment using a USV platform in the South China Sea. Compared with the original YOLOv5, the mAP (mean Average Precision) of the proposed method is increased by 6.23%. The method was deployed on the "Tian Xing" USV platform, and the experimental results indicate that both the ship and the vessel plate number can be recognized in real time. This is of great significance in both the civilian and military sectors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. A defect detection method for Akidzuki pears based on computer vision and deep learning.
- Author
- Wang, Baoya, Hua, Jin, Xia, Lianming, Lu, Fangyuan, Sun, Xia, Guo, Yemin, and Su, Dianbin
- Subjects
- COMPUTER vision, PEARS, IMAGING systems, COMPUTER systems, DEEP learning, NECK
- Abstract
To quickly and accurately detect defects in Akidzuki (Pyrus pyrifolia Nakai) pears after harvest, this study aims to develop a method for Akidzuki pear defect detection based on computer vision and deep learning models. It mainly consists of obtaining high-quality images using an image acquisition system and proposing a new YOLO-AP detection model to identify defects in Akidzuki pears. The model uses YOLOv5 as the main architecture. The GhostDynamicConv (GDC) module is obtained by replacing the standard convolution in the Ghost module with a dynamic convolution. The C3-GhostDynamicConv (C3-GDC) module is obtained by replacing the Bottleneck module of C3 in Neck with the GDC module, simplifying the network while improving the model's accuracy. Meanwhile, Bottleneck Attention Module (BAM) is introduced after C3-GDC to refine the intermediate features. In addition, the original bounding box loss function is replaced with Wise-IoUv3 (WIoUv3) to accelerate the model convergence. The results demonstrate that YOLO-AP performs better in Akidzuki pear defect detection, with a mAP@0.5 of 0.939, a recall of 0.921, and a detection speed of 454.5 fps (2.2 ms per image). These values represent a 4.2 %, 3.5 %, and 1.4 % improvement over the baseline model. Comparing YOLO-AP with the updated YOLOv9 and other detection models, YOLO-AP is more accurate and faster. These findings demonstrated that the proposed method can detect Akidzuki pear defects in real time, efficiently and accurately, providing technical support for post-harvest defect detection. • A computer vision system was constructed for Akidzuki pear RGB image acquisition. • An overall accuracy of 93.9 % was obtained using the proposed YOLO-AP model. • Akidzuki pears are distinguished according to the type of defect. • The proposed method is more promising than other target detection methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Improved Recognition of Kurdish Sign Language Using Modified CNN
- Author
- Karwan Mahdi Hama Rawf, Ayub Othman Abdulrahman, and Aree Ali Mohammed
- Subjects
- sign language recognition (SLR), Kurdish alphabet, Kurdish sign language (KuSL), real-time recognition, CNN, gesture recognition hand shape, Electronic computers. Computer science, QA75.5-76.95
- Abstract
The deaf community benefits from Sign Language Recognition (SLR), which is used to support communication, education, and socialization. In this study, the results of using a modified Convolutional Neural Network (CNN) technique to develop a model for real-time Kurdish sign recognition are presented. Recognizing the Kurdish alphabet is the primary focus of this investigation. Using a variety of activation functions over several iterations, the model was trained and then used to make predictions on the KuSL2023 dataset. The dataset contains a total of 71,400 pictures, drawn from two separate sources, representing the 34 signs of the Kurdish alphabet. A large collection of real user images is used to evaluate the accuracy of the suggested strategy. A novel Kurdish Sign Language (KuSL) classification model is presented in this research. Furthermore, the hand region must be identified in pictures with complex backdrops, including lighting, ambience, and image color changes of varying intensities. By using a genuine public dataset, real-time classification, and person independence while maintaining high classification accuracy, the proposed technique improves over previous research on KuSL detection. The collected findings demonstrate that the proposed system offers improved performance, with an average training accuracy of 99.05% for both classification and prediction models. Compared to earlier research on KuSL, these outcomes indicate very strong performance.
- Published
- 2024
- Full Text
- View/download PDF
20. Improvement in Error Recognition of Real-Time Football Images by an Object-Augmented AI Model for Similar Objects.
- Author
- Han, Junsu, Kang, Kiho, and Kim, Jongwon
- Subjects
- ARTIFICIAL intelligence, IMAGE recognition (Computer vision), SOCCER
- Abstract
In this paper, we analyze the recognition errors of a general AI recognition model and propose a structure-modified, object-augmented AI recognition model. This model has object detection features that distinguish specific target objects in areas where players with similar shapes and characteristics overlap in real-time football images. We implemented the AI recognition model by reinforcing the training dataset and augmenting the object classes. In addition, we show that the recognition rate increased after modifying the model structure based on an analysis of the recognition errors of general AI recognition models. Recognition errors were reduced by adding HSV processing modules and learning differentiated classes (overlapped player groups) to the general AI recognition model. We ran experiments comparing the error-reduction performance of the general AI model and the proposed AI model on the same real-time football images, and confirmed that the proposed model can reduce errors. The results show that the proposed AI model structure, which recognizes similar objects in real time and in various environments, can be used to analyze football games. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Image Pre-processing and Segmentation for Real-Time Subsea Corrosion Inspection
- Author
- Pirie, Craig, Moreno-Garcia, Carlos Francisco, Angelov, Plamen, Series Editor, Kozma, Robert, Series Editor, Iliadis, Lazaros, editor, Macintyre, John, editor, Jayne, Chrisina, editor, and Pimenidis, Elias, editor
- Published
- 2021
- Full Text
- View/download PDF
22. Determining Optimal Frame Processing Strategies for Real-Time Document Recognition Systems
- Author
- Bulatov, Konstantin, Arlazarov, Vladimir V., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Lladós, Josep, editor, Lopresti, Daniel, editor, and Uchida, Seiichi, editor
- Published
- 2021
- Full Text
- View/download PDF
23. Real-Time Recognition of Motor Vehicle Whistle with Convolutional Neural Network
- Author
- Yan, Ming, Wang, Chaoli, Shen, Song, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martin, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Jia, Yingmin, editor, Du, Junping, editor, and Zhang, Weicun, editor
- Published
- 2020
- Full Text
- View/download PDF
24. Recognition of grape leaf diseases using MobileNetV3 and deep transfer learning.
- Author
- Xiang Yin, Wenhua Li, Zhen Li, and Lili Yi
- Subjects
- DEEP learning, GRAPE industry, DATA augmentation, DISTRIBUTION (Probability theory), INFECTIOUS disease transmission, GRAPES
- Abstract
Timely diagnosis and accurate identification of grape leaf diseases are decisive for controlling the spread of disease and ensuring the healthy development of the grape industry. The objective of this research was to propose a simple and efficient approach to improve grape leaf disease identification accuracy with limited computing resources and a limited training image dataset, based on deep transfer learning and an improved MobileNetV3 model (GLD-DTL). A pre-training model was obtained by training MobileNetV3 on the ImageNet dataset to extract common features of grape leaves, and the last convolution layer of the pre-training model was modified by adding a batch normalization function. A dropout layer followed by a fully connected layer was used to improve the generalization ability of the pre-training model and realize a weight matrix that quantifies the scores of six diseases, on top of which a Softmax layer was added to give the probability distribution over the six diseases. Finally, a grape leaf disease dataset, constructed with data augmentation and image annotation technologies, was fed into the modified network to retrain it and obtain the grape leaf disease recognition (GLDR) model. Results showed that the proposed GLD-DTL approach performed better than some recent approaches: identification accuracy was as high as 99.84% while the model size was as small as 30 MB. [ABSTRACT FROM AUTHOR]
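The final step described in this abstract, converting the six per-disease scores from the fully connected layer into a probability distribution, is the standard softmax function. A minimal sketch (the example scores are made up for illustration):

```python
import math

def softmax(scores):
    """Map raw class scores to probabilities that sum to 1."""
    m = max(scores)                       # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Six hypothetical disease scores; the largest score receives the
# highest probability.
probs = softmax([2.0, 1.0, 0.5, 0.1, -0.3, -1.0])
```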
- Published
- 2022
- Full Text
- View/download PDF
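The modified classification head described in the abstract above (a fully connected layer producing six disease scores, followed by Softmax) can be sketched in a few lines of numpy. This is only an illustration of the head's computation, not the trained model: the 128-dimensional feature vector and the random weights are placeholders.

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
features = rng.normal(size=(1, 128))   # pooled CNN features (placeholder size)
W = rng.normal(size=(128, 6))          # fully connected layer scoring 6 diseases
b = np.zeros(6)
probs = softmax(features @ W + b)      # probability distribution over the 6 diseases
```

The softmax output rows always sum to one, which is what lets the top layer be read as a probability distribution over the disease classes.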
25. On-board Training Strategy for IMU-Based Real-Time Locomotion Recognition of Transtibial Amputees With Robotic Prostheses
- Author
-
Dongfang Xu and Qining Wang
- Subjects
robotic transtibial prosthesis ,inertial measurement unit ,on-board training ,real-time recognition ,human-machine interaction ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
The paper puts forward an on-board model-training strategy and develops a real-time human locomotion mode recognition study based on the trained model, utilizing two inertial measurement units (IMUs) of a robotic transtibial prosthesis. Three transtibial amputees were recruited as subjects to perform five locomotion modes (level ground walking, stair ascending, stair descending, ramp ascending, and ramp descending) with robotic prostheses. An interaction interface was designed to collect sensor data and to control model training and recognition. In this study, the analysis-of-variance ratio (no more than 0.05) reflects the good repeatability of gait. The on-board training times for SVM (Support Vector Machine), QDA (Quadratic Discriminant Analysis), and LDA (Linear Discriminant Analysis) are 89, 25, and 10 s on a 10,000 × 80 training data set, respectively. Each recognition process takes about 13.4, 5.36, and 0.067 ms for SVM, QDA, and LDA, respectively. Taking the recognition accuracy of previous studies and the time consumption into consideration, we chose QDA for the real-time recognition study. The real-time recognition accuracy is 97.19 ± 0.36% based on QDA, with more than 95% recognition accuracy for each locomotion mode. The receiver operating characteristic also shows the good quality of the QDA classifiers. This study provides a preliminary interaction design for human–machine prosthetics in future clinical applications. It adopts only two IMUs rather than multi-sensor fusion to improve integration and wearing convenience, while maintaining recognition accuracy comparable to multi-sensor fusion.
- Published
- 2020
- Full Text
- View/download PDF
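The abstract above selects QDA for its speed/accuracy trade-off. As a rough illustration of what a QDA classifier computes (not the authors' implementation; the data here are synthetic), a minimal numpy version fits one Gaussian per locomotion class and labels a sample by the largest log-likelihood:

```python
import numpy as np

class TinyQDA:
    """Minimal quadratic discriminant analysis: one full-covariance Gaussian per class."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = []
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
            self.params_.append((mu, np.linalg.inv(cov),
                                 np.log(np.linalg.det(cov)),
                                 np.log(len(Xc) / len(X))))
        return self

    def predict(self, X):
        scores = []
        for mu, inv_cov, logdet, logprior in self.params_:
            d = X - mu
            # quadratic discriminant: log prior - 0.5 * (log|Sigma| + d^T Sigma^-1 d)
            maha = np.einsum('ij,jk,ik->i', d, inv_cov, d)
            scores.append(logprior - 0.5 * (logdet + maha))
        return self.classes_[np.argmax(scores, axis=0)]

rng = np.random.default_rng(1)
# two well-separated synthetic "locomotion modes" in a 4-D feature space
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 4)),
               rng.normal(3.0, 1.5, size=(200, 4))])
y = np.array([0] * 200 + [1] * 200)
qda = TinyQDA().fit(X, y)
acc = (qda.predict(X) == y).mean()
```

Prediction is just a few matrix operations per class, which is consistent with the sub-millisecond decision times the abstract reports for discriminant classifiers.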
26. Two viewpoints based real‐time recognition for hand gestures.
- Author
-
Krishan Kumar, Amit, Kaushal Kumar, Abhishek, and Guo, Shuli
- Abstract
It is extremely challenging to achieve excellent accuracy in gesture recognition with an approach whose recognition-time computational cost is low. This study compares the hand gesture recognition accuracy of a single-viewpoint set-up with the proposed two-viewpoint set-up for different classification techniques. The efficacy of the presented approach is verified practically with various image processing, feature extraction and classification techniques. A two-camera system makes geometry learning and a three-dimensional (3D) view feasible compared to a single-camera system. Geometrical features from the additional viewpoint contribute to 3D view estimation of the hand gesture and also improve classification accuracy. Experimental results demonstrate that the proposed method achieves a higher recognition rate than the single-camera system and performs well even with simple classifiers such as the nearest neighbour and decision tree. Classification within 1 s is considered real-time in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
27. On-board Training Strategy for IMU-Based Real-Time Locomotion Recognition of Transtibial Amputees With Robotic Prostheses.
- Author
-
Xu, Dongfang and Wang, Qining
- Subjects
FISHER discriminant analysis ,HUMAN locomotion ,RECEIVER operating characteristic curves ,SUPPORT vector machines ,PROSTHETICS ,ARTIFICIAL arms ,ARTIFICIAL limbs - Abstract
The paper puts forward an on-board model-training strategy and develops a real-time human locomotion mode recognition study based on the trained model, utilizing two inertial measurement units (IMUs) of a robotic transtibial prosthesis. Three transtibial amputees were recruited as subjects to perform five locomotion modes (level ground walking, stair ascending, stair descending, ramp ascending, and ramp descending) with robotic prostheses. An interaction interface was designed to collect sensor data and to control model training and recognition. In this study, the analysis-of-variance ratio (no more than 0.05) reflects the good repeatability of gait. The on-board training times for SVM (Support Vector Machine), QDA (Quadratic Discriminant Analysis), and LDA (Linear Discriminant Analysis) are 89, 25, and 10 s on a 10,000 × 80 training data set, respectively. Each recognition process takes about 13.4, 5.36, and 0.067 ms for SVM, QDA, and LDA, respectively. Taking the recognition accuracy of previous studies and the time consumption into consideration, we chose QDA for the real-time recognition study. The real-time recognition accuracy is 97.19 ± 0.36% based on QDA, with more than 95% recognition accuracy for each locomotion mode. The receiver operating characteristic also shows the good quality of the QDA classifiers. This study provides a preliminary interaction design for human–machine prosthetics in future clinical applications. It adopts only two IMUs rather than multi-sensor fusion to improve integration and wearing convenience, while maintaining recognition accuracy comparable to multi-sensor fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. A real-time walking pattern recognition method for soft knee power assist wear.
- Author
-
Wenkang Wang, Liancun Zhang, Juan Liu, Bainan Zhang, and Qiang Huang
- Subjects
PATTERN recognition systems ,LEG ,UNITS of measurement ,CLASSIFICATION algorithms ,OBJECT recognition (Computer vision) - Abstract
Real-time recognition of walking-related activities is an important function that lower extremity assistive devices should possess. This article presents a real-time walking pattern recognition method for soft knee power assist wear. The recognition method employs the rotation angles of thighs and shanks as well as the knee joint angles collected by the inertial measurement units as input signals and adopts the rule-based classification algorithm to achieve the real-time recognition of three most common walking patterns, that is, level-ground walking, stair ascent, and stair descent. To evaluate the recognition performance, 18 subjects are recruited in the experiments. During the experiments, subjects wear the knee power assist wear and carry out a series of walking activities in an out-of-lab scenario. The results show that the average recognition accuracy of three walking patterns reaches 98.2%, and the average recognition delay of all transitions is slightly less than one step. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
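The rule-based classification the abstract describes, mapping IMU-derived joint angles to walking patterns, can be illustrated with a toy decision rule over per-step angle features. The thresholds below are invented placeholders, not the paper's tuned rules:

```python
def classify_step(thigh_range_deg, knee_max_deg):
    """Toy rule-based walking-pattern classifier over per-step IMU angle features.

    Thresholds are illustrative placeholders, not the paper's calibrated values.
    """
    if knee_max_deg > 90:
        # large knee flexion occurs on stairs; thigh swing separates ascent from descent
        return "stair_ascent" if thigh_range_deg > 45 else "stair_descent"
    return "level_walking"

label = classify_step(thigh_range_deg=50, knee_max_deg=110)  # -> "stair_ascent"
```

A real system would derive the two inputs from the rotation angles of the thighs and shanks and the knee joint angles over one gait cycle; the appeal of a rule-based classifier here is that a decision costs only a couple of comparisons per step.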
29. Video-Based Pedestrian Detection on Mobile Phones with the Cascade Classifiers
- Author
-
Shipova, Ksenia G., Savchenko, Andrey V., Kalyagin, Valery A., editor, Koldanov, Petr A., editor, and Pardalos, Panos M., editor
- Published
- 2016
- Full Text
- View/download PDF
30. Fast search real‐time face recognition based on DCT coefficients distribution.
- Author
-
Hsia, Shih‐Chang, Wang, Szu‐Hong, and Chen, Chia‐Jung
- Abstract
The authors propose an adaptive face recognition algorithm based on the discrete cosine transform (DCT) coefficients approach. To establish the database, the face images are pre-processed with colour transform, hair cutting, and background removal to eliminate non-face information. The recognition kernel applies weights from the DCT coefficient distribution over the whole-image transform to avoid position mismatch and reduce lighting effects. The key DCT coefficients are chosen from the training database by maximum variance. A fast search mode can reject 90% of weak candidates using only a few coefficients to speed up processing. Significant-coefficient weighting methods are used to enhance face features. Using only 50 coefficients per picture, the recognition rate reaches 95% on the ORL face database. For real-time recognition, camera images are processed with C programs on a Windows system; the recognition rate reaches 95% at about nine frames per second in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
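The key idea of selecting DCT coefficients by maximum variance over a training set can be sketched as follows. The data here are synthetic stand-ins for pre-processed face images; only the selection mechanism is illustrated:

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] /= np.sqrt(2)
    return M

def dct2(img):
    # separable 2-D DCT via two matrix products
    return dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T

rng = np.random.default_rng(2)
faces = rng.normal(size=(20, 16, 16))              # toy stand-ins for face images
coeffs = np.array([dct2(f).ravel() for f in faces])
top50 = np.argsort(coeffs.var(axis=0))[::-1][:50]  # 50 maximum-variance coefficients
signatures = coeffs[:, top50]                      # compact per-image signature
```

Keeping only the highest-variance coefficients is what lets the abstract's method represent each picture with just 50 numbers: coefficients that barely vary across the database carry little discriminative information.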
31. BPNN-Based Real-Time Recognition of Locomotion Modes for an Active Pelvis Orthosis with Different Assistive Strategies.
- Author
-
Gong, Cheng, Xu, Dongfang, Zhou, Zhihao, Vitiello, Nicola, and Wang, Qining
- Subjects
ROBOTIC exoskeletons ,PELVIS ,UNITS of measurement ,REAL-time control ,ARTIFICIAL knees - Abstract
Real-time human intent recognition is important for controlling lower-limb wearable robots. In this paper, to achieve continuous and precise recognition results on different terrains, we propose a real-time training and recognition method for six locomotion modes: standing, level ground walking, ramp ascending, ramp descending, stair ascending and stair descending. A locomotion recognition system with an embedded BPNN-based algorithm is designed for real-time recognition. A wearable powered orthosis integrated with this system and two inertial measurement units is used as the experimental setup to evaluate the performance of the designed method while providing hip assistance. Experiments including on-board training and real-time recognition are carried out on three able-bodied subjects. The overall recognition accuracies of the six locomotion modes based on subject-dependent models are 98.43% and 98.03%, respectively, with the wearable orthosis in two different assistance strategies. The time cost of delivering a recognition decision to the orthosis is about 0.9 ms. Experimental results show an effective and promising performance of the proposed method for real-time training and recognition in future control of lower-limb wearable robots assisting users on different terrains. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. Face recognition algorithm based on feature descriptor and weighted linear sparse representation.
- Author
-
Liao, Mengmeng and Gu, Xiaodong
- Abstract
Generally, commonly used sparse-based methods, such as the sparse representation classifier, achieve good results in face recognition. However, these methods have several problems. First, they treat every atom as equally important when representing query samples. This is not reasonable, because different atoms carry different amounts of information, so their importance should differ when they jointly represent a query sample. Second, these methods cannot meet real-time requirements on large datasets. In this study, the authors propose a fast extended sparse-weighted representation classifier (FESWRC) that accounts for the different importance of atoms, using the primal augmented Lagrangian method and principal component analysis. They also propose a distinctive feature descriptor, the logarithmic-weighted sum (LWS) feature descriptor. The combination of FESWRC and LWS for face recognition is called face recognition algorithm based on feature descriptor and weighted linear sparse representation (FDWLSR). Experimental results show that FDWLSR realises real-time recognition with recognition rates of 100.0, 100.0, 91.6, 93.4 and 87.4%, respectively, on the Yale, Olivetti Research Laboratory (ORL), faculdade de engenharia industrial (FEI), face recognition technology program (FERET) and labelled faces in the wild datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
33. Real‐time recovery and recognition of motion blurry QR code image based on fractional order deblurring method.
- Author
-
Yu, XiaoYuan and Xie, Wei
- Abstract
Most image deblurring methods incur high computational cost because multi-scale blind deconvolution is used to estimate the blur kernel. Furthermore, a moving quick response (QR) code image is a classical type of blurry image that requires real-time processing in practical applications. This study therefore proposes a new framework for real-time motion-blurred QR code image restoration based on a fractional-order deblurring method. The authors trade off computational cost against deblurred-image quality. First, a black frame is added around the traditional QR code to locate the code and reduce computational cost. Next, a new deblurring method based on fractional differential order is proposed to improve deblurred-image quality. Furthermore, an average grey-level method is presented to reconstruct standard QR code images. Comparisons with existing algorithms demonstrate that the proposed method achieves favourable deblurring quality at acceptable computational cost. Finally, the framework is validated on a practical platform: an actual conveyor belt system with a low-cost industrial camera. Experimental results indicate that the framework performs favourably in processing motion-blurred QR code images. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. SliceNet: A proficient model for real-time 3D shape-based recognition.
- Author
-
Chen, Xuzhan, Chen, Youping, Gupta, Kashish, Zhou, Jie, and Najjaran, Homayoun
- Subjects
- *
REAL-time computing , *THREE-dimensional imaging , *PATTERN recognition systems , *OBJECT recognition (Computer vision) , *RANDOM variables - Abstract
The field of 3D object recognition has been dominated by 2D view-based methods, mostly because of the lower accuracy and larger computational load of 3D shape-based methods. Recognition with a 3D shape yields appreciable advantages, e.g., use of depth information and independence from ambient lighting, but we are still far from a definitive solution for 3D shape-based object recognition. In this paper, first, a statistical method capable of modeling the input and output with random variables is used to investigate the reasons for the inferior performance of the 3D convolution operation. The analysis suggests that the excessive size of the kernel dramatically inflates the output variance of the 3D convolution operation and makes the output feature less discriminating. Then, based on this analysis and inspired by the underlying principle of 3D shapes, SliceNet is proposed to learn 3D shape features using anisotropic 3D convolution. Specifically, the proposed method learns features from the original 2D planar sketches comprising the 3D shape and has a significantly lower output variance. Experiments on ModelNet show that the recognition accuracy of the proposed SliceNet is comparable to well-established 2D view-based methods. Besides, SliceNet also has a significantly smaller model size, simpler architecture, and less training and inference time than 2D view-based and other 3D object recognition methods. An experiment with real-world data shows that a model trained on CAD files generalizes to real-world objects without any re-training or fine-tuning. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
35. Comparison of Results Obtained Using Brain-Computer Interface Classifiers in a Motor Imagery Recognition Task.
- Author
-
Oganesyan, V. V., Agapov, S. N., Bulanov, V. A., and Biryukova, E. V.
- Subjects
MOTOR imagery (Cognition) ,BRAIN-computer interfaces ,PATTERN perception - Abstract
This article compares a wide set of data classification methods used for creating brain-computer interfaces based on the recognition of EEG patterns during motor imagery of the hand. The GBM (gradient boosting models) classifier was found to work better than other classifiers using the dataset provided. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. RapidHARe: A computationally inexpensive method for real-time human activity recognition from wearable sensors.
- Author
-
Chereshnev, Roman and Kertész-Farkas, Attila
- Subjects
HUMAN activity recognition ,CENTRAL processing units ,ENERGY consumption ,ALGORITHMS ,ARTIFICIAL neural networks ,HIDDEN Markov models - Abstract
Recent human activity recognition (HAR) methods based on on-body inertial sensors have achieved increasing performance, but at the expense of longer CPU calculations and greater energy consumption. These complex models might therefore be unsuitable for real-time prediction in mobile systems, e.g., in elder-care support and long-term health-monitoring systems. Here, we present a new method called RapidHARe for real-time human activity recognition, based on modeling the distribution of raw data in a half-second context window using dynamic Bayesian networks. Our method employs neither dynamic-programming-based algorithms, which are notoriously slow for inference, nor feature extraction or selection. In our comparative tests, we show that RapidHARe is an extremely fast predictor: one and a half times faster than artificial neural network (ANN) methods, and more than eight times faster than recurrent neural networks (RNNs) and hidden Markov models (HMMs). In performance, RapidHARe achieves an F1 score of 94.27% and accuracy of 98.94%; compared to ANN, RNN, and HMM, it reduces the F1-score error rate by 45%, 65%, and 63%, and the accuracy error rate by 41%, 55%, and 62%, respectively. RapidHARe is therefore suitable for real-time recognition on mobile devices. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
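The half-second context window that RapidHARe models can be illustrated with a simple windowing helper; the 100 Hz sampling rate below is an assumed example, not a detail from the paper:

```python
import numpy as np

def context_windows(stream, rate_hz, seconds=0.5):
    # chop a raw sensor stream into non-overlapping context windows
    w = int(rate_hz * seconds)
    n = len(stream) // w
    return stream[: n * w].reshape(n, w)

acc = np.arange(250, dtype=float)        # 2.5 s of one 100 Hz accelerometer channel
W = context_windows(acc, rate_hz=100)    # five half-second windows of 50 samples each
```

Modeling the raw samples inside each window directly, rather than running feature extraction or a dynamic-programming decoder per window, is what keeps the per-prediction cost so low.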
37. Real-time cutting tool state recognition approach based on machining features in NC machining process of complex structural parts.
- Author
-
Liu, Changqing, Li, Yingguang, Hua, Jiaqi, Lu, Nanhong, and Mou, Wenping
- Subjects
- *
CUTTING tools , *MACHINING , *K-means clustering , *BATCH processing , *MANUFACTURING processes - Abstract
Cutting tool state recognition plays an important role in ensuring the quality and efficiency of NC machining of complex structural parts, and it is especially challenging for parts produced as single pieces or in small batches. To address this issue, this paper presents a real-time recognition approach for cutting tool state based on machining features. The sensitive parameters of the monitored cutting force signals for different machining features are automatically extracted and associated with machining features in real time. A K-Means clustering algorithm automatically classifies the cutting tool states based on machining features, where the sensitive parameters of the monitoring signals, together with the geometric and process information of the machining features, form the input vector of the K-Means clustering model. The experimental results show that the accuracy of the approach is above 95% and that it solves real-time recognition of cutting tool states for complex structural parts in single-piece and small-batch production. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
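A minimal version of clustering tool-state feature vectors with K-Means might look like this. The two-feature vectors and their values are synthetic; the paper's real input vectors also include the geometric and process information of machining features:

```python
import numpy as np

def kmeans(X, k, iters=50):
    # plain Lloyd's algorithm with deterministic, spread-out initial centres
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign every sample to its nearest centre
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)  # recompute centres
    return centers, labels

rng = np.random.default_rng(3)
# toy feature vectors: [mean cutting force, force variance] per machining feature
sharp = rng.normal([100.0, 5.0], [5.0, 1.0], size=(30, 2))
worn = rng.normal([300.0, 40.0], [10.0, 4.0], size=(30, 2))
centers, labels = kmeans(np.vstack([sharp, worn]), k=2)
```

Because clustering is unsupervised, the states fall out of the data without hand-labeled tool-wear examples, which suits the single-piece production setting the abstract targets.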
38. Real-Time Recognition of Human Daily Motion with Smartphone Sensor.
- Author
-
Qishou Xia, Xiaoling Yin, Juan He, and Feng Chen
- Subjects
SMARTPHONES ,DETECTORS ,ALGORITHMS ,HUMAN activity recognition ,HUMAN mechanics - Abstract
Aiming at the problems of existing smartphone-based motion state recognition, such as poor real-time performance, few movement categories and complex algorithms, this paper proposes a method that uses smartphone sensors to recognize six kinds of human movement states in real time. Firstly, daily human movement data is acquired through smartphone acceleration sensors and gravitational acceleration sensors, and the original data is corrected, smoothed, segmented and made direction-independent. Secondly, a footstep identification algorithm calculates the peaks and troughs of footsteps, from which time-domain feature vectors are extracted. Finally, the movement states are classified according to the feature vectors, and Hierarchical Support Vector Machines (HSVMs) are used to recognize daily movement states. Experimental results show this method can effectively reduce the computational load on smartphones and improve the real-time performance and accuracy of movement state recognition. The method also applies to similar behavior-recognition tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
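The footstep identification step, finding peaks in the processed acceleration signal, can be sketched with a simple local-maximum rule; the threshold value and the toy signal are illustrative only:

```python
def count_steps(signal, thresh):
    """Count step peaks: samples above thresh that exceed both neighbours."""
    return sum(1 for i in range(1, len(signal) - 1)
               if signal[i] > thresh
               and signal[i] > signal[i - 1]
               and signal[i] > signal[i + 1])

gait = [0, 2, 0, 0, 3, 1, 0, 2.5, 0]   # toy smoothed acceleration magnitude
steps = count_steps(gait, thresh=1.5)  # -> 3
```

In the paper's pipeline the detected peaks and troughs delimit the segments from which time-domain feature vectors are extracted; the cheap comparison-only rule is what keeps the smartphone's computational load low.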
39. A Real-Time Specific Weed Recognition System by Measuring Weeds Density through Mask Operation
- Author
-
Ahmed, Imran, Ahmad, Zaheer, Islam, Muhammad, Adnan, Awais, and Elleithy, Khaled, editor
- Published
- 2008
- Full Text
- View/download PDF
40. Improvement in Error Recognition of Real-Time Football Images by an Object-Augmented AI Model for Similar Objects
- Author
-
Junsu Han, Kiho Kang, and Jongwon Kim
- Subjects
Computer Networks and Communications ,Hardware and Architecture ,Control and Systems Engineering ,Signal Processing ,object recognition ,real-time recognition ,object augmentation ,improvement in error recognition ,structure of AI models ,Electrical and Electronic Engineering - Abstract
In this paper, we analyze the recognition errors of a general AI recognition model and propose a structure-modified, object-augmented AI recognition model. This model has object detection features that distinguish specific target objects in target areas where players with similar shapes and characteristics overlap in real-time football images. We implemented the AI recognition model by reinforcing the training dataset and augmenting the object classes. In addition, the recognition rate increased after modifying the model structure based on an analysis of the recognition errors of general AI recognition models. Recognition errors were decreased by adding HSV processing and learning of differentiated classes (overlapped player groups) to the general AI recognition model. We ran experiments on the same real-time football images to compare the error-reduction performance of the general AI model and the proposed AI model, and confirmed that the proposed model reduces errors. The results show that the proposed AI model structure, which recognizes similar objects in real time and in various environments, can be used to analyze football games.
- Published
- 2022
- Full Text
- View/download PDF
41. Real-time microreaction recognition system
- Author
-
Yi-Chang Wu, Yao-Cheng Liu, and Ru-Yi Huang
- Subjects
Microreaction ,Deep learning ,Real-time recognition ,Building and Construction ,Electrical and Electronic Engineering - Abstract
This study constructed a real-time microreaction recognition system that can give real-time assistance to investigators. Test results indicated that the number of frames per second (30 or 190); the angle of the camera, namely the front view of the interviewee or the left (+45°) or right (−45°) view; and the image resolution (480 or 680 p) did not have major effects on the system's recognition ability. However, when the camera was placed at a distance of 300 cm, recognition did not always succeed. Value changes were larger when the camera was placed at an elevation of 45° than when it was placed directly in front of the person being interrogated. Within a specific distance, the recognition results of the proposed system concurred with the six reaction case videos. In practice, only the distance and height of the camera must be adjusted in the real-time microreaction recognition system.
- Published
- 2023
- Full Text
- View/download PDF
42. Microexpression recognition robot
- Author
-
Yi-Chang Wu, Yao-Cheng Liu, Chieh Tsao, and Ru-Yi Huang
- Subjects
Microexpression ,Artificial intelligence ,Deception detection ,Real-time recognition ,Building and Construction ,Electrical and Electronic Engineering - Abstract
Following the development of big data, the use of microexpression technology has become increasingly popular. The application of microexpressions has expanded beyond medical treatment to include scientific case investigations. Because microexpressions are characterized by short duration and low intensity, training humans to recognize them yields poor results. Automatically recognizing microexpressions with machine learning techniques can provide more effective results and save time and resources. In the real world, to avoid judicial punishment, people lie and conceal the truth for a variety of reasons. In this study, our primary objective was to develop a system for real-time microexpression recognition. We used FaceReader as the retrieval system and integrated the data with an application programming interface to provide recognition results as objective references in real time. Using an experimental analysis, we also attempted to determine the optimal system configuration conditions. In conclusion, the use of artificial intelligence is expected to enhance the efficiency of investigations.
- Published
- 2023
- Full Text
- View/download PDF
43. Development of a software real-time system for recognition of multicopters
- Subjects
deep learning ,image ,real-time recognition ,computer vision ,image processing algorithms - Abstract
ÐÐ°Ð½Ð½Ð°Ñ ÑабоÑа поÑвÑÑена ÑазÑабоÑке пÑогÑаммной ÑиÑÑÐµÐ¼Ñ ÑеалÑного вÑемени Ð´Ð»Ñ ÑаÑÐ¿Ð¾Ð·Ð½Ð°Ð²Ð°Ð½Ð¸Ñ Ð¼ÑлÑÑикопÑеÑов. ÐÐ»Ñ ÑеÑÐµÐ½Ð¸Ñ Ð´Ð°Ð½Ð½Ð¾Ð¹ задаÑи бÑло ÑеÑено ÑоздаÑÑ ÑиÑÑÐµÐ¼Ñ ÐºÐ¾Ð¼Ð¿ÑÑÑеÑного зÑÐµÐ½Ð¸Ñ Ñ Ð¸ÑполÑзованием ÑÐµÑ Ð½Ð¾Ð»Ð¾Ð³Ð¸Ð¸ CUDA, CUDNN; библиоÑек OPENCV, OPENMP, darknet; нейÑоÑеÑи Ñ Ð°ÑÑ Ð¸ÑекÑÑÑой YOLO и обÑÑаÑÑей вÑбоÑки Ñ Ð±Ð¾Ð»ÐµÐµ 2000 изобÑажений. РÑиÑÑеме компÑÑÑеÑного зÑÐµÐ½Ð¸Ñ Ð²ÑполнÑÑÑÑÑ Ð¿ÑедобÑабоÑка изобÑÐ°Ð¶ÐµÐ½Ð¸Ñ - над Ð²Ñ Ð¾Ð´Ð½Ñм изобÑажением вÑполнÑÑÑÑÑ Ð¿ÑеобÑазование повÑÑÐµÐ½Ð¸Ñ ÐºÐ¾Ð½ÑÑаÑÑноÑÑи (вÑÑавнивание гиÑÑогÑамм). ÐоÑле ÑÑого нейÑÐ¾Ð½Ð½Ð°Ñ ÑеÑÑ Ð²ÑполнÑÐµÑ Ð¾Ð±ÑабоÑÐºÑ Ð¿Ð¾Ð»ÑÑивÑегоÑÑ Ð¸Ð·Ð¾Ð±ÑажениÑ, ÑезÑлÑÑаÑом ÑвлÑеÑÑÑ Ð¼Ð°ÑÑиÑа, ÑоÑÑоÑÑÐ°Ñ Ð¸Ð· кооÑÐ´Ð¸Ð½Ð°Ñ Ð¿ÑÑмоÑголÑников и веÑоÑÑноÑÑей Ð½Ð°Ñ Ð¾Ð¶Ð´ÐµÐ½Ð¸Ñ Ð² ÑооÑвеÑÑÑвÑÑÑей облаÑÑи иÑкомого обÑекÑа. Ðалее вÑполнÑеÑÑÑ Ð¿Ð¾ÑÑобÑабоÑка â оÑбÑаÑÑваÑÑÑÑ ÑезÑлÑÑаÑÑ Ñ Ð²ÐµÑоÑÑноÑÑÑÑ Ð½Ð¸Ð¶Ðµ поÑоговой, вÑводÑÑÑÑ ÑезÑлÑÑаÑÑ. ÐеÑÑ ÐºÐ¾Ð´ напиÑан на C, C++. Ðо Ñ Ð¾Ð´Ðµ ÑабоÑÑ ÑдалоÑÑ ÑоздаÑÑ ÑиÑÑÐµÐ¼Ñ ÐºÐ¾Ð¼Ð¿ÑÑÑеÑного зÑениÑ, коÑоÑÐ°Ñ Ð¿Ð¾Ñле 4000 иÑеÑаÑий на обÑÑаÑÑей вÑбоÑке (одна иÑеÑаÑÐ¸Ñ = 64 изобÑажениÑ) Ñмогла доÑÑигнÑÑÑ ÑоÑноÑÑи обнаÑÑÐ¶ÐµÐ½Ð¸Ñ Ð±Ð¾Ð»ÐµÐµ 80 %. РезÑлÑÑаÑÑ Ð´Ð°Ð½Ð½Ð¾Ð¹ ÑабоÑÑ, а именно аÑÑ Ð¸ÑекÑÑÑÑ Ð¸ Ñайл веÑов можно иÑполÑзоваÑÑ Ð´Ð»Ñ ÑÐ¾Ð·Ð´Ð°Ð½Ð¸Ñ ÑиÑÑем обнаÑÑÐ¶ÐµÐ½Ð¸Ñ Ð¼ÑлÑÑикопÑеÑов в ÑеалÑном вÑемени в виде декÑÑопного или мобилÑного пÑиложениÑ, облаÑного ÑеÑвиÑа или вÑÑÑаиваемой ÑиÑÑема, а Ñакже ÑÑи веÑа можно иÑполÑзоваÑÑ Ð´Ð»Ñ Ð´Ð°Ð»ÑнейÑего обÑÑÐµÐ½Ð¸Ñ ÑÑой нейÑоÑеÑи на дÑÑгой вÑбоÑке., This work is devoted to the development of a real-time software system for recognizing multicopters. To solve this problem, it was decided to create a computer vision system using CUDA, CUDNN technology; OPENCV, OPENMP, darknet libraries; neural networks with YOLO architecture and training dataset with more than 2000 images. 
In the computer vision system, image preprocessing is performed - a contrast enhancement transformation (histogram equalization) is performed over the input image. After that, the neural network processes the resulting image, the result is a matrix consisting of the coordinates of the rectangles and the probabilities of finding the desired object in the corresponding area. Next, post-processing is performed - results with a probability below the threshold are discarded, the results are output. All the code is written in C, C++. In the course of the work, it was possible to create a computer vision system that, after 4,000 iterations on a training sample (one iteration = 64 images), was able to achieve detection accuracy of more than 80%. The results of this work, namely the architecture and the weights file, can be used to create multicopter detection systems in real time in the form of a desktop or mobile application, cloud service or embedded system, and these weights can also be used for further training of this neural network on another dataset.
- Published
- 2022
- Full Text
- View/download PDF
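The preprocessing and post-processing stages the abstract describes (histogram equalization before the detector, then discarding detections below a probability threshold) can be sketched without the neural network itself. The tiny image and the detection dictionaries below are synthetic placeholders:

```python
import numpy as np

def equalize(img):
    # histogram equalization: map grey levels through the normalised CDF
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def filter_detections(dets, thresh=0.5):
    # post-processing: drop boxes whose probability is below the threshold
    return [d for d in dets if d["conf"] >= thresh]

img = np.array([[10, 10], [200, 200]], dtype=np.uint8)  # toy low-contrast image
eq = equalize(img)
dets = [{"box": (0, 0, 5, 5), "conf": 0.9},             # placeholder detector output
        {"box": (1, 1, 4, 4), "conf": 0.3}]
kept = filter_detections(dets)
```

Equalization stretches the usable grey-level range before detection, and the confidence filter is the final step that turns raw detector scores into reported multicopter boxes.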
44. Text/shape classifier for mobile applications with handwriting input.
- Author
-
Degtyarenko, Illya, Radyvonenko, Olga, Bokhan, Kostiantyn, and Khomenko, Viacheslav
- Abstract
The paper provides a practical solution to the real-time text/shape differentiation problem for online handwriting input. The proposed classification system comprises stroke grouping and stroke classification blocks. A new set of features with low computational complexity is derived. The method achieves 98.5% text/shape classification accuracy on a benchmark dataset. The proposed machine learning approach to stroke grouping improves classification robustness across different input styles. In contrast to threshold-based techniques, this grouping adaptation enhances the overall discriminating accuracy of the text/shape recognition system by 11.3%. The solution improves the system's response on touch-screen devices. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
45. Real-Time Drilling Strategy for Planetary Sampling: Method and Validation.
- Author
-
Junyue Tang, Zongquan Deng, Qiquan Quan, and Shengyuan Jiang
- Subjects
- *
ROVING vehicles (Astronautics) , *REAL-time control , *REGOLITH , *SUPPORT vector machines , *PATTERN recognition systems - Abstract
Drilling and coring, due to their efficient penetrating and cutting-removal characteristics, have been widely applied in planetary sampling and return missions. In most autonomous planetary drilling, little prior seismic survey data on the sampling site's geology is available, so sampling drills may encounter uncertain formations with significant differences in mechanical properties. Additionally, given limited orbital resources, sampling drills may become stuck under inappropriate drilling parameters. Hence, it is necessary to develop a real-time drilling strategy that can recognize current drilling conditions effectively and switch to appropriate drilling parameters accordingly. A concept of planetary regolith drillability based on the rate of penetration (RoP) is proposed to evaluate the difficulty of the drilling process. By classifying different drilling media into several drillability levels, the difficulty level of drilling conditions can be easily acquired. A pattern recognition method based on support vector machines (SVMs) is adopted to recognize drillability levels. Next, a set of suitable drilling parameters is tuned online to match the recognized drilling conditions. A multilayered-simulant drilling test indicates that this drilling strategy based on drillability recognition can identify different drilling conditions accurately and has good environmental adaptability. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
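Mapping the rate of penetration (RoP) to a drillability level can be illustrated with a simple binning rule. The RoP bounds and level encoding below are invented placeholders, not the paper's calibrated values (the paper itself uses an SVM rather than fixed thresholds):

```python
def drillability_level(rop_mm_per_min, bounds=(5.0, 15.0)):
    """Toy RoP-to-drillability binning: slower penetration means a harder stratum.

    Bounds are hypothetical; the paper recognizes levels with an SVM instead.
    """
    if rop_mm_per_min < bounds[0]:
        return 2   # hard stratum: high difficulty level
    if rop_mm_per_min < bounds[1]:
        return 1   # medium stratum
    return 0       # soft regolith: low difficulty level

level = drillability_level(2.0)  # -> 2 (hard)
```

Whatever classifier produces the level, the controller's job is the same: look up the drilling-parameter set matched to that level and switch to it before the drill stalls.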
46. Detection of fish frauds (basa catfish and sole fish) via iKnife rapid evaporative ionization mass spectrometry: An in situ and real-time analytical method.
- Author
-
Shen, Qing, Lu, Weibo, Cui, Yiwei, Ge, Lijun, Li, Yunyan, Wang, Shitong, Wang, Pingya, Zhao, Qiaoling, Wang, Haixing, and Chen, Jian
- Subjects
- *
MASS spectrometry , *FRAUD investigation , *CATFISHES , *MARINE resources , *FISH fillets - Abstract
The fish fillets of basa catfish (Pangasius bocourti) and sole fish (Cynoglossus semilaevis Gunther) have a similar appearance but different market values, leading to the possibility of adulteration for economic gain. In this study, a surgical knife, the iKnife, was coupled to rapid evaporative ionization mass spectrometry (REIMS) to develop a high-throughput technique for authenticating these two fish species in situ and in real time. The iKnife generated an informative aerosol, which was aspirated into the mass spectrometer to acquire species-specific fingerprinting profiles. The spectra were analyzed by multivariate statistics, and the ions contributing most to the significant difference between the lipidomic profiles of basa catfish and sole fish were screened out. For instance, ions at m/z 281.2470, 279.2319, 699.4970, etc. were abundant in basa catfish, while m/z 301.2164, 327.2341, 764.5284, etc. were significant in sole fish. Subsequently, the method was validated in terms of intra-day precision (RSD 5.06%–8.65%) and inter-day precision (RSD ≤9.17%). Finally, the established iKnife-REIMS method was successfully applied to blind samples from markets with an accuracy greater than 98%. The results indicated that the proposed iKnife-REIMS method can unambiguously authenticate these two easily adulterated species and provides a basis for the high-throughput identification of marine resources on the market. • A REIMS method was developed for real-time authentication of basa catfish and sole fish. • The samples were analyzed directly without any preparation process. • The differences in lipid fingerprints between the two fish species were studied. • A statistical model was built, and the marker ions for inter-species differences were revealed. • The two fish species can be authenticated in real time with accuracy over 98%. [ABSTRACT FROM AUTHOR]
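The classification step in this abstract can be illustrated with a linear discriminant on binned spectral intensities. This is only a sketch of the general idea, not the paper's statistical model: the four m/z bins echo marker ions named in the abstract, but every intensity value is synthetic, and scikit-learn's LDA stands in for the multivariate analysis actually used.

```python
# Sketch: separating two species from mass-spectral fingerprints. Spectra
# are reduced to intensity vectors over fixed m/z bins and fed to a linear
# classifier. All intensities are invented for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
bins = [281.2, 279.2, 301.2, 327.2]          # candidate marker-ion bins

# Synthetic training fingerprints: basa-like spectra strong in the first
# two bins, sole-like spectra strong in the last two.
basa = rng.normal([1.0, 0.9, 0.1, 0.1], 0.05, (30, 4))
sole = rng.normal([0.1, 0.1, 1.0, 0.9], 0.05, (30, 4))
X = np.vstack([basa, sole])
y = np.array([0] * 30 + [1] * 30)            # 0 = basa, 1 = sole

lda = LinearDiscriminantAnalysis().fit(X, y)

# A blind sample whose fingerprint resembles sole fish:
blind = [[0.12, 0.08, 0.95, 0.88]]
print("sole" if lda.predict(blind)[0] == 1 else "basa")  # expected: sole
```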
- Published
- 2022
- Full Text
- View/download PDF
47. Edge based Real-Time Weed Recognition System for Selective Herbicides.
- Author
-
Imran Ahmed, Awais Adnan, Muhammad Islam, and Salim Gul
- Subjects
REAL-time control ,COMPUTER vision ,WEED control ,ROBOTICS ,PATTERN recognition systems ,HERBICIDES ,ALGORITHMS - Abstract
The identification and classification of weeds are of major technical and economic importance in the agricultural industry. To automate these activities, a weed control system based on visual properties such as shape, color and texture is feasible. The goal of this paper is to build a real-time, machine-vision weed control system that can detect weed locations. To accomplish this objective, a real-time robotic system is developed that identifies and locates outdoor plants using machine vision and pattern recognition. An algorithm based on an edge-based weed classifier is developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed algorithm has been tested on weeds at various locations, and the tests have shown the algorithm to be very effective in weed identification. Furthermore, the results show very reliable performance on weeds under varying field conditions. Analysis of the results shows over 94% classification accuracy on 140 sample images (broad and narrow), with 70 samples from each category of weeds. [ABSTRACT FROM AUTHOR]
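The broad/narrow decision described above can be sketched with a simple edge-based feature. The paper does not specify its classifier's internals, so this is a plausible stand-in only: it uses the edge-to-area ratio of a binary plant mask (narrow grass blades have proportionally more edge pixels than broad leaves), with an invented threshold and synthetic masks.

```python
# Sketch of an edge-based broad/narrow leaf discriminator. Pure NumPy;
# the 0.5 threshold and both masks are illustrative, not from the paper.
import numpy as np

def edge_to_area_ratio(mask: np.ndarray) -> float:
    """Edge pixels (4-neighbour transitions) relative to plant area."""
    gy = np.abs(np.diff(mask.astype(int), axis=0)).sum()
    gx = np.abs(np.diff(mask.astype(int), axis=1)).sum()
    return (gx + gy) / max(mask.sum(), 1)

def classify(mask: np.ndarray, threshold: float = 0.5) -> str:
    return "narrow" if edge_to_area_ratio(mask) > threshold else "broad"

# Synthetic binary masks: a thin 2-pixel-wide blade vs. a 20x20 blob.
narrow = np.zeros((40, 40), bool); narrow[5:35, 19:21] = True
broad  = np.zeros((40, 40), bool); broad[10:30, 10:30] = True

print(classify(narrow), classify(broad))  # expected: narrow broad
```

In a selective-herbicide system, this per-region label would drive which nozzle (broad-leaf or narrow-leaf herbicide) is activated.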
- Published
- 2008
48. Real-time authentication of minced shrimp by rapid evaporative ionization mass spectrometry.
- Author
-
Lu, Weibo, Wang, Pingya, Ge, Lijun, Chen, Xi, Guo, Shunyuan, Zhao, Qiaoling, Zhu, Xiaofang, Cui, Yiwei, Zhang, Min, Chen, Kang, Ding, Yin-Yi, and Shen, Qing
- Subjects
- *
MASS spectrometry , *SHRIMPS , *MULTIVARIATE analysis , *LIPIDOMICS , *NUTRITIONAL value - Abstract
• A REIMS method was developed for real-time authentication of minced shrimp from seven species. • The differences among the seven shrimp samples were studied. • Multivariate statistical models revealed potential marker ions for inter-species differences. • The high accuracy of this real-time recognition model was verified by blind sample tests. Minced shrimp is a popular seafood due to its delicious flavor and nutritional value. However, the biological species of the raw material of minced shrimp cannot be distinguished by the naked eye after processing. Thus, an in situ and real-time minced-shrimp authentication method was established using iKnife rapid evaporative ionization mass spectrometry (REIMS)-based lipidomics. The samples were analyzed under ambient ionization without any tedious preparation steps. Seven economic shrimp species were tested, and their phenotypes were used to develop a real-time recognition model. A total of 19 fatty acids and 45 phospholipid molecular species were efficiently identified and analyzed by multivariate statistics. The results showed that the seven shrimp species were well distinguished, and the most contributing ions, at m/z 255.2, 279.2, 301.2, 327.2, 699.5, 742.5, etc., were revealed by variable importance in projection. The proposed iKnife REIMS showed excellent performance in minced shrimp authentication. [ABSTRACT FROM AUTHOR]
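A different, deliberately simpler way to picture real-time fingerprint recognition like that in this abstract is nearest-match spectral lookup: score the query fingerprint against a reference library by cosine similarity and report the best match. This is not the paper's multivariate model, and the species names, m/z bins, and intensities below are all invented for illustration.

```python
# Sketch: identifying a minced-shrimp sample by cosine similarity between
# its binned REIMS fingerprint and library reference fingerprints.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical reference fingerprints over shared m/z bins
# (e.g. 255.2, 279.2, 301.2, 699.5); values are made up.
library = {
    "whiteleg shrimp": [0.9, 0.4, 0.1, 0.6],
    "ridgetail prawn": [0.2, 0.8, 0.7, 0.1],
}

query = [0.85, 0.45, 0.15, 0.55]              # unknown minced-shrimp sample
species = max(library, key=lambda k: cosine(query, library[k]))
print(species)  # expected: whiteleg shrimp
```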
- Published
- 2022
- Full Text
- View/download PDF
49. Learning and Recognition of Hybrid Manipulation Motions in Variable Environments Using Probabilistic Flow Tubes.
- Author
-
Dong, Shuonan and Williams, Brian
- Subjects
LEARNING ability ,RECOGNITION (Psychology) ,MANIPULATIVE behavior ,MOTION control devices ,PRODUCT demonstrations ,ALGORITHMS - Abstract
For robots to work effectively with humans, they must learn and recognize activities that humans perform. We enable a robot to learn a library of activities from user demonstrations and use it to recognize an action performed by an operator in real time. Our contributions are threefold: (1) a novel probabilistic flow tube representation that can intuitively capture a wide range of motions and can be used to support compliant execution; (2) a method to identify the relevant features of a motion, and ensure that the learned representation preserves these features in new and unforeseen situations; (3) a fast incremental algorithm for recognizing user-performed motions using this representation. Our approach provides several capabilities beyond those of existing algorithms. First, we leverage temporal information to model motions that may exhibit non-Markovian characteristics. Second, our approach can identify parameters of a motion not explicitly specified by the user. Third, we model hybrid continuous and discrete motions in a unified representation that avoids abstracting out the continuous details of the data. Experimental results show a 49% improvement over prior art in recognition rate for varying environments, and a 24% improvement for a static environment, while maintaining average computing times for incremental recognition of less than half of human reaction time. We also demonstrate motion learning and recognition capabilities on real-world robot platforms. [ABSTRACT FROM AUTHOR]
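The core flow-tube idea in this abstract can be sketched in one dimension: store each learned motion as a per-time-step Gaussian envelope (mean trajectory plus standard deviation), then score an observed partial trajectory by its log-likelihood under each tube. This is a heavy simplification under stated assumptions: the real representation is multi-dimensional, handles temporal alignment, and covers hybrid discrete/continuous motions; the two tube shapes here are invented.

```python
# Sketch: incremental motion recognition against 1-D "flow tubes", each a
# per-step Gaussian envelope (mean, std) over a normalized time axis.
import numpy as np

def tube_loglik(obs, mean, std):
    """Sum of per-step Gaussian log-likelihoods of obs under one tube."""
    n = len(obs)
    m, s = mean[:n], std[:n]
    return float(np.sum(-0.5 * ((obs - m) / s) ** 2
                        - np.log(s * np.sqrt(2 * np.pi))))

t = np.linspace(0, 1, 50)
tubes = {
    "reach": (t, np.full_like(t, 0.05)),                  # straight reach
    "wave":  (np.sin(2 * np.pi * t), np.full_like(t, 0.05)),
}

# Incremental recognition: after each new sample, rescore every tube on
# the prefix observed so far and report the current best match.
obs = t[:20] + np.random.default_rng(2).normal(0, 0.01, 20)  # partial "reach"
best = max(tubes, key=lambda k: tube_loglik(obs, *tubes[k]))
print(best)  # expected: reach
```

Because only the newest sample's term changes the running sum, each incremental update is O(number of tubes), which is what makes sub-reaction-time recognition plausible.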
- Published
- 2012
- Full Text
- View/download PDF
50. An approach for real-time recognition of online Chinese handwritten sentences
- Author
-
Wang, Da-Han, Liu, Cheng-Lin, and Zhou, Xiang-Dong
- Subjects
- *
PALEOGRAPHY , *PATTERN recognition systems , *CHINESE language , *IMAGE segmentation , *SENTENCES (Grammar) , *DATABASES - Abstract
Abstract: With advances in handwriting-capture devices and the computing power of mobile computers, pen-based Chinese text input is moving from character-based input to sentence-based input. This paper proposes a real-time recognition approach for sentence-based input of Chinese handwriting. The main feature of the approach is a dynamically maintained segmentation–recognition candidate lattice that integrates multiple contexts, including character classification, linguistic context and geometric context. Whenever a new stroke is produced, dynamic text line segmentation and character over-segmentation are performed to locate the position of the stroke in the text lines and update the primitive segment sequence of the page. Candidate characters are then generated and recognized to assign candidate classes, and the linguistic and geometric contexts involving the newly generated candidate characters are computed. The candidate lattice is updated while the writing process continues. When the pen-lift time exceeds a threshold, the system searches the candidate lattice for the sentence recognition result. Since the computation of the multiple contexts consumes the majority of the computation and is performed during the writing process, the recognition result is obtained immediately after the writing of a sentence is finished. Experiments on a large database, CASIA-OLHWDB, of unconstrained online Chinese handwriting demonstrate the robustness and effectiveness of the proposed approach. [Copyright Elsevier]
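The lattice search described above can be sketched as dynamic programming over candidate characters spanning runs of primitive segments, combining a classifier score with a language-model score. This is a toy under stated assumptions: the candidate set, all probabilities, and the bigram model are invented, and the geometric context the paper also integrates is omitted.

```python
# Sketch: best-path search over a segmentation-recognition candidate
# lattice. Each candidate character covers segments i..j-1 and carries a
# (toy) classifier log-probability; a (toy) bigram supplies linguistic
# context. All scores are made up for illustration.
import math

MAX_SEG = 2
# cand[(i, j)] = [(character, classifier_log_prob), ...] for segments i..j-1
cand = {
    (0, 1): [("中", math.log(0.8)), ("口", math.log(0.2))],
    (1, 2): [("文", math.log(0.7))],
    (0, 2): [("史", math.log(0.3))],          # merging both segments
}

def bigram(prev, cur):                        # toy language model
    return math.log(0.9) if (prev, cur) == ("中", "文") else math.log(0.1)

def search(n_segments):
    # best[i] = (score, text, last_char) of the best path covering 0..i-1
    best = {0: (0.0, "", None)}
    for j in range(1, n_segments + 1):
        for i in range(max(0, j - MAX_SEG), j):
            if i not in best or (i, j) not in cand:
                continue
            s0, text, prev = best[i]
            for ch, lp in cand[(i, j)]:
                s = s0 + lp + (bigram(prev, ch) if prev is not None else 0.0)
                if j not in best or s > best[j][0]:
                    best[j] = (s, text + ch, ch)
    return best[n_segments][1]

print(search(2))  # expected: 中文
```

In the paper's setting this search runs only once, at pen lift, because the per-candidate context scores were already computed incrementally while the user was writing.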
- Published
- 2012
- Full Text
- View/download PDF