195 results on '"Cloud-edge Computing"'
Search Results
2. 6G wireless communication assisted security management using cloud edge computing.
- Author
- Kamruzzaman, M. M.
- Subjects
- *WIRELESS communications security, *EDGE computing, *SECURITY management, *CLOUD computing, *ARTIFICIAL intelligence, *DEEP learning, *COMPUTER network security
- Abstract
Security management is the process of identifying a company's assets (such as people, buildings, equipment, systems, and information assets) and then developing, documenting, and implementing policies and procedures to secure those assets. Meanwhile, artificial intelligence (AI) applications are flourishing thanks to advances in deep learning and numerous hardware architecture improvements based on cloud edge computing (CEC). Several issues are associated with the Internet of Things (IoT), including inadequate security measures, user ignorance, and active monitoring. Therefore, a 6G wireless communication-assisted security management using artificial intelligence (WC-SM-AI) scheme has been introduced to enhance security. The energy-efficient 6G real-time communication framework and the enhanced deep neural network security module are key components of this architecture. The first module optimizes network lifespan and spectral efficiency while reducing energy consumption and latency. The second module offers a more secure network connection while improving privacy, data integrity, and access control. For these reasons, this article discusses how AI can strengthen the security of 6G networks while addressing strategy problems and solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. Cloud-Edge Computing-Based ICICOS Framework for Industrial Automation and Artificial Intelligence: A Survey.
- Author
- Su, Weibin, Xu, Gang, He, Zhengfang, Machica, Ivy Kim, Quimno, Val, Du, Yi, and Kong, Yanchun
- Subjects
- *INDUSTRIAL robots, *ARTIFICIAL intelligence, *DEEP learning, *MACHINE learning, *PROGRAMMABLE controllers, *INTELLIGENT control systems
- Abstract
Industrial Automation (IA) and Artificial Intelligence (AI) need an integrated platform. Because the time required for training or inference tasks is uncertain, it is difficult to ensure the real-time performance of AI in the factory. Thus, in this paper, we carry out a detailed survey on a cloud-edge computing-based Industrial Cyber Intelligent Control Operating System (ICICOS) for industrial automation and artificial intelligence. ICICOS is built on the IEC 61499 programming method and is intended to replace the obsolete Programmable Logic Controller (PLC). It is widely known that the third industrial revolution produced an important device: the PLC. But the limited capability of the PLC suits only automation and cannot support AI, especially deep learning algorithms. Edge computing promotes the expansion of distributed architectures to the Internet of Things (IoT), but little progress has been made in the PLC domain. ICICOS therefore focuses on virtualization for IA and AI, so we introduce it in this paper and give the specific details. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Efficient federated learning for fault diagnosis in industrial cloud-edge computing
- Author
- Kai Wang, Peng Zeng, Qizhao Wang, Hong Wang, and Qing Li
- Subjects
Numerical Analysis, business.industry, Computer science, Deep learning, Distributed computing, Cloud computing, Fault (power engineering), Computer Science Applications, Theoretical Computer Science, Computational Mathematics, Resource (project management), Computational Theory and Mathematics, Asynchronous communication, Synchronization (computer science), Enhanced Data Rates for GSM Evolution, Artificial intelligence, business, Software, Edge computing
- Abstract
Federated learning is a deep learning optimization method that can prevent user privacy leakage, and it has clear value for industrial equipment fault diagnosis. However, edge nodes in industrial scenarios are resource-constrained, and it is challenging to meet the computational and communication demands of federated training. The heterogeneity and autonomy of edge nodes also reduce the efficiency of synchronous optimization. This paper proposes an efficient asynchronous federated learning method to solve this problem. The method allows edge nodes to select part of the model from the cloud for asynchronous updates based on their local data distribution, thereby reducing computation and communication and improving the efficiency of federated learning. Compared with original federated learning, this method can reduce resource requirements at the edge, reduce communication, and improve training speed in heterogeneous edge environments. This paper uses a heterogeneous edge computing environment composed of multiple computing platforms to verify the effectiveness of the proposed method.
- Published
- 2021
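For orientation, the following is a minimal, illustrative sketch of the core idea described in this abstract: edge nodes pull only part of the cloud model, train locally, and the cloud merges updates asynchronously. It is not the authors' implementation; the toy model, the `select_layers` heuristic, and the `staleness_decay` weighting are all assumptions for demonstration.

```python
# Minimal sketch of asynchronous federated aggregation with partial model
# updates: each edge node pulls only a subset of the cloud model's layers,
# trains locally, and the cloud merges updates as they arrive (no
# synchronization barrier). All names here are illustrative.
import numpy as np

cloud_model = {"conv1": np.zeros((8, 8)), "conv2": np.zeros((8, 8)),
               "head": np.zeros((8, 2))}
model_version = 0

def select_layers(local_data_size):
    """Edge-side heuristic: nodes with little data update fewer layers."""
    return ["head"] if local_data_size < 100 else ["conv2", "head"]

def local_update(layers, lr=0.1):
    """Stand-in for local training: returns pseudo-gradients per layer."""
    return {name: lr * np.random.randn(*cloud_model[name].shape)
            for name in layers}

def async_merge(update, client_version, staleness_decay=0.5):
    """Cloud-side merge: stale updates are down-weighted, not rejected."""
    global model_version
    staleness = model_version - client_version
    weight = staleness_decay ** staleness
    for name, grad in update.items():
        cloud_model[name] -= weight * grad
    model_version += 1

# Three edge nodes with different data sizes push updates asynchronously.
for data_size in (50, 500, 200):
    version_seen = model_version                  # version the node pulled
    layers = select_layers(data_size)             # partial model selection
    async_merge(local_update(layers), version_seen)
print("cloud model version:", model_version)
```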
5. Efficient federated learning for fault diagnosis in industrial cloud-edge computing.
- Author
- Wang, Qizhao, Li, Qing, Wang, Kai, Wang, Hong, and Zeng, Peng
- Subjects
- *FAULT diagnosis, *DEEP learning, *PROBLEM solving, *HETEROGENEOUS computing, *EDGE computing, *ASYNCHRONOUS learning
- Abstract
Federated learning is a deep learning optimization method that can prevent user privacy leakage, and it has clear value for industrial equipment fault diagnosis. However, edge nodes in industrial scenarios are resource-constrained, and it is challenging to meet the computational and communication demands of federated training. The heterogeneity and autonomy of edge nodes also reduce the efficiency of synchronous optimization. This paper proposes an efficient asynchronous federated learning method to solve this problem. The method allows edge nodes to select part of the model from the cloud for asynchronous updates based on their local data distribution, thereby reducing computation and communication and improving the efficiency of federated learning. Compared with original federated learning, this method can reduce resource requirements at the edge, reduce communication, and improve training speed in heterogeneous edge environments. This paper uses a heterogeneous edge computing environment composed of multiple computing platforms to verify the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. AVEC: Accelerator Virtualization in Cloud-Edge Computing for Deep Learning Libraries
- Author
- Carlos Reaño, Blesson Varghese, and J. Kennedy
- Subjects
Networking and Internet Architecture (cs.NI), FOS: Computer and information sciences, Speedup, Computer science, business.industry, Deep learning, Cloud computing, computer.software_genre, Virtualization, Computer Science - Networking and Internet Architecture, CUDA, Computer Science - Distributed, Parallel, and Cluster Computing, Operating system, Enhanced Data Rates for GSM Evolution, Artificial intelligence, Distributed, Parallel, and Cluster Computing (cs.DC), Graphics, business, computer, Edge computing
- Abstract
Edge computing offers the distinct advantage of harnessing compute capabilities on resources located at the edge of the network to run workloads of relatively weak user devices. This is achieved by offloading computationally intensive workloads, such as deep learning, from user devices to the edge. Using the edge reduces the overall communication latency of applications, as workloads can be processed closer to where data is generated on user devices rather than being sent to geographically distant clouds. Specialised hardware accelerators, such as Graphics Processing Units (GPUs), available in the cloud-edge network can enhance the performance of computationally intensive workloads that are offloaded from devices onto the edge. The underlying approach required to facilitate this is virtualization of GPUs. This paper therefore sets out to investigate the potential of GPU accelerator virtualization to improve the performance of deep learning workloads in a cloud-edge environment. The AVEC accelerator virtualization framework is proposed; it incurs minimal overheads and requires no source-code modification of the workload. AVEC intercepts local calls to a GPU on a device and forwards them to an edge resource seamlessly. The feasibility of AVEC is demonstrated on a real-world application, namely OpenPose, using the Caffe deep learning library. On a lab-based experimental test-bed, AVEC delivers up to 7.48x speedup despite communication overheads incurred due to data transfers.
- Published
- 2021
- Full Text
- View/download PDF
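To make the interception-and-forwarding idea concrete, here is a minimal sketch of the API-remoting pattern that frameworks like AVEC build on, with numpy standing in for a GPU library; the transport, the `RemoteAccelerator` class, and the operation names are invented for illustration and are not AVEC's API.

```python
# Minimal sketch of API remoting: calls issued on a device are intercepted
# by a local proxy and executed on a remote (edge) resource. Illustrative
# only; a real system would intercept CUDA calls, not Python methods.
import threading
import time
from multiprocessing.connection import Listener, Client
import numpy as np

ADDRESS = ("localhost", 6001)

def edge_server():
    """Edge side: receives (function_name, args), runs it, returns result."""
    ops = {"matmul": np.matmul, "relu": lambda x: np.maximum(x, 0)}
    with Listener(ADDRESS) as listener, listener.accept() as conn:
        while True:
            msg = conn.recv()
            if msg == "close":
                break
            name, args = msg
            conn.send(ops[name](*args))      # execute remotely on the edge

class RemoteAccelerator:
    """Device side: looks like a local library, forwards every call."""
    def __init__(self, address):
        self.conn = Client(address)
    def __getattr__(self, name):
        def call(*args):
            self.conn.send((name, args))     # intercept + forward
            return self.conn.recv()
        return call
    def close(self):
        self.conn.send("close")
        self.conn.close()

threading.Thread(target=edge_server, daemon=True).start()
time.sleep(0.5)                              # let the listener bind first
gpu = RemoteAccelerator(ADDRESS)             # drop-in local handle
y = gpu.relu(gpu.matmul(np.ones((2, 3)), np.ones((3, 2))) - 4)
print(y)                                     # computed on the "edge"
gpu.close()
```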
7. Intelligent digital-twin prediction and reverse control system architecture for thermal errors enabled by deep learning and cloud-edge computing.
- Author
- Liu, Jialan, Ma, Chi, Gui, Hongquan, and Wang, Shilong
- Subjects
- *DEEP learning, *MACHINE tools, *ERROR functions, *PREDICTION models, *FORECASTING, *INDEPENDENT variables
- Abstract
Heat generation is significant in the machining process, leading to thermal errors that ultimately reduce the geometric precision of machined parts. The precision machine tool is therefore a key factor in determining the geometric precision of complex parts. In recent years, error control methods have been applied, but they fail to reduce thermal errors because their low execution efficiency prevents them from effectively processing large volumes of data. To solve these issues, a new intelligent digital-twin prediction and reverse control system is designed for thermally induced errors based on the user-edge-cloud architecture to improve execution efficiency. The data-driven error modeling method is augmented by error mechanism-based modeling to express the thermal error as a function of the temperature, armature current, rotational speed, and ambient temperature as independent variables, and the long-term memorizing behavior of thermal errors is then demonstrated. The error model is established based on an improved wavelet threshold denoising (IWTD) method and a long short-term memory (LSTM) network to describe this memorizing behavior, and the IWTD-LSTM error prediction model is embedded into the digital-twin system. The digital-twin system and the IWTD-LSTM model were verified on a precision machine tool. With the implementation of the digital-twin system, the thermal error and the volume of transferred data are reduced by 88.72% and 56.36%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
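The following is a small, hedged sketch of the LSTM regression component described above, mapping sequences of temperature, armature current, rotational speed, and ambient temperature to a thermal-error value; the synthetic data, network sizes, and training loop are illustrative, and the paper's wavelet-denoising (IWTD) stage is not reproduced.

```python
# Minimal sketch of an LSTM regressor for thermal-error prediction.
# Inputs per time step: [temperature, armature current, rotational speed,
# ambient temperature]; output: one thermal-error value. Illustrative only.
import torch
import torch.nn as nn

class ThermalErrorLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # predicted thermal error

    def forward(self, x):                   # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # regress from the last time step

model = ThermalErrorLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in: 64 sequences of 20 sensor readings each.
x = torch.randn(64, 20, 4)
y = x[:, -5:, 0].mean(dim=1, keepdim=True)  # error loosely tracks temperature

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("final MSE:", loss.item())
```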
8. Improved MobileNetV2-SSDLite for automatic fabric defect detection system based on cloud-edge computing.
- Author
- Zhang, Jiaqi, Jing, Junfeng, Lu, Pengwen, and Song, Shaojun
- Subjects
- *TEXTILE sales & prices, *DATA warehousing
- Abstract
Fabric defect detection is an important step in ensuring the quality and price of textiles. To make an automatic fabric defect detection system usable at production sites, a cloud–edge collaborative fabric defect system is proposed. Firstly, real-time defect detection is performed on the edge device, where the accuracy of small-defect detection is ensured by an improved MobileNetV2-SSDLite. A channel attention mechanism is introduced into the network to highlight defect features and suppress background noise features. The loss function is redefined with Focal Loss to overcome the imbalance between defect and background candidate boxes. Then, detection-result storage and model updates are carried out in the cloud. Experiments show that the accuracy of the system is improved while maintaining a fast detection speed; in particular, the accuracy on the Camouflage dataset with small defects increases by 10.03% and the detection speed reaches 14.19 FPS on an NVIDIA Jetson Nano. • In the network improvement, the channel attention mechanism extracts more refined defect features under complex background disturbance, improving small-defect detection accuracy, and Focal Loss solves the imbalance between defect and background default boxes during training. • A lightweight network for defect detection in resource-constrained scenarios is proposed. Detection accuracy is improved for small defects and complex-background datasets. Our method better balances accuracy and inference time, providing a new option for embedded platforms. • A cloud–edge collaborative fabric defect detection system is proposed: combining edge devices and the cloud, the detection model is deployed on the edge device for real-time defect detection, with data storage and model updates in the cloud. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
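Two ingredients named in this abstract, channel attention and Focal Loss, can be sketched compactly. The snippet below shows one common squeeze-and-excitation form of channel attention (the paper's exact variant may differ) and the standard binary focal loss; the shapes and the alpha/gamma values are illustrative.

```python
# Minimal sketches of channel attention and focal loss for an imbalanced
# defect-vs-background detection setting. Illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Reweight feature channels: emphasize defect cues, damp background."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global avg pool
        return x * w[:, :, None, None]           # excite: per-channel scale

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Down-weights easy (mostly background) examples; focuses on hard ones."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

feat = torch.randn(2, 16, 32, 32)
print(ChannelAttention(16)(feat).shape)          # torch.Size([2, 16, 32, 32])
logits = torch.randn(8, 100)                     # 100 anchor scores per image
targets = (torch.rand(8, 100) < 0.05).float()    # ~5% positives: imbalanced
print(focal_loss(logits, targets).item())
```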
9. A Novel Robotic-Vision-Based Defect Inspection System for Bracket Weldments in a Cloud–Edge Coordination Environment
- Author
- Li, Hao, Wang, Xiaocong, Liu, Yan, Liu, Gen, Zhai, Zhongshang, Yan, Xinyu, Wang, Haoqi, and Zhang, Yuyan
- Subjects
steel surface defects inspection, robotic-vision-based system, deep learning, cloud–edge computing
- Abstract
Arc-welding robots are widely used in the production of automotive bracket parts. The large amounts of fumes and toxic gases generated during arc welding can affect inspection results as well as cause health problems, and the product needs to be sent to an additional checkpoint for manual inspection. In this work, the framework of a robotic-vision-based defect inspection system was proposed and developed in a cloud–edge computing environment, which can drastically reduce the manual labor required for visual inspection, minimizing the risks associated with human error and accidents. Firstly, a passive vision sensor was installed on the end joint of the arc-welding robot, an imaging module was designed to capture bracket weldment images after the arc-welding process, and datasets with qualified images were created on the production line for deep-learning-based research on steel surface defects. To enhance detection precision, a redesigned lightweight inspection network was then employed, while fast computation was ensured through a cloud–edge computational framework. Finally, virtual simulation and Internet of Things technologies were adopted to develop the inspection and control software in order to monitor the whole process remotely. The experimental results demonstrate that the proposed approach can identify quality issues faster, achieving higher steel production efficiency and economic profits.
- Published
- 2023
- Full Text
- View/download PDF
10. AI for Online Customer Service: Intent Recognition and Slot Filling Based on Deep Learning Technology.
- Author
- Wu, Yirui, Mao, Wenqin, and Feng, Jun
- Subjects
- *DEEP learning, *ARTIFICIAL intelligence, *CUSTOMER services, *COMMUNICATION infrastructure, *EDGE computing, *INTELLIGENT networks
- Abstract
Cloud/edge computing and deep learning greatly improve the performance of semantic understanding systems: cloud/edge computing provides flexible, pervasive computation and storage capabilities to support varied applications, and deep learning models comprehend text inputs by consuming computing and storage resources. We therefore propose to implement an intelligent online customer service system with the power of both technologies. Essentially, the task of semantic understanding consists of two subtasks, i.e., intent recognition and slot filling. To prevent the error accumulation caused by modeling the two subtasks independently, we propose to jointly model both subtasks in an end-to-end neural network. Specifically, the proposed method first extracts distinctive features with a dual structure to take full advantage of the interactive and level information between the two subtasks. Afterwards, we introduce an attention scheme to enhance feature representation by involving sentence-level context information. With the support of cloud/edge computing infrastructure, we deploy the proposed network as an intelligent dialogue system for electrical customer service. In experiments, we test the proposed method and several comparative studies on the public ATIS dataset and our collected PSCF dataset. The experimental results prove the effectiveness of the proposed method, which obtains accurate and promising results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
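As a rough illustration of joint modeling, the sketch below shows the general shape of an end-to-end network with a shared encoder, a sentence-level intent head, and a token-level slot head trained with a summed loss; the paper's dual structure and attention scheme are not reproduced, and all sizes and labels are placeholders.

```python
# Minimal sketch of joint intent recognition and slot filling with a
# shared encoder and two heads. Illustrative assumptions throughout.
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    def __init__(self, vocab=1000, n_intents=5, n_slots=10, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * dim, n_intents)  # one per sentence
        self.slot_head = nn.Linear(2 * dim, n_slots)      # one per token

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.encoder(self.embed(tokens))
        intent_logits = self.intent_head(h.mean(dim=1))   # pooled sentence rep
        slot_logits = self.slot_head(h)                   # token-level tags
        return intent_logits, slot_logits

model = JointNLU()
tokens = torch.randint(0, 1000, (4, 12))       # 4 utterances, 12 tokens each
intent_logits, slot_logits = model(tokens)
# Joint training sums both losses so the subtasks share error signals,
# avoiding the error accumulation of two independently trained models.
loss = (nn.functional.cross_entropy(intent_logits, torch.randint(0, 5, (4,)))
        + nn.functional.cross_entropy(slot_logits.reshape(-1, 10),
                                      torch.randint(0, 10, (48,))))
loss.backward()
print(loss.item())
```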
11. An Efficient CNN to Realize Speckle Correlation Imaging Based on Cloud-Edge for Cyber-Physical-Social-System
- Author
- Tian Wang, Linli Xu, Jing Han, and Lianfa Bai
- Subjects
0209 industrial biotechnology, General Computer Science, Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, cloud-edge computing, Cloud computing, 02 engineering and technology, Iterative reconstruction, Speckle pattern, cyber-physical-social systems (CPSS), 020901 industrial engineering & automation, 0202 electrical engineering, electronic engineering, information engineering, General Materials Science, Computer vision, Image resolution, business.industry, Scattering, Deep learning, low-to-high resolution, General Engineering, Cyber-physical system, self back stacked efficient residual factorized network (SBS-ERFNet), 020201 artificial intelligence & image processing, Enhanced Data Rates for GSM Evolution, Artificial intelligence, lcsh:Electrical engineering. Electronics. Nuclear engineering, Random scattering medium, business, lcsh:TK1-9971
- Abstract
Imaging of complex samples through random scattering media is a very difficult challenge in the field of computational optics. With the application of deep learning methods to speckle correlation image reconstruction, many problems of imaging through strong scattering media, and of small perturbations of the media that reduce imaging performance, have been resolved. However, due to the randomness of the scattering medium, large and complex sample sets are required for training, and deep-learning-based methods always consume a great deal of energy on the Graphics Processing Unit (GPU). Cyber-Physical-Social Systems (CPSS), which integrate the cyber, physical, and social worlds, are a key technology for providing proactive and personalized services for humans. This paper proposes an efficient CNN for speckle image reconstruction based on cloud-edge computing for CPSS, which can achieve higher image resolution with fewer inputs. In this work, we design a self-back stacked Efficient Residual Factorized Network (SBS-ERFNet) to perform image reconstruction through a scattering medium. Different from traditional ERFNets, our framework includes two stages of training; we use this model to learn the speckle image in a low-to-high-resolution manner. The experiments show that even when a small set of samples is used for training, the test results can reach a high resolution.
- Published
- 2020
12. A Novel Robotic-Vision-Based Defect Inspection System for Bracket Weldments in a Cloud–Edge Coordination Environment.
- Author
- Li, Hao, Wang, Xiaocong, Liu, Yan, Liu, Gen, Zhai, Zhongshang, Yan, Xinyu, Wang, Haoqi, and Zhang, Yuyan
- Abstract
Arc-welding robots are widely used in the production of automotive bracket parts. The large amounts of fumes and toxic gases generated during arc welding can affect inspection results as well as cause health problems, and the product needs to be sent to an additional checkpoint for manual inspection. In this work, the framework of a robotic-vision-based defect inspection system was proposed and developed in a cloud–edge computing environment, which can drastically reduce the manual labor required for visual inspection, minimizing the risks associated with human error and accidents. Firstly, a passive vision sensor was installed on the end joint of the arc-welding robot, an imaging module was designed to capture bracket weldment images after the arc-welding process, and datasets with qualified images were created on the production line for deep-learning-based research on steel surface defects. To enhance detection precision, a redesigned lightweight inspection network was then employed, while fast computation was ensured through a cloud–edge computational framework. Finally, virtual simulation and Internet of Things technologies were adopted to develop the inspection and control software in order to monitor the whole process remotely. The experimental results demonstrate that the proposed approach can identify quality issues faster, achieving higher steel production efficiency and economic profits. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Internet-of-Things Edge Computing Systems for Streaming Video Analytics: Trails Behind and the Paths Ahead.
- Author
- Ravindran, Arun A.
- Subjects
INTERNET of things, EDGE computing, STREAMING video & television, ARTIFICIAL intelligence, BANDWIDTH allocation
- Abstract
The falling cost of IoT cameras, the advancement of AI-based computer vision algorithms, and powerful hardware accelerators for deep learning have enabled the widespread deployment of surveillance cameras with the ability to automatically analyze streaming video feeds to detect events of interest. While streaming video analytics is currently largely performed in the cloud, edge computing has emerged as a pivotal component due to its advantages of low latency, reduced bandwidth, and enhanced privacy. However, a distinct gap persists between state-of-the-art computer vision algorithms and the successful practical implementation of edge-based streaming video analytics systems. This paper presents a comprehensive review of more than 30 research papers published over the last 6 years on IoT edge streaming video analytics (IE-SVA) systems. The papers are analyzed across 17 distinct dimensions. Unlike prior reviews, we examine each system holistically, identifying their strengths and weaknesses in diverse implementations. Our findings suggest that certain critical topics necessary for the practical realization of IE-SVA systems are not sufficiently addressed in current research. Based on these observations, we propose research trajectories across short-, medium-, and long-term horizons. Additionally, we explore trending topics in other computing areas that can significantly impact the evolution of IE-SVA systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Analysis of cutting-edge technologies for enterprise information system and management.
- Author
- Gupta, Brij Bhooshan, Gaurav, Akshat, Panigrahi, Prabin Kumar, and Arya, Varsha
- Subjects
INFORMATION resources management, DIGITAL technology, EDGE computing
- Abstract
In the digital age, businesses collect vast amounts of data, but often lack the knowledge and tools to fully utilize it. This paper analyzes current information management practices across different domains and highlights the need for global standards and cutting-edge technologies such as AI, ML, DL, and cloud/edge computing. By examining trends in published research, the study proposes new research lines and provides a comprehensive overview of the field's development, definitions, and trends. The research contributes significantly to the subject of information management and offers a synthesized description of the field's state. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Cloud-Based Deep Learning for Co-Estimation of Battery State of Charge and State of Health.
- Author
- Shi, Dapai, Zhao, Jingyuan, Wang, Zhenghong, Zhao, Heng, Eze, Chika, Wang, Junbin, Lian, Yubo, and Burke, Andrew F.
- Subjects
DEEP learning, BATTERY storage plants, LITHIUM-ion batteries, ENERGY storage, NONLINEAR systems, OBSERVATIONAL learning
- Abstract
Rechargeable lithium-ion batteries are currently the most viable option for energy storage systems in electric vehicle (EV) applications due to their high specific energy, falling costs, and acceptable cycle life. However, accurately predicting the parameters of complex, nonlinear battery systems remains challenging, given diverse aging mechanisms, cell-to-cell variations, and dynamic operating conditions. The states and parameters of batteries are becoming increasingly important in ubiquitous application scenarios, yet our ability to predict cell performance under realistic conditions remains limited. To address the challenge of modelling and predicting the evolution of multiphysics and multiscale battery systems, this study proposes a cloud-based AI-enhanced framework. The framework aims to achieve practical success in the co-estimation of the state of charge (SOC) and state of health (SOH) during the system's operational lifetime. Self-supervised transformer neural networks offer new opportunities to learn representations of observational data with multiple levels of abstraction and attention mechanisms. Coupling the cloud-edge computing framework with the versatility of deep learning can leverage the predictive ability of exploiting long-range spatio-temporal dependencies across multiple scales. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. An EEG Signal Recognition Algorithm During Epileptic Seizure Based on Distributed Edge Computing.
- Author
- Shi Qiu, Keyang Cheng, Tao Zhou, Tahir, Rabia, and Liang Ting
- Subjects
EPILEPSY, EDGE computing, DISTRIBUTED computing, SIGNAL processing, DIAGNOSIS of epilepsy, SUDDEN death, ELECTROENCEPHALOGRAPHY
- Abstract
Epilepsy is a brain disease whose sudden unpredictability is the main cause of disability and even death. Thus, it is of great significance to identify electroencephalogram (EEG) signals during a seizure quickly and accurately. With the rise of cloud computing and edge computing, an interface between local detection and cloud recognition has been established, which promotes the development of portable EEG detection and diagnosis. We therefore construct a framework for identifying EEG signals during epileptic seizures based on cloud-edge computing. The EEG signals are obtained locally in real time, and a horizontal viewable model is established at the edge to enhance the internal correlation of the signals. A Takagi-Sugeno-Kang (TSK) fuzzy system is established to analyze the epileptic signals. In the cloud, clinical features and signal features are fused to build a deep learning framework. Through local signal acquisition, edge signal processing, and cloud signal recognition, the diagnosis of epilepsy is realized, which can provide a new idea for real-time diagnosis and feedback on EEG signals during epileptic seizures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. A Lightweight Framework for Human Activity Recognition on Wearable Devices.
- Author
- Coelho, Yves Luduvico, Santos, Francisco de Assis Souza dos, Frizera-Neto, Anselmo, and Bastos-Filho, Teodiano Freire
- Abstract
Human Activity Recognition (HAR) is the automatic detection and understanding of human motion behavior based on data extracted from video cameras, ambient sensors, or wearable sensors; it has recently attracted increased attention from both researchers and industry. However, running practical HAR systems on wearable devices imposes requirements such as the design and development of small, lightweight, powerful, and low-cost smart sensors. In this context, data must be continuously collected, and edge computing is a viable, energy-efficient solution that offers the real-time response and privacy that HAR applications require. Rather than sending data to the cloud, edge computing is a local process that minimizes data transmission time and responds with low latency. Recently, HAR system designers have adopted deep learning techniques, inspired by their outstanding performance in many application areas, and achieved relevant gains in activity recognition performance; however, these techniques have not been shown to be suitable for running on resource-constrained devices. Thus, designing energy-efficient deep learning models is critical for realizing efficient HAR in mobile applications. In this work, we present a lightweight framework for the deployment of low-power but accurate HAR systems on these devices. We also implement and run the system on a microcontroller and analyze computational cost and energy consumption, and how different system configurations and deep learning model complexities influence them. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Research on Network Security Protection Technology Based on P2AEDR in New Low-Voltage Control Scenarios for Power IoT and Other Blockchain-Based IoT Architectures.
- Author
- Miao, Weiwei, Zhao, Xinjian, Li, Nianzhe, Zhang, Song, Li, Qianmu, and Li, Xiaochao
- Subjects
COMPUTER network security, ACCESS control, ELECTRIC power distribution grids, TRUST, INTERNET of things, DEEP learning
- Abstract
In the construction of new power systems, traditional network security protection, based mainly on boundary protection, is a form of static defense and still relies largely on manual processing for vulnerability repair, threat response, and similar tasks. It is difficult for such defense to meet the security protection needs of large-scale distributed new energy, third-party aggregation platforms, and flexible interaction scenarios with power grid enterprise systems. It is therefore necessary to research dynamic security protection models for Power IoT and other Blockchain-based IoT architectures. This article proposes a comprehensive network security protection model, P2AEDR, based on the different interaction modes of cloud–edge and cloud–cloud interaction. Through continuous trust evaluation, dynamic access control, and other technologies, it strengthens the internal defense capabilities of power grid business, shifting from static protection as the core mode to a real-time intelligent perception and automated response mode, ultimately achieving the goal of dynamic defense and meeting the security protection needs of large-scale controlled terminal access and third-party aggregation platforms. Meanwhile, this article proposes a dynamic trust evaluation algorithm based on deep learning, which protects the secure access and use of various resources through a more refined learning approach based on the interaction information monitored in the system. Experimental verification of the dynamic trust evaluation algorithm shows that the proposed model has good trust evaluation performance. This research is therefore beneficial for trustworthy Power IoT and other Blockchain-based IoT architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Staining-Independent Malaria Parasite Detection and Life Stage Classification in Blood Smear Images.
- Author
- Xu, Tong, Theera-Umpon, Nipon, and Auephanwiriyakul, Sansanee
- Subjects
CONVOLUTIONAL neural networks, PLASMODIUM, PLASMODIUM vivax, DEEP learning, MALARIA, ACQUISITION of data
- Abstract
Malaria is a leading cause of morbidity and mortality in tropical and sub-tropical regions. This research proposes a malaria diagnosis system based on the You Only Look Once (YOLO) algorithm for malaria parasite detection and a convolutional neural network for malaria parasite life stage classification. Two public datasets are utilized: MBB and MP-IDB. The MBB dataset includes human blood smears infected with Plasmodium vivax (P. vivax), while the MP-IDB dataset comprises four species of malaria parasites: P. vivax, P. ovale, P. malariae, and P. falciparum. Each species has four distinct life stages: ring, trophozoite, schizont, and gametocyte. For the MBB dataset, detection and classification accuracies of 0.92 and 0.93, respectively, were achieved. For the MP-IDB dataset, the proposed algorithms yielded the following detection and classification accuracies: 0.84 and 0.94 for P. vivax; 0.82 and 0.93 for P. ovale; 0.79 and 0.93 for P. malariae; and 0.92 and 0.96 for P. falciparum. The detection results showed that models trained on P. vivax alone also provide good detection capabilities for other species of malaria parasites, and the proposed algorithms yielded good life stage classification performance. Future directions include collecting more data and exploring more sophisticated algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Instantaneous 2D extreme wind speed prediction using the novel Wind Gust Prediction Net based on purely convolutional neural mechanism.
- Author
- Zeguo Zhang and Jianchuan Yin
- Subjects
RECURRENT neural networks, WIND forecasting, DEEP learning, WIND speed, WIND turbines
- Abstract
Accurate prediction of spatial-temporal extreme wind gusts is vital for wind farm dynamic regulation, floating wind turbine deployment, and early warning. Deep learning approaches have been applied to wind prediction to alleviate the computational challenges of traditional numerical models. Yet most previous studies emphasized prediction accuracy using only location-specific datasets; such methodologies are site-specific and ignore the importance of spatial-temporal fidelity. Furthermore, the Recurrent Neural Network (RNN)-based approaches previously employed exhibit low efficiency in terms of model convergence and practical engineering use. This study proposes the Wind Gust Prediction Net (WGPNet), using residual learning with attention modulations to predict instantaneous spatial-temporal wind gusts in the West Pacific region, an area with great wind-energy potential. A public reanalysis dataset with very high resolution (0.25° x 0.25°) was employed to verify the proposed method under different criteria. The overall RMSE of the predicted gust fields obtained by the proposed method dropped to 0.18 m/s. Comprehensive discussion from both temporal and spatial perspectives reveals that the proposed model can offer accurate 2D wind gust prediction along the timeline (PCC = 0.98). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Corun: Concurrent Inference and Continuous Training at the Edge for Cost-Efficient AI-Based Mobile Image Sensing.
- Author
- Liu, Yu, Andhare, Anurag, and Kang, Kyoung-Don
- Subjects
IMAGE recognition (Computer vision), ARTIFICIAL intelligence, EDGE computing, MOBILE apps, SMARTWATCHES, DEEP learning
- Abstract
Intelligent mobile image sensing powered by deep learning analyzes images captured by cameras from mobile devices, such as smartphones or smartwatches. It supports numerous mobile applications, such as image classification, face recognition, and camera scene detection. Unfortunately, mobile devices often lack the resources necessary for deep learning, leading to increased inference latency and rapid battery consumption. Moreover, the inference accuracy may decline over time due to potential data drift. To address these issues, we introduce a new cost-efficient framework, called Corun, designed to simultaneously handle multiple inference queries and continual model retraining/fine-tuning of a pre-trained model on a single commodity GPU in an edge server to significantly improve the inference throughput, upholding the inference accuracy. The scheduling method of Corun undertakes offline profiling to find the maximum number of concurrent inferences that can be executed along with a retraining job on a single GPU without incurring an out-of-memory error or significantly increasing the latency. Our evaluation verifies the cost-effectiveness of Corun. The inference throughput provided by Corun scales with the number of concurrent inference queries. However, the latency of inference queries and the length of a retraining epoch increase at substantially lower rates. By concurrently processing multiple inference and retraining tasks on one GPU instead of using a separate GPU for each task, Corun could reduce the number of GPUs and cost required to deploy mobile image sensing applications based on deep learning at the edge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Computer-Aided Diagnosis Systems for Automatic Malaria Parasite Detection and Classification: A Systematic Review.
- Author
- Grignaffini, Flavia, Simeoni, Patrizio, Alisi, Anna, and Frezza, Fabrizio
- Subjects
MACHINE learning, COMPUTER-aided diagnosis, ARTIFICIAL intelligence, BLOOD parasites, PLASMODIUM, DEEP learning
- Abstract
Malaria is a disease that affects millions of people worldwide with a consistent mortality rate. The light microscope examination is the gold standard for detecting infection by malaria parasites. Still, it is limited by long timescales and requires a high level of expertise from pathologists. Early diagnosis of this disease is necessary to achieve timely and effective treatment, which avoids tragic consequences, thus leading to the development of computer-aided diagnosis systems based on artificial intelligence (AI) for the detection and classification of blood cells infected with the malaria parasite in blood smear images. Such systems involve an articulated pipeline, culminating in the use of machine learning and deep learning approaches, the main branches of AI. Here, we present a systematic literature review of recent research on the use of automated algorithms to identify and classify malaria parasites in blood smear images. Based on the PRISMA 2020 criteria, a search was conducted using several electronic databases including PubMed, Scopus, and arXiv by applying inclusion/exclusion filters. From the 606 initial records identified, 135 eligible studies were selected and analyzed. Many promising results were achieved, and some mobile and web applications were developed to address resource and expertise limitations in developing countries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Design and Enhancement of a Fog-Enabled Air Quality Monitoring and Prediction System: An Optimized Lightweight Deep Learning Model for a Smart Fog Environmental Gateway.
- Author
- Pazhanivel, Divya Bharathi, Velu, Anantha Narayanan, and Palaniappan, Bagavathi Sivakumar
- Subjects
AIR quality monitoring, AIR quality standards, DEEP learning, SMART cities, AIR quality
- Abstract
Effective air quality monitoring and forecasting are essential for safeguarding public health, protecting the environment, and promoting sustainable development in smart cities. Conventional systems are cloud-based, incur high costs, lack accurate Deep Learning (DL) models for multi-step forecasting, and fail to optimize DL models for fog nodes. To address these challenges, this paper proposes a Fog-enabled Air Quality Monitoring and Prediction (FAQMP) system integrating the Internet of Things (IoT), Fog Computing (FC), Low-Power Wide-Area Networks (LPWANs), and DL for improved accuracy and efficiency in monitoring and forecasting air quality levels. The three-layered FAQMP system includes a low-cost Air Quality Monitoring (AQM) node transmitting data via LoRa to the Fog Computing layer and then to the cloud layer for complex processing. The Smart Fog Environmental Gateway (SFEG) in the FC layer introduces efficient Fog Intelligence by employing an optimized lightweight DL-based Sequence-to-Sequence (Seq2Seq) Gated Recurrent Unit (GRU) attention model, enabling real-time processing, accurate forecasting, and timely warnings of dangerous AQI levels while optimizing fog resource usage. Initially, the Seq2Seq GRU attention model, validated for multi-step forecasting, outperformed state-of-the-art DL methods with an average RMSE of 5.5576, MAE of 3.4975, MAPE of 19.1991%, R² of 0.6926, and Theil's U1 of 0.1325. This model is then made lightweight and optimized using post-training quantization (PTQ), specifically dynamic range quantization, which reduced the model size to less than a quarter of the original and improved execution time by 81.53% while maintaining forecast accuracy. This optimization enables efficient deployment on resource-constrained fog nodes like the SFEG by balancing performance and computational efficiency, thereby enhancing the effectiveness of the FAQMP system through efficient Fog Intelligence. The FAQMP system, supported by the EnviroWeb application, provides real-time AQI updates, forecasts, and alerts, aiding the government in proactively addressing pollution concerns, maintaining air quality standards, and fostering a healthier and more sustainable environment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
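The post-training dynamic range quantization step named in this abstract corresponds to a standard TensorFlow Lite conversion. Below is a minimal sketch with a toy GRU forecaster standing in for the paper's Seq2Seq GRU attention model; the input shape, layer sizes, and file name are assumptions.

```python
# Minimal sketch of post-training dynamic range quantization with the
# TensorFlow Lite converter: weights are stored as int8, activations stay
# in float, and no representative dataset is required for this PTQ mode.
import tensorflow as tf

# Stand-in forecaster: 24 past steps of 5 pollutant features -> next AQI.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 5)),
    tf.keras.layers.GRU(32, unroll=True),   # unrolled so conversion sees plain ops
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # dynamic range PTQ
# Fallback kernels in case some RNN ops do not fuse on a given TF version.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

with open("faqmp_gru_dynamic_range.tflite", "wb") as f:
    f.write(tflite_model)
print("quantized model size: %.1f KiB" % (len(tflite_model) / 1024))
```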
24. Fabric Defect Detection in Real World Manufacturing Using Deep Learning.
- Author
- Nasim, Mariam, Mumtaz, Rafia, Ahmad, Muneer, and Ali, Arshad
- Subjects
OBJECT recognition (Computer vision), MANUFACTURING industries, PRICES, TEXTILES, PLAINS
- Abstract
Defect detection is very important for guaranteeing the quality and pricing of fabric. A considerable amount of fabric is discarded as waste because of defects, leading to substantial annual losses. While manual inspection has traditionally been the norm, adopting an automatic defect detection scheme based on a deep learning model offers a timely and efficient solution for assessing fabric quality. In real-time manufacturing scenarios, datasets lack high-quality, precisely positioned images. Moreover, both plain and printed fabrics are manufactured simultaneously in industry; therefore, a single model should be capable of detecting defects in all kinds of fabric. A robust deep learning model that detects defects in fabric datasets generated during production, with high accuracy and low computational cost, is therefore required. This study uses an indigenous dataset sourced directly from Chenab Textiles, providing authentic and diverse images representative of actual manufacturing conditions. The dataset is used to train a computationally faster and lighter state-of-the-art network, YOLOv8. For comparison, YOLOv5 and MobileNetV2-SSD FPN-Lite models are also trained on the same dataset. YOLOv8n achieved the highest performance, with a mAP of 84.8%, precision of 0.818, and recall of 0.839 across seven different defect classes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
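Training a YOLOv8 detector of the kind described here is typically done with the ultralytics package. The sketch below is illustrative only: the dataset YAML, image path, and class setup are placeholders, not the study's actual files.

```python
# Minimal sketch of training and running YOLOv8 on a custom fabric-defect
# dataset with the ultralytics package (pip install ultralytics).
from ultralytics import YOLO

# 'yolov8n.pt' is the nano variant: the lightest and fastest of the family,
# which is why it suits real-time factory inspection.
model = YOLO("yolov8n.pt")

# fabric_defects.yaml (placeholder) points at train/val image folders and
# the defect class names, e.g. the seven classes used in the study.
model.train(data="fabric_defects.yaml", epochs=100, imgsz=640)

metrics = model.val()                   # reports precision, recall, mAP
results = model.predict("sample_fabric.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, coordinates
```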
25. Performance Analysis of Deep Learning Model-Compression Techniques for Audio Classification on Edge Devices.
- Author
- Mou, Afsana and Milanova, Mariofanna
- Subjects
DEEP learning, IMAGE recognition (Computer vision), CONVOLUTIONAL neural networks, IMAGE registration, AUDITORY perception, MUSICAL analysis
- Abstract
Audio classification using deep learning models, which is essential for applications like voice assistants and music analysis, faces challenges when deployed on edge devices due to their limited computational resources and memory. Achieving a balance between performance, efficiency, and accuracy is a significant obstacle to optimizing these models for such constrained environments. In this investigation, we evaluate diverse deep learning architectures, including Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), for audio classification tasks on the ESC 50, UrbanSound8k, and Audio Set datasets. Our empirical findings indicate that Mel spectrograms outperform raw audio data, attributing this enhancement to their synergistic alignment with advanced image classification algorithms and their congruence with human auditory perception. To address the constraints of model size, we apply model-compression techniques, notably magnitude pruning, Taylor pruning, and 8-bit quantization. The research demonstrates that a hybrid pruned model achieves a commendable accuracy rate of 89 percent, which, although marginally lower than the 92 percent accuracy of the uncompressed CNN, strikingly illustrates an equilibrium between efficiency and performance. Subsequently, we deploy the optimized model on the Raspberry Pi 4 and NVIDIA Jetson Nano platforms for audio classification tasks. These findings highlight the significant potential of model-compression strategies in enabling effective deep learning applications on resource-limited devices, with minimal compromise on accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
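Of the compression techniques compared above, magnitude pruning is the most direct to sketch. The snippet below applies global L1 (magnitude) pruning with PyTorch's pruning utilities to a toy audio CNN; the architecture and the 50% sparsity level are illustrative, not the paper's configuration.

```python
# Minimal sketch of global magnitude pruning with torch.nn.utils.prune.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),   # e.g. Mel-spectrogram input
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                           # 10 audio classes
)

# Prune the 50% smallest-magnitude weights across all conv/linear layers.
targets = [(m, "weight") for m in model.modules()
           if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(targets, pruning_method=prune.L1Unstructured,
                          amount=0.5)

# Make the pruning permanent (fold the masks into the weight tensors).
for module, name in targets:
    prune.remove(module, name)

zeros = sum((m.weight == 0).sum().item() for m, _ in targets)
total = sum(m.weight.numel() for m, _ in targets)
print("sparsity: %.1f%%" % (100 * zeros / total))
```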
26. Recent Progress of Deep Learning Methods for Health Monitoring of Lithium-Ion Batteries.
- Author
- Madani, Seyed Saeed, Ziebert, Carlos, Vahdatkhah, Parisa, and Sadrnezhaad, Sayed Khatiboleslam
- Subjects
LITHIUM-ion batteries, REMAINING useful life, BATTERY management systems, ELECTRIC vehicle industry, ARTIFICIAL intelligence, DEEP learning
- Abstract
In recent years, the rapid evolution of transportation electrification has been propelled by the widespread adoption of lithium-ion batteries (LIBs) as the primary energy storage solution. The critical need to ensure the safe and efficient operation of these LIBs has positioned battery management systems (BMS) as pivotal components in this landscape. Among the various BMS functions, state and temperature monitoring emerge as paramount for intelligent LIB management. This review focuses on two key aspects of LIB health management: the accurate prediction of the state of health (SOH) and the estimation of remaining useful life (RUL). Achieving precise SOH predictions not only extends the lifespan of LIBs but also offers invaluable insights for optimizing battery usage. Additionally, accurate RUL estimation is essential for efficient battery management and state estimation, especially as the demand for electric vehicles continues to surge. The review highlights the significance of machine learning (ML) techniques in enhancing LIB state predictions while simultaneously reducing computational complexity. By delving into the current state of research in this field, the review aims to elucidate promising future avenues for leveraging ML in the context of LIBs. Notably, it underscores the increasing necessity for advanced RUL prediction techniques and their role in addressing the challenges associated with the burgeoning demand for electric vehicles. This comprehensive review identifies existing challenges and proposes a structured framework to overcome these obstacles, emphasizing the development of machine-learning applications tailored specifically for rechargeable LIBs. The integration of artificial intelligence (AI) technologies in this endeavor is pivotal, as researchers aspire to expedite advancements in battery performance and overcome present limitations associated with LIBs. In adopting a symmetrical approach, ML harmonizes with battery management, contributing significantly to the sustainable progress of transportation electrification. This study provides a concise overview of the literature, offering insights into the current state, future prospects, and challenges in utilizing ML techniques for lithium-ion battery health monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Collaborative Computation Offloading and Resource Management in Space–Air–Ground Integrated Networking: A Deep Reinforcement Learning Approach.
- Author
- Li, Feixiang, Qu, Kai, Liu, Mingzhe, Li, Ning, and Sun, Tian
- Subjects
REINFORCEMENT learning, DEEP reinforcement learning, RESOURCE management, DEEP learning, MOBILE computing, EDGE computing, NONLINEAR programming
- Abstract
With the increasing dissemination of the Internet of Things and 5G, mobile edge computing has become a novel scheme to assist terminal devices in executing computation tasks. To extend the coverage and computation capability of edge computing, a collaborative computation offloading and resource management architecture is proposed for space–air–ground integrated networking (SAGIN). In this manuscript, we establish a novel model considering the computation offloading cost constraints of the communication, computing, and caching models in SAGIN. Specifically, the joint optimization problem of collaborative computation offloading and resource management is modeled as a mixed-integer nonlinear programming problem. To address this issue, the paper proposes a computation offloading and resource allocation strategy based on deep reinforcement learning (DRL). Unlike traditional methods, DRL does not need a well-established formulation or prior information, and it is capable of revising its strategy adaptively according to the environment. The simulation results demonstrate that the proposed approach achieves optimal reward values for different numbers of terminal devices. Furthermore, the manuscript analyzes the proposed approach under varying parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
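A minimal DQN sketch conveys the offloading idea: the agent observes task and network state and learns which tier (local, edge, cloud) minimizes a latency-plus-energy cost. The state variables, cost model, and hyperparameters below are toy assumptions, not the paper's SAGIN formulation.

```python
# Minimal sketch of a DQN agent choosing where to offload a task.
import random
import torch
import torch.nn as nn

N_ACTIONS = 3                         # 0: local, 1: edge, 2: cloud
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def env_step(state, action):
    """Toy environment: reward = -(latency + energy) for the chosen tier."""
    task_size, link_quality, edge_load, battery = state.tolist()
    cost = [task_size / 2 + (1 - battery),            # slow local CPU
            task_size / 8 + edge_load,                # edge: fast but shared
            task_size / 16 + 2 * (1 - link_quality)]  # cloud: far away
    next_state = torch.rand(4)
    return next_state, -cost[action]

state, gamma, eps = torch.rand(4), 0.9, 0.1
for step in range(500):
    with torch.no_grad():
        greedy = q_net(state).argmax().item()
    action = random.randrange(N_ACTIONS) if random.random() < eps else greedy
    next_state, reward = env_step(state, action)
    # One-step temporal-difference target (no replay buffer, for brevity).
    target = reward + gamma * q_net(next_state).max().detach()
    loss = (q_net(state)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state
print("Q-values for a random state:", q_net(torch.rand(4)).tolist())
```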
28. Intrusion detection in cyber-physical system using rsa blockchain technology.
- Author
- Aljabri, Ahmed, Jemili, Farah, and Korbaa, Ouajdi
- Subjects
CYBER physical systems, ARTIFICIAL intelligence, BLOCKCHAINS, DIFFERENTIAL evolution, FEATURE selection, INTRUSION detection systems (Computer security), SMART devices, PUBLIC key cryptography
- Abstract
Connected cyber and physical elements exchange information through feedback in a cyber-physical system (CPS). Since the CPS oversees infrastructure, it is an integral part of modern living and is viewed as crucial to the development of cutting-edge smart devices. As the number of CPSs rises, so does the need for intrusion detection systems (IDS). The use of metaheuristic methods and Artificial Intelligence for feature selection and classification can offer solutions to some of the problems caused by the curse of dimensionality. In this research, we present a blockchain-based approach to data security in which blocks are generated using the RSA hashing method. Using Differential Evolution (DE), we first select the blockchain-secured data and then partition it into training and testing datasets for our model. The validated model then uses a deep belief network (DBN) to predict attacks. The purpose of the simulation is to evaluate the safety and precision of the classifications. The results show that the proposed strategy not only improves classification accuracy but also makes the data more resistant to attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. A mobile edge computing-focused transferable sensitive data identification method based on product quantization.
- Author
- Zhao, Xinjian, Yuan, Guoquan, Qiu, Shuhan, Xu, Chenwei, and Wei, Shanming
- Subjects
NATURAL language processing, DEEP learning, ELECTRIC utilities, MOBILE computing, EDGE computing, DATA protection, SECURITY systems
- Abstract
Sensitive data identification is the initial and crucial step in safeguarding sensitive information. With the ongoing evolution of the industrial internet, including its interconnectivity across sectors such as the electric power industry, the potential for sensitive data to traverse different domains increases, thereby altering the composition of sensitive data. Consequently, traditional approaches reliant on sensitive vocabularies struggle to address the challenges of identifying sensitive data in the era of information abundance. Drawing inspiration from advancements in natural language processing within deep learning, we propose a transferable Sensitive Data Identification method based on Product Quantization, named PQ-SDI. This approach harnesses both the composition and contextual cues within textual data to accurately pinpoint sensitive information in the context of Mobile Edge Computing (MEC). Notably, PQ-SDI is proficient not only within a single domain but also adapts to new domains after training on heterogeneous datasets. Moreover, the method identifies sensitive data autonomously throughout the entire process, eliminating the need for human upkeep of sensitive vocabularies. Extensive experimentation with the PQ-SDI model across four real-world datasets showed performance improvements ranging from 2% to 5% over the baseline model and an accuracy of up to 94.41%. In cross-domain trials, PQ-SDI achieved accuracy comparable to training and identification within the same domain. Furthermore, our experiments showed that the product quantization technique reduces the parameter size of the subsequent sensitive data identification phase by tens of times, which is particularly beneficial in the resource-constrained environments characteristic of MEC scenarios. This advantage not only bolsters sensitive data protection but also mitigates the risk of data leakage during transmission, enhancing overall security in MEC environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
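Product quantization itself, the building block PQ-SDI takes its name from, can be sketched briefly: split each vector into sub-vectors, cluster each subspace, and keep only per-subspace codebooks plus one byte per sub-vector. The dimensions and codebook sizes below are illustrative.

```python
# Minimal sketch of product quantization (PQ) for compressing embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64)).astype(np.float32)

M, K = 8, 256                 # 8 sub-vectors of dim 8, 256 centroids each
sub_dim = embeddings.shape[1] // M

codebooks, codes = [], []
for m in range(M):
    sub = embeddings[:, m * sub_dim:(m + 1) * sub_dim]
    km = KMeans(n_clusters=K, n_init=4, random_state=0).fit(sub)
    codebooks.append(km.cluster_centers_)
    codes.append(km.labels_.astype(np.uint8))  # 1 byte per sub-vector
codes = np.stack(codes, axis=1)                # (1000, 8): 8 bytes/vector

# Reconstruction: look up each sub-vector's centroid and concatenate.
recon = np.hstack([codebooks[m][codes[:, m]] for m in range(M)])
ratio = embeddings.nbytes / codes.nbytes       # codebook overhead ignored
print("compression ~%.0fx, MSE %.4f"
      % (ratio, ((embeddings - recon) ** 2).mean()))
```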
30. Combining graph neural network with deep reinforcement learning for resource allocation in computing force networks.
- Author
- Han, Xueying, Xie, Mingxi, Yu, Ke, Huang, Xiaohong, Du, Zongpeng, and Yao, Huijuan
- Published
- 2024
- Full Text
- View/download PDF
31. Fabric defect detection based on YOLOv5 with contextual information aggregation.
- Author
- Li Jing and Zheng Wenbin
- Subjects
FEATURE extraction, DEEP learning, LEAKAGE, PERCENTILES
- Published
- 2024
32. Intrusion Detection System Using Hybrid Machine Learning Classifiers and Optimum Feature Selection in Internet of Things (IoT).
- Author
- Ahmed, Naveed, Zahran, Bilal, Ayyoub, Belal, Alzoubaidi, Abdel Rahman, and Ngadi, Md Asri
- Subjects
INTERNET of things, MACHINE learning, DEEP learning, INTRUSION detection systems (Computer security), RANDOM forest algorithms
- Abstract
This paper harnesses the extensive IOT2023 dataset, encompassing both legitimate and malicious IoT data, to train and test machine learning classifiers. The performance of the Decision Tree classifier was striking: 99% precision, 99% recall, and 99% accuracy. In summary, the Decision Tree and Random Forest classifiers performed exceptionally well, with Random Forest having the upper hand, especially in precisely identifying benign cases, with 99% accuracy and precision. The K-Nearest Neighbors (KNN) classifier also performed well, with high precision and recall, giving 99% accuracy and 92% precision. The choice of the best classifier depends on the unique needs of an application and on the relevant performance metrics. Further refinement and fine-tuning may bring these classifiers to a higher level of performance. This work therefore not only furthers the state of the art in intrusion detection but also underlines something that has so far received too little attention in real-world applications: careful selection and refinement of models. Among the classifiers, Random Forest reached 99% accuracy, 99% precision, 93% recall, and an 84% F1-score, while KNN reached 99% accuracy, 92% precision, 82% recall, and an 84% F1-score, proving to be robust algorithms for providing a safe IoT ecosystem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
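The classifier comparison described here follows a standard scikit-learn pattern. The sketch below uses a synthetic stand-in for the IOT2023 features; in practice X and y would be the labeled benign/malicious flow records, and the reported metrics come from `classification_report`.

```python
# Minimal sketch of comparing Decision Tree, Random Forest, and KNN
# classifiers for intrusion detection with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.7, 0.3],
                           random_state=42)      # benign (0) vs malicious (1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

classifiers = {
    "DecisionTree": DecisionTreeClassifier(random_state=42),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name)
    # Precision/recall/F1 per class, the metrics reported in the paper.
    print(classification_report(y_test, clf.predict(X_test), digits=3))
```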
33. Developing an explainable hybrid deep learning model in digital transformation: an empirical study.
- Author
- Chiu, Ming-Chuan, Chiang, Yu-Hsiang, and Chiu, Jing-Er
- Subjects
DIGITAL transformation, DEEP learning, OBJECT recognition (Computer vision), DIGITAL learning, EMPIRICAL research, ALGORITHMS
- Abstract
Automated inspection is an important component of digital transformation. However, most deep learning models widely applied in automated inspection cannot objectively explain their results. This low interpretability makes it difficult to find the root cause of errors and to improve the accuracy of the model. This research proposes an integrative method that combines a deep learning object detection model, a clustering algorithm, and a similarity algorithm to achieve an explainable automated detection process. An electronic embroidery case study demonstrates the explainable method, which can be debugged quickly to enhance accuracy. The results show a testing accuracy of 97.58%, with inspection time reduced by 25.93%. The proposed method resolves several challenges in automated inspection and digital transformation. Academically, the automated detection deep learning model proposed in this study offers high accuracy along with good interpretability and debuggability. In practice, the process speeds up inspection while saving human effort. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
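The method above chains a detector, a clustering algorithm, and a similarity algorithm to make detections explainable. As a rough illustration of the clustering-plus-similarity stage only, the following sketch assigns a new defect embedding to its nearest cluster prototype; the embeddings, cluster count, and `explain` helper are hypothetical stand-ins for the paper's components.

```python
# Illustrative sketch: cluster defect embeddings, then explain a new sample
# by its similarity to the nearest cluster prototype.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

features = np.random.default_rng(1).normal(size=(200, 64))  # assumed detector embeddings
km = KMeans(n_clusters=5, n_init=10, random_state=1).fit(features)

def explain(vec):
    """Return the nearest defect cluster and its similarity score."""
    sims = cosine_similarity(vec.reshape(1, -1), km.cluster_centers_)[0]
    return int(sims.argmax()), float(sims.max())

cluster_id, score = explain(features[0])
print(f"closest defect prototype: {cluster_id}, similarity: {score:.2f}")
```

Reporting the nearest prototype and its similarity is one simple way to attach a "this looks like defect family k" explanation to each detection, which is the kind of debuggability the abstract emphasizes.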
34. Evaluation of an Indoor Location System Using Edge Computing and Machine Learning Algorithms.
- Author
-
Yauri, Ricardo, Espino, Rafael, and Castro, Antero
- Subjects
MACHINE learning ,EDGE computing ,DECISION trees ,LOCATION data ,BEDROOMS ,SUPPORT vector machines ,DEEP learning ,LIVING rooms - Abstract
The paper evaluates precise indoor localization techniques using edge computing technologies, which are important for services such as smart homes and healthcare. Despite its growing importance, indoor localization lacks precise, standardized methods, especially in complex environments. Technologies such as reconfigurable surfaces and deep learning models are being explored to overcome the challenges of indoor positioning. The study's main objective is to design a low-cost indoor location system using the ESP32 module and RSSI signals, integrated with embedded machine learning algorithms. The system determines the location of objects or people carrying a location device from the SSID signals of nearby access points. Three machine learning algorithms--random forest (RF), decision tree (DT), and support vector machine (SVM)--are evaluated on the detection of four locations (bathroom, kitchen, bedroom, and living room), covering the definition of system characteristics, data acquisition, the development of the classifiers, and their integration into the ESP32 module, which transmits location data wirelessly via the MQTT protocol. In the evaluation, the DT model stands out for its efficiency under limited resource conditions during real-time implementation, although it may face overfitting and resource challenges at the deployment stage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
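To make the classifier stage above concrete, here is a minimal sketch assuming RSSI readings (in dBm) from three access points and the four room labels from the study; the synthetic data, tree depth, and per-room signal centroids are invented for illustration, and the ESP32/MQTT deployment step is reduced to a comment.

```python
# Sketch of room classification from RSSI vectors with a decision tree.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rooms = ["bathroom", "kitchen", "bedroom", "living room"]
rng = np.random.default_rng(2)
# Placeholder training set: one RSSI reading per visible access point.
X = np.vstack([rng.normal(loc=center, scale=4.0, size=(50, 3))
               for center in ([-40, -70, -80], [-70, -45, -75],
                              [-80, -70, -42], [-60, -55, -60])])
y = np.repeat(np.arange(4), 50)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("predicted room:", rooms[int(clf.predict([[-42, -68, -79]])[0])])
# On-device, the fitted tree would be exported to C for the ESP32 and the
# predicted label published over MQTT (e.g., a topic like "home/location").
```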
35. A secure data interaction method based on edge computing.
- Author
-
Miao, Weiwei, Xia, Yuanyi, Zhang, Rui, Zhao, Xinjian, Li, Qianmu, Wang, Tao, and Meng, Shunmei
- Subjects
EDGE computing ,DEEP learning - Abstract
Deep learning has achieved outstanding success at the edge thanks to the advent of lightweight neural networks. However, a number of works show that these networks are vulnerable to adversarial examples, which brings security risks. Classical adversarial detection methods are designed for the white-box setting and perform weakly in black-box settings such as the edge scene. Inspired by the experimental observation that different models give, with high probability, different predictions for the same adversarial example, we propose a novel adversarial detection method called the Ensemble-model Adversarial Detection Method (EADM). EADM defends against prospective adversarial attacks on edge devices through cloud monitoring: an ensemble model is deployed in the cloud and gives the most probable label for each input copy received from the edge. A comparison experiment against baseline methods in the assumed edge scene demonstrates the effectiveness of EADM, which achieves a higher defense success rate and a lower false positive rate with an ensemble of five pretrained models. An additional ablation experiment explores the influence of different model combinations and of adversarially trained models. We also discuss the possibility of transferring our method to other fields, showing its transferability across domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
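The core idea above, flagging inputs on which an ensemble disagrees, can be sketched in a few lines. This toy version assumes each model is a callable returning a class label and uses an invented 0.8 agreement threshold; the five stand-in lambdas replace the paper's pretrained models.

```python
# Toy sketch of ensemble-disagreement detection in the spirit of EADM.
from collections import Counter

def eadm_check(models, x, min_agreement=0.8):
    """Return (majority_label, is_suspicious) for input x."""
    votes = [m(x) for m in models]
    label, count = Counter(votes).most_common(1)[0]
    # Low agreement across models suggests an adversarial example.
    return label, count / len(votes) < min_agreement

models = [lambda x: "cat", lambda x: "cat", lambda x: "dog",
          lambda x: "cat", lambda x: "airplane"]
print(eadm_check(models, object()))  # ('cat', True): only 3/5 agree
```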
36. Data sharing and exchanging with incentive and optimization: a survey.
- Author
-
Liu, Liyuan and Han, Meng
- Subjects
INCENTIVE (Psychology) ,MONETARY incentives ,BIG data - Abstract
As the landscape of big data evolves, the paradigm of data sharing and exchanging has gained paramount importance. Nonetheless, the transition to efficient data sharing and exchanging is laden with challenges. One of the principal challenges is incentivizing diverse users to partake in the data sharing and exchange process. Users, especially those in potentially competitive positions, often exhibit reluctance towards sharing or exchanging their data, particularly if they perceive the rewards as inadequate. Given this context, it is imperative to institute an incentive mechanism that is not only computationally efficient and secure but also provides both monetary and trust-based inducements. This study introduces a taxonomy of incentive-based data sharing and exchanging, structured around its lifecycle, and elucidates the challenges inherent in each phase. We classify incentive mechanisms into monetary and non-monetary categories, postulating that the concomitant use of both types of incentives is more effective for data sharing and exchanging applications. Subsequent sections provide an overview of extant literature pertinent to each phase of the data sharing and exchanging lifecycle. In conclusion, we underscore the prevailing challenges in this domain and advocate for intensified efforts to refine the design of incentive mechanisms in data sharing and exchanging. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Computer-Vision-Based Sensing Technologies for Livestock Body Dimension Measurement: A Survey.
- Author
-
Ma, Weihong, Sun, Yi, Qi, Xiangyu, Xue, Xianglong, Chang, Kaixuan, Xu, Zhankang, Li, Mingyu, Wang, Rong, Meng, Rui, and Li, Qifeng
- Subjects
DEEP learning ,LIVESTOCK ,COMPUTER engineering ,COMPUTER graphics ,IMAGE processing ,ECONOMIC indicators - Abstract
Livestock's live body dimensions are a pivotal indicator of economic output. Manual measurement is labor-intensive and time-consuming, and often elicits stress responses in the livestock. With the advancement of computer technology, techniques for measuring the live body dimensions of livestock have progressed rapidly, yielding significant research achievements. This paper presents a comprehensive review of recent advancements in livestock live body dimension measurement, emphasizing the crucial role of computer-vision-based sensors. The discussion covers three main aspects: sensing data acquisition, sensing data processing, and sensing data analysis. The common techniques, measurement procedures, and current research status of live body dimension measurement are introduced, along with a comparative analysis of their respective merits and drawbacks. Data acquisition is the initial phase of live body dimension measurement, in which sensors are employed as collection equipment to obtain information conducive to precise measurement. The acquired data then undergo processing, leveraging techniques such as 3D vision, computer graphics, image processing, and deep learning to calculate the measurements accurately. Lastly, this paper addresses the existing challenges of live body dimension measurement in the livestock industry, highlighting the potential contributions of computer-vision-based sensors, and predicts development trends in high-throughput live body dimension measurement techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. DenMerD: a feature enhanced approach to radar beam blockage correction with edge-cloud computing.
- Author
-
Liu, Qi, Sun, Jiawei, Zhang, Yonghong, and Liu, Xiaodong
- Subjects
RADAR ,PROCESS capability ,CLUTTER (Radar) ,EDGE computing ,MOBILE computing ,ELECTRONIC data processing ,DEEP learning ,CLOUD computing - Abstract
In the field of meteorology, the global radar network is indispensable for detecting weather phenomena and offering early-warning services. Nevertheless, radar data frequently exhibit anomalies, including gaps and clutter, arising from atmospheric refraction, equipment malfunctions, and other factors, resulting in diminished data quality. Traditional radar blockage correction methods, such as interpolating from adjacent radials to supplement missing data, often fail to exploit the potential patterns in massive radar data, because the sheer volume of data precludes a thorough analysis of its complex patterns and dependencies through simple interpolation or supplementation. Fortunately, edge computing possesses some data processing capability and the cloud center boasts substantial computational power, so together they can collaboratively offer timely computation and storage for correcting radar beam blockage. To this end, an edge-cloud collaborative deep learning model named DenMerD is proposed in this paper, consisting of a dense connection module and a merge distribution (MD) unit. Compared to existing models such as RC-FCN, DenseNet, and VGG, this model greatly improves key performance metrics, with a 30.7% improvement in Critical Success Index (CSI), a 30.1% improvement in Probability of Detection (POD), and a 3.1% improvement in False Alarm Rate (FAR). It also performs well on the Structural Similarity Index Measure (SSIM) compared to its counterparts. These findings underscore the efficacy of the design in improving feature propagation and beam blockage correction accuracy, and highlight the potential and value of mobile edge computing in processing large-scale meteorological data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
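CSI, POD, and FAR, the verification scores quoted above, are standard contingency-table metrics computed from hits, misses, and false alarms. A minimal sketch of their computation on binary radar-echo masks follows; the synthetic masks are illustrative only.

```python
# Contingency-table verification scores for binary "echo present" masks.
import numpy as np

def radar_scores(pred, truth):
    hits = np.sum(pred & truth)
    misses = np.sum(~pred & truth)
    false_alarms = np.sum(pred & ~truth)
    csi = hits / (hits + misses + false_alarms)   # Critical Success Index
    pod = hits / (hits + misses)                  # Probability of Detection
    far = false_alarms / (hits + false_alarms)    # False Alarm Rate
    return csi, pod, far

rng = np.random.default_rng(3)
truth = rng.random((64, 64)) > 0.5
pred = truth ^ (rng.random((64, 64)) > 0.9)       # corrupt ~10% of pixels
print("CSI=%.3f POD=%.3f FAR=%.3f" % radar_scores(pred, truth))
```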
39. Online-learning task scheduling with GNN-RL scheduler in collaborative edge computing.
- Author
-
Jian, Chengfeng, Pan, Zhuoyang, Bao, Lukun, and Zhang, Meiyu
- Subjects
DEEP learning ,EDGE computing ,DIGITAL twins ,GRAPH neural networks ,ONLINE algorithms ,SCHEDULING ,ONLINE education - Abstract
With the development of collaborative edge computing (CEC), the manufacturing market is gradually moving in large-scale, multi-scenario, and dynamic directions. Existing scheduling strategies based on machine learning or deep learning are only applicable to specific scenarios, making it difficult to meet the requirements of dynamic real-time scheduling across multiple scenarios. Digital twin technology provides a new solution for real-time scheduling in multiple scenarios. In this paper, a digital twin-oriented, multi-scene, real-time scheduler (GNN-RL) is proposed. The scheduler converts task sequences into node trees and sets up two learning layers. The first is an online-learning representation layer, which uses a GNN to learn the node features of embedded structures in real time so as to handle large instances without additional training. The second is an online-learning policy layer, which uses imitation learning to map to optimal scheduling policies adapted to multiple scenarios. Our approach is validated in several scenarios in 3D digital twin factories, including computation-intensive, delay-sensitive, and task-urgent scenarios. Since the proposed scheduler learns general features of the embedding graph rather than instance-specific features, it has good generality, generalization, and scalability, outperforming other scheduling rules and schedulers on various benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
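The two-layer design above, a GNN that embeds the task graph online plus a learned policy that picks the next task, can be caricatured in a few lines of NumPy. In this hypothetical sketch, mean-aggregation message passing stands in for the paper's GNN and a random linear scorer stands in for the imitation-learned policy.

```python
# Toy sketch: embed a task precedence graph, then score the ready tasks.
import numpy as np

def embed(adj, feats, hops=2):
    """Mean-aggregate neighbor features: a minimal GNN-style embedding."""
    h = feats
    deg = adj.sum(1, keepdims=True).clip(min=1)
    for _ in range(hops):
        h = np.tanh((adj @ h) / deg + h)
    return h

adj = np.array([[0, 1, 1, 0],    # task 0 precedes tasks 1 and 2
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
feats = np.random.default_rng(4).normal(size=(4, 8))   # per-task features
h = embed(adj, feats)
w = np.random.default_rng(5).normal(size=8)            # stand-in policy weights
ready = [0]                                            # tasks with no pending deps
print("schedule next task:", max(ready, key=lambda i: h[i] @ w))
```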
40. Towards a Unified Pandemic Management Architecture: Survey, Challenges, and Future Directions.
- Author
-
ROY, SATYAKI, GHOSH, NIRNAY, UPLAVIKAR, NITISH, and GHOSH, PREETAM
- Subjects
EDGE computing ,DEEP learning ,MEDICAL personnel ,PANDEMICS ,HEALTH Insurance Portability & Accountability Act ,COVID-19 pandemic ,CLINICAL decision support systems
- Published
- 2024
- Full Text
- View/download PDF
41. Text clustering based on pre-trained models and autoencoders.
- Author
-
Qiang Xu, Hao Gu, and ShengWei Ji
- Subjects
LANGUAGE models ,MANAGEMENT of medical records ,HEALTH care industry ,MEDICAL decision making ,INFORMATION retrieval ,TEXT messages - Abstract
Text clustering is the task of grouping text data based on similarity, and it holds particular importance in the medical field. In healthcare, medical data clustering is a highly active and effective research area. It not only provides strong support for making correct medical decisions from medical datasets but also aids patient record management and medical information retrieval. As the healthcare industry develops, a large amount of medical data is being generated, and traditional medical data clustering faces significant challenges. Many existing text clustering algorithms are based on the bag-of-words model, which suffers from high dimensionality, sparsity, and the neglect of word position and context. Pre-trained models are a deep learning-based approach that treats text as a sequence and thus accurately captures word positions and contextual information. Moreover, compared with traditional k-means and fuzzy c-means clustering, deep learning-based clustering algorithms handle high-dimensional, complex, and nonlinear data better. In particular, clustering algorithms based on autoencoders can jointly learn data representations and clustering information, effectively reducing noise interference and errors during the clustering process. This paper combines pre-trained language models with deep embedding clustering models. Experimental results demonstrate that our model performs exceptionally well on four public datasets, outperforming most existing text clustering algorithms, and can be applied to medical data clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
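A condensed sketch of the pipeline described above follows, assuming sentence embeddings are already available (e.g., from any pretrained language model, replaced here by random vectors): a small autoencoder compresses them before k-means. This mirrors the overall structure of the approach, not its exact architecture or training schedule.

```python
# Sketch: pretrained embeddings -> autoencoder compression -> k-means.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

emb = torch.tensor(np.random.default_rng(6).normal(size=(300, 384)),
                   dtype=torch.float32)              # stand-in LM embeddings

enc = nn.Sequential(nn.Linear(384, 64), nn.ReLU(), nn.Linear(64, 16))
dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 384))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

for _ in range(200):                                 # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(dec(enc(emb)), emb)
    loss.backward()
    opt.step()

codes = enc(emb).detach().numpy()                    # low-dimensional codes
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codes)
print("cluster sizes:", np.bincount(labels))
```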
42. AnomalySeg: Deep Learning-Based Fast Anomaly Segmentation Approach for Surface Defect Detection.
- Author
-
Song, Yongxian, Xia, Wenhao, Li, Yuanyuan, Li, Hao, Yuan, Minfeng, and Zhang, Qi
- Subjects
SURFACE defects ,FEEDFORWARD neural networks ,IMAGE segmentation - Abstract
Product quality inspection is a crucial element of industrial manufacturing, yet flaws such as blemishes and stains frequently emerge after a product is completed. Most research has relied on detection models and avoided segmentation networks because defect information is unevenly distributed. To overcome this challenge, this work presents a rapid segmentation-based technique for surface defect detection. The proposed model is based on a modified U-Net that introduces a hybrid residual module (SAFM) combining an improved spatial attention mechanism with a feedforward neural network; the module replaces every encoder downsampling layer except the first and is also applied in the decoder structure. Dilated convolutions are incorporated in the decoder to capture more spatial information about defect features and to mitigate the model's vanishing-gradient problem. An improved hybrid loss function combining Dice and focal loss is introduced to alleviate the problem of segmenting small defects. Comparative experiments against other segmentation-based inspection methods reveal that the proposed approach achieves a better Dice coefficient (DSC) than previous generic segmentation benchmarks on the KolektorSDD, KolektorSDD2, and RSDD datasets, with fewer parameters and FLOPs. Additionally, the network displays higher precision in recognizing the characteristics of minor flaws. This paper thus proposes a practical and effective technique for anomaly segmentation in surface defect identification, delivering considerable improvements over previous methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
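The hybrid loss mentioned above can be written compactly in PyTorch. The sketch below is one common way to combine a Dice term with a focal term for binary defect masks; the equal weighting and the alpha/gamma values are assumptions, not the paper's exact recipe.

```python
# Combined Dice + focal loss for binary segmentation masks.
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, alpha=0.25, gamma=2.0, w=0.5, eps=1e-6):
    prob = torch.sigmoid(logits)
    # Dice term: overlap between predicted and true defect pixels.
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    # Focal term: down-weights easy pixels, easing class imbalance.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)
    a_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (a_t * (1 - p_t) ** gamma * bce).mean()
    return w * dice + (1 - w) * focal

logits = torch.randn(2, 1, 32, 32)
target = (torch.rand(2, 1, 32, 32) > 0.95).float()   # sparse small defects
print(dice_focal_loss(logits, target).item())
```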
43. Beyond games: a systematic review of neural Monte Carlo tree search applications.
- Author
-
Kemmerling, Marco, Lütticke, Daniel, and Schmitt, Robert H.
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,TREES ,REINFORCEMENT learning ,GAMES - Abstract
The advent of AlphaGo and its successors marked the beginning of a new paradigm for playing games with artificial intelligence, achieved by combining Monte Carlo tree search, a planning procedure, with deep learning. While the impact on the domain of games has been undeniable, it is less clear how useful similar approaches are in applications beyond games and how they need to be adapted from the original methodology. We perform a systematic literature review of peer-reviewed articles detailing the application of neural Monte Carlo tree search methods in domains other than games. Our goal is to systematically assess how such methods are structured in practice and whether their success can be extended to other domains. We find applications in a variety of domains, many distinct ways of guiding the tree search using learned policy and value functions, and various training methods. Our review maps the current landscape of algorithms in the neural Monte Carlo tree search family as they are applied to practical problems, a first step toward a more principled way of designing such algorithms for specific problems and their requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
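Common to the methods surveyed above is guiding the tree search with learned policy and value functions, typically via a PUCT-style selection rule. A minimal sketch of that rule follows, keeping per-edge visit count N, total value W, and policy prior P; the c_puct constant and toy statistics are illustrative.

```python
# PUCT child selection, the core of neural MCTS tree descent.
import math

def select_child(children, c_puct=1.5):
    """children: list of dicts with keys N, W, P. Returns index to descend."""
    total_n = sum(c["N"] for c in children)
    def puct(c):
        q = c["W"] / c["N"] if c["N"] else 0.0           # mean value so far
        u = c_puct * c["P"] * math.sqrt(total_n + 1) / (1 + c["N"])
        return q + u                                     # exploit + explore
    return max(range(len(children)), key=lambda i: puct(children[i]))

kids = [{"N": 10, "W": 6.0, "P": 0.5},
        {"N": 2,  "W": 1.5, "P": 0.3},
        {"N": 0,  "W": 0.0, "P": 0.2}]
print("descend into child", select_child(kids))
```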
44. Large-scale Video Analytics with Cloud-Edge Collaborative Continuous Learning.
- Author
-
YA NAN, SHIQI JIANG, and MO LI
- Subjects
DEEP learning ,COLLABORATIVE learning ,STREAMING video & television ,VIDEO on demand ,RESOURCE allocation ,VIDEOS - Abstract
Deep learning-based video analytics demands high network bandwidth to ferry the large volume of data when deployed on the cloud. When incorporated at the edge, only lightweight deep neural network (DNN) models are affordable due to computational constraints. In this article, a cloud-edge collaborative architecture is proposed that combines edge-based inference with cloud-assisted continuous learning. Lightweight DNN models are maintained at the edge servers and continuously retrained with a more comprehensive model on the cloud to achieve high video analytics performance while reducing the amount of data transmitted between the edge servers and the cloud. The proposed design faces the challenge of constraints on both the computation resources at the edge servers and the network bandwidth of the edge-cloud links. An accuracy-gradient-based resource allocation algorithm is proposed to allocate the limited computation and network resources across different video streams to achieve the maximum overall performance. A prototype system is implemented, and experiment results demonstrate the effectiveness of our system, with up to a 28.6% absolute mean average precision gain compared with alternative designs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
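The accuracy-gradient allocation idea above reduces, in spirit, to a greedy rule: give each successive resource unit to the stream whose accuracy would improve most. This toy sketch assumes concave accuracy-versus-resource curves standing in for profiled DNN models; it is not the paper's algorithm, only the marginal-gain intuition behind it.

```python
# Greedy marginal-gain allocation of resource units across video streams.
import math

curves = [lambda r: 0.90 * (1 - math.exp(-0.5 * r)),   # accuracy vs. resources
          lambda r: 0.80 * (1 - math.exp(-0.9 * r)),
          lambda r: 0.95 * (1 - math.exp(-0.3 * r))]
alloc = [0, 0, 0]
for _ in range(12):                                    # 12 resource units total
    gains = [c(a + 1) - c(a) for c, a in zip(curves, alloc)]
    alloc[gains.index(max(gains))] += 1                # largest accuracy gradient
print("allocation:", alloc)
```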
45. A Survey of Next-generation Computing Technologies in Space-air-ground Integrated Networks.
- Author
-
ZHISHU SHEN, JIONG JIN, CHENG TAN, TAGAMI, ATSUSHI, SHANGGUANG WANG, QING LI, QIUSHI ZHENG, and JINGLING YUAN
- Subjects
EDGE computing ,DEEP learning ,ARTIFICIAL intelligence ,REAL-time computing ,INFORMATION technology ,SCIENCE conferences ,ARTIFICIAL neural networks
- Published
- 2024
- Full Text
- View/download PDF
46. Resource Management in Mobile Edge Computing: A Comprehensive Survey.
- Author
-
XIAOJIE ZHANG and DEBROY, SAPTARSHI
- Subjects
MOBILE computing ,EDGE computing ,RESOURCE management ,DEEP learning ,RESOURCE allocation ,INTERNET of things - Abstract
With the evolution of 5G and Internet of Things technologies, Mobile Edge Computing (MEC) has emerged as a major computing paradigm. Compared to cloud computing, MEC integrates network control, computing, and storage into customizable, fast, reliable, and secure distributed services that are closer to the user and the data site. Although a popular research topic, MEC resource management comes in many forms due to its emerging nature, and little consensus exists in the community. In this survey, we present a comprehensive review of existing research problems and relevant solutions in MEC resource management. We first describe the major problems in MEC resource allocation when user applications have diverse performance requirements. We discuss the unique challenges caused by the dynamic nature of the environments and use cases in which MEC is adopted, and we explore and categorize existing solutions that address them, particularly traditional optimization-based methods and deep learning-based approaches. In addition, we take a deeper dive into the most popular applications and use cases that adopt the MEC paradigm and into how MEC provides customized solutions for each of them, in particular video analytics applications. Finally, we outline the open research challenges and future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
47. Defect detection method for glass fiber tube yarn based on improved YOLOv5.
- Author
-
董振宇 and 景军锋
- Subjects
COMPUTER vision ,DEEP learning ,YARN ,TUBES
- Published
- 2023
48. DeepSafety: a deep neural network-based edge computing framework for detecting unsafe behaviors of construction workers.
- Author
-
Zhang, Ji, Liu, Chia-Chun, and Ying, Josh Jia-Ching
- Abstract
Recently, the development and application of artificial intelligence have received widespread research attention, and one important application is accident prevention. Since most accidents on construction sites are caused by construction workers' unsafe behaviors, unsafe behavior detection is desired. Unlike traditional detection models, which focus only on detection accuracy and ignore detection efficiency, a deep neural network-based edge computing framework is proposed for detecting unsafe behavior not only efficiently but also precisely. To address the efficiency issue, an object detection model and a posture estimation model are established on the edge device to extract features from the surveillance camera stream, and a time series classification model is developed on the server to detect unsafe behavior from the features extracted by the edge devices. Finally, a comprehensive experimental study is conducted on three datasets collected from three real construction sites. The results show that the proposed models achieve 87% accuracy with a latency of 1.5 s. The improvement rate of the proposed DeepSafety is higher than 25% in accuracy and 80% in latency. Accordingly, the proposed edge computing-based framework delivers excellent performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
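The split pipeline above, feature extraction at the edge and time-series classification at the server, can be outlined briefly. In this hypothetical sketch, a placeholder function stands in for the detector and pose estimator, a random forest over flattened 30-frame windows stands in for the paper's time series classifier, and all data are synthetic.

```python
# Skeletal sketch of the edge/server split for unsafe-behavior detection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(frame):
    """Stand-in for detector + pose estimator: 17 keypoint (x, y) pairs."""
    return np.random.default_rng(abs(hash(frame)) % 2**32).random(34)

WINDOW = 30  # frames per classification window (~1 s of video)
X = np.stack([np.concatenate([edge_features(f"clip{i}_frame{t}")
                              for t in range(WINDOW)]) for i in range(100)])
y = np.random.default_rng(7).integers(0, 2, size=100)  # 1 = unsafe behavior

# Server side: classify each window of edge-extracted pose features.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:80], y[:80])
print("held-out accuracy:", clf.score(X[80:], y[80:]))
```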
49. Collaborative Cloud-V.Edge System for Predicting Traffic Accident Risk Using Machine Learning Based IOV.
- Author
-
Djazia, Zeroual, Kazar, Okba, Harous, Saad, and Benharzallah, Saber
- Subjects
SMART cities ,TRAFFIC accidents ,INFORMATION & communication technologies - Abstract
Smart city development is profoundly impacted by cutting-edge technologies such as information and communications technology (ICT), artificial intelligence (AI), and the Internet of Things (IoT). The intelligent transportation system (ITS) is one of the main requirements of a smart city, and the application of machine learning (ML) in driver assistance systems has improved the safety and comfort of traveling by road. In this work, we propose an intelligent driving system for road accident risk prediction that extracts the maximum required information to alert the driver and so avoid risky situations that may cause traffic accidents. Current Internet of Vehicles (IoV) solutions rely heavily on the cloud, as it has virtually unlimited storage and processing power; however, Internet disconnections and response times constrain its use. Vehicular edge computing (V.Edge.C) can overcome these limitations by leveraging the processing and storage capabilities of simple resources located closer to the end user, such as vehicles or roadside infrastructure. We propose an Intelligent and Collaborative Cloud-V.Edge Driver Assistance System (ICEDAS) framework based on machine learning to predict the risk of traffic accidents. The proposed framework consists of two complementary models, CLOUD_DRL and V.Edge_DL, which together enhance the effectiveness and accuracy of crash prediction and prevention. The results obtained show that our system is efficient and can help reduce road accidents and save thousands of citizens' lives. [ABSTRACT FROM AUTHOR]
- Published
- 2023
50. MulticloudFL: Adaptive Federated Learning for Improving Forecasting Accuracy in Multi-Cloud Environments.
- Author
-
Stefanidis, Vasilis-Angelos, Verginadis, Yiannis, and Mentzas, Gregoris
- Subjects
FEDERATED learning ,DEEP learning ,TECHNOLOGICAL innovations ,FORECASTING ,ELECTRONIC data processing ,CLOUD computing - Abstract
Cloud computing and related emerging technologies have provided conventional methods for processing edge-produced data in a centralized manner. Presently, there is a tendency to offload processing tasks as close to the edge as possible to reduce costs and the network bandwidth used. In this direction, we find efforts that materialize this paradigm by introducing distributed deep learning methods and so-called Federated Learning (FL). Such distributed architectures are valuable assets for efficiently managing resources and eliciting predictions that can be used for the proactive adaptation of distributed applications. In this work, we focus on deep learning local loss functions in multi-cloud environments. We introduce the MulticloudFL system, which enhances forecasting accuracy in dynamic settings by applying two new methods that improve prediction accuracy for application and resource monitoring metrics. The proposed algorithm's performance is evaluated through various experiments that confirm the quality and benefits of the MulticloudFL system: it improves prediction accuracy on time-series data while reducing bandwidth requirements and privacy risks during the training process. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
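MulticloudFL builds on federated learning, whose central step is weight averaging across participating sites. Here is a minimal sketch of that FedAvg-style step, weighting each cloud site by its local sample count; the adaptive methods the abstract describes are beyond this sketch.

```python
# FedAvg-style aggregation: average per-site parameters by data volume.
import numpy as np

def fed_avg(site_weights, site_samples):
    """Weighted average of per-site parameter vectors by local sample count."""
    total = sum(site_samples)
    return sum(w * (n / total) for w, n in zip(site_weights, site_samples))

sites = [np.array([0.2, 0.5]), np.array([0.4, 0.1]), np.array([0.3, 0.3])]
counts = [100, 300, 600]
print("global model:", fed_avg(sites, counts))
```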