320 results for "Neal N."
Search Results
2. TANTO: An Effective Trust-Based Unmanned Aerial Vehicle Computing System for the Internet of Things
- Author
Jing Bai, Zhiwen Zeng, Tian Wang, Shaobo Zhang, Neal N. Xiong, and Anfeng Liu
- Subjects
Computer Networks and Communications, Hardware and Architecture, Signal Processing, Computer Science Applications, Information Systems
- Published
- 2023
3. UWPEE: Using UAV and wavelet packet energy entropy to predict traffic-based attacks under limited communication, computing and caching for 6G wireless systems
- Author
Zichao Xie, Zeyuan Li, Jinsong Gui, Anfeng Liu, Neal N. Xiong, and Shaobo Zhang
- Subjects
Computer Networks and Communications, Hardware and Architecture, Software
- Published
- 2023
4. Joint Architecture Design and Workload Partitioning for DNN Inference on Industrial IoT Clusters
- Author
Weiwei Fang, Wenyuan Xu, Chongchong Yu, and Neal N. Xiong
- Subjects
Computer Networks and Communications
- Abstract
The advent of Deep Neural Networks (DNNs) has empowered numerous computer-vision applications. Due to the high computational intensity of DNN models, as well as the resource-constrained nature of Industrial Internet-of-Things (IIoT) devices, it is generally very challenging to deploy and execute DNNs efficiently in industrial scenarios. Substantial research has focused on model compression or edge-cloud offloading, which trades off accuracy for efficiency or depends on high-quality infrastructure support, respectively. In this article, we present EdgeDI, a framework for executing DNN inference in a partitioned, distributed manner on a cluster of IIoT devices. To improve inference performance, EdgeDI exploits two key optimization knobs: (1) model compression based on deep architecture design, which transforms the target DNN model into a compact one that reduces the resource requirements for IIoT devices without sacrificing accuracy; (2) distributed inference based on adaptive workload partitioning, which achieves high parallelism by adaptively balancing the workload distribution among IIoT devices under heterogeneous resource conditions. We have implemented EdgeDI based on PyTorch and evaluated its performance with the NEU-CLS defect classification task and two typical DNN models (i.e., VGG and ResNet) on a cluster of heterogeneous Raspberry Pi devices. The results indicate that the two proposed optimization approaches significantly outperform the existing solutions in their specific domains. When they are well combined, EdgeDI can provide scalable DNN inference speedups that are very close to or even much higher than the theoretical speedup bounds, while still maintaining the desired accuracy.
- Published
- 2023
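The adaptive workload partitioning described in the EdgeDI abstract above can be pictured as a capability-proportional split. The sketch below is only an illustration under assumed per-device throughputs (the device numbers and the `partition_rows` helper are hypothetical, not the paper's actual algorithm):

```python
def partition_rows(total_rows, throughputs):
    """Split total_rows of a feature map across devices in proportion to
    their measured throughput, so all devices finish at about the same time."""
    total = sum(throughputs)
    shares = [int(total_rows * t / total) for t in throughputs]
    shares[-1] += total_rows - sum(shares)  # hand the rounding remainder to the last device
    return shares

if __name__ == "__main__":
    # e.g., three heterogeneous Raspberry Pis benchmarked at 12, 8 and 5 rows/ms
    print(partition_rows(224, [12.0, 8.0, 5.0]))  # -> [107, 71, 46]
```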
5. Intelligent Delay-Aware Partial Computing Task Offloading for Multiuser Industrial Internet of Things Through Edge Computing
- Author
Shahid Mumtaz, Neal N. Xiong, Jian Yin, Peiyuan Guan, Lan Zhang, and Xiaoheng Deng
- Subjects
Mobile edge computing, Computer Networks and Communications, Computer science, Quality of service, Distributed computing, Multi-user, Computer Science Applications, Task (computing), Hardware and Architecture, Server, Signal Processing, Reinforcement learning, Edge computing, Information Systems, Communication channel
- Abstract
The development of the Industrial Internet of Things (IIoT) and Industry 4.0 has completely changed the traditional manufacturing industry. Intelligent IIoT technology usually involves a large number of intensive computing tasks, and resource-constrained IIoT devices often cannot meet their real-time requirements. As a promising paradigm, a Mobile Edge Computing (MEC) system migrates computation-intensive tasks from resource-constrained IIoT devices to nearby MEC servers, thereby obtaining lower delay and energy consumption. However, considering the varying channel conditions as well as the distinct delay requirements of various computing tasks, it is challenging to coordinate computing task offloading among multiple users. In this paper, we propose an autonomous partial offloading system for delay-sensitive computation tasks in multi-user IIoT MEC systems. Our goal is to provide offloading services with minimum delay for better Quality of Service (QoS). Inspired by recent advances in Reinforcement Learning (RL), we propose two RL-based offloading strategies to automatically optimize the delay performance. Specifically, we first implement a Q-learning algorithm to provide discrete partial offloading decisions. Then, to further optimize system performance with more flexible task offloading, continuous offloading decisions are generated with the Deep Deterministic Policy Gradient (DDPG). The simulation results show that the Q-learning scheme reduces the delay by 23% and the DDPG scheme reduces the delay by 30%.
- Published
- 2023
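The Q-learning stage of the offloading system above can be sketched in a few lines. Everything below is a toy stand-in: the quantized channel states, the five offloading fractions, and the delay model are invented for illustration, and the paper's MDP is richer:

```python
import numpy as np

rng = np.random.default_rng(0)
fractions = [0.0, 0.25, 0.5, 0.75, 1.0]    # share of the task offloaded to the MEC server
n_states, n_actions = 4, len(fractions)    # states: quantized channel quality
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def delay(state, frac):
    local = (1 - frac) * 10.0               # toy local computing time
    uplink = frac * 8.0 / (state + 1)       # better channel -> faster upload
    remote = frac * 2.0                     # toy MEC computing time
    return local + uplink + remote

for _ in range(20000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    r = -delay(s, fractions[a])             # reward = negative task delay
    s2 = rng.integers(n_states)             # channel evolves randomly in this toy
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

print([fractions[int(a)] for a in Q.argmax(axis=1)])  # learned fraction per channel state
```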
6. RSIS: A Secure and Reliable Secret Image Sharing System Based on Extended Hamming Codes in Industrial Internet of Things
- Author
Lizhi Xiong, Xiao Han, Xinwei Zhong, Ching-Nung Yang, and Neal N. Xiong
- Subjects
Computer Networks and Communications, Hardware and Architecture, Signal Processing, Computer Science Applications, Information Systems
- Published
- 2023
7. SCTD: A spatiotemporal correlation truth discovery scheme for security management of data platform
- Author
Wen Mo, Zeyuan Li, Zhiwen Zeng, Neal N. Xiong, Shaobo Zhang, and Anfeng Liu
- Subjects
Computer Networks and Communications, Hardware and Architecture, Software
- Published
- 2023
8. An intelligent hybrid method: Multi-objective optimization for MEC-enabled devices of IoE
- Author
Kuanishbay Sadatdiynov, Laizhong Cui, Lei Zhang, Joshua Zhexue Huang, Neal N. Xiong, and Chengwen Luo
- Subjects
Artificial Intelligence, Computer Networks and Communications, Hardware and Architecture, Software, Theoretical Computer Science
- Published
- 2023
9. BTS: A Blockchain-Based Trust System to Deter Malicious Data Reporting in Intelligent Internet of Things
- Author
Anfeng Liu, Neal N. Xiong, Qiang Li, Kaoru Ota, Wei Liu, Ting Li, and Mianxiong Dong
- Subjects
Scheme (programming language), Blockchain, Data collection, Computer Networks and Communications, Computer science, Computer security, Drone, Computer Science Applications, Data Standard, Hardware and Architecture, Signal Processing, Reinforcement learning, Deterrence theory, Data reporting, Information Systems
- Abstract
Recent developments in collection, computation and communication have expanded the ways of data reporting in the intelligent Internet of Things (IoT). However, the diversity and complexity of data sources also impose new trust challenges in the data collection process, since untrusted reporters tend to report false or even malicious data; this highlights the need for a novel methodology to solve such challenges. Inspired by deterrence theory, this paper proposes a blockchain-based trust system with the assistance of drones to deter malicious data reporting in the intelligent IoT. Specifically, to deter malicious data reporting, the data sensed by fully trusted drones is publicly published on the blockchain to show participants the data standards; this is named the malicious deterrence scheme. The scheme raises a barrier against reporters arbitrarily publishing false data to the blockchain, since false data can be easily detected and cannot be denied. Secondly, to further reduce malicious data reporting, a strict penalty mechanism is proposed to punish reporters who have submitted false data to the blockchain, discouraging malicious reporting in subsequent tasks. Thirdly, since sensing the data standards generates additional costs, a drone flight route scheme based on a simpler Deep Reinforcement Learning method with a Multi-head Attention mechanism (MA-DRL) is designed to reduce the flight distance of drones. Finally, extensive experiments demonstrate the efficiency of our proposed system in terms of reducing malicious data reporting in advance as well as reducing drone flight distance.
- Published
- 2022
10. A Privacy-Preserving Proximity Testing Using Private Set Intersection for Vehicular Ad-Hoc Networks
- Author
Liping Zhang, Wenhao Gao, Shukai Chen, Wei Ren, Kim-Kwang Raymond Choo, and Neal N. Xiong
- Subjects
Control and Systems Engineering, Electrical and Electronic Engineering, Computer Science Applications, Information Systems
- Published
- 2022
11. TDTA: A truth detection based task assignment scheme for mobile crowdsourced Industrial Internet of Things
- Author
Rui Zhang, Zeyuan Li, Neal N. Xiong, Shaobo Zhang, and Anfeng Liu
- Subjects
Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2022
12. MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing
- Author
Shun Liu, Qiang Yang, Shaobo Zhang, Tian Wang, and Neal N. Xiong
- Subjects
Artificial Intelligence, Computer Networks and Communications, Hardware and Architecture, Software, Theoretical Computer Science
- Published
- 2022
13. A Short-Term Traffic Flow Prediction Model Based on an Improved Gate Recurrent Unit Neural Network
- Author
Wanneng Shu, Neal N. Xiong, and Ken Cai
- Subjects
Set (abstract data type), Traffic flow (computer networking), Artificial neural network, Series (mathematics), Flow (mathematics), Computer science, Mechanical Engineering, Automotive Engineering, Convergence (routing), Intelligent transportation system, Algorithm, Computer Science Applications, Term (time)
- Abstract
With the increasing demand for intelligent transportation systems, short-term traffic flow prediction has become an important research direction. The memory unit of a Long Short-Term Memory (LSTM) neural network can store data characteristics over a certain period of time, hence the suitability of this network for time series processing. This paper uses an improved Gate Recurrent Unit (GRU) neural network to study the time series of traffic parameter flows. LSTM-based short-term traffic flow prediction on the flow series is first investigated, and then the GRU model, which can be regarded as a simplified LSTM, is introduced. After extracting the spatial and temporal characteristics of the flow matrix, an improved GRU with a bidirectional (forward and backward) structure, called the Bi-GRU prediction model, is used to complete the short-term traffic flow prediction and study its characteristics. The Rectified Adam (RAdam) optimizer is adopted to remedy the shortcomings of common optimizers. Cosine learning rate decay is also applied so that the model avoids converging to a local optimum and the convergence speed is appropriately controlled. Furthermore, a reliable base learning rate is set together with RAdam's adaptive learning rate. In this manner, the accuracy of network prediction can be further improved. Finally, an experiment on the Bi-GRU model is conducted. The comprehensive Bi-GRU prediction results demonstrate the effectiveness of the proposed method.
- Published
- 2022
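As a concrete companion to the Bi-GRU abstract above, here is a minimal PyTorch sketch wiring together the three named ingredients: a bidirectional GRU, the RAdam optimizer (available as `torch.optim.RAdam` in PyTorch 1.10+), and cosine learning-rate annealing. Layer sizes, window length and the random data are placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

class BiGRUPredictor(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)    # 2*hidden: forward + backward states

    def forward(self, x):                       # x: (batch, time, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])            # predict the next flow value

model = BiGRUPredictor()
opt = torch.optim.RAdam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=50)

x = torch.randn(32, 12, 1)                      # 32 windows of 12 past flow readings
y = torch.randn(32, 1)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    sched.step()                                # cosine decay of the learning rate
```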
14. DRLR: A Deep-Reinforcement-Learning-Based Recruitment Scheme for Massive Data Collections in 6G-Based IoT Networks
- Author
Zhiwen Zeng, Neal N. Xiong, Ting Li, and Wei Liu
- Subjects
Scheme (programming language), Computer Networks and Communications, Computer science, Distributed computing, Computer Science Applications, Hardware and Architecture, Software deployment, Signal Processing, Genetic algorithm, Reinforcement learning, Latency (engineering), 5G, Energy (signal processing), Information Systems, Network model
- Abstract
Recently, the rapid deployment of fifth-generation (5G) networks has brought great opportunities for enabling data-intensive applications and has raised expectations for the development of 6G. A basic requirement for developing 6G networks is to reach data with low latency, low cost and high coverage in the smart IoT. Therefore, this paper proposes a novel machine learning based approach to collect data from multiple sensor devices through cooperation between vehicles and UAVs in IoT. Firstly, a genetic algorithm is utilized to select vehicular collectors to gather massive data from sensor devices, aiming to maximize the coverage ratio and minimize the recruitment cost. Secondly, we design a novel Deep Reinforcement Learning (DRL)-based route policy to plan the collection routes of UAVs with constrained energy, which simplifies the network model, accelerates training and realizes dynamic planning of flight paths. The optimal collection route of a UAV is a series of outputs of the proposed DRL-based route policy. Finally, our extensive experiments demonstrate that the proposed scheme can comprehensively improve the coverage ratio of massive data collections and reduce collection costs in the smart IoT for future 6G networks.
- Published
- 2022
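The genetic-algorithm stage of DRLR, selecting vehicular collectors to trade coverage against cost, can be sketched as a standard bit-string GA. The fitness weights and the random coverage/cost tables below are made up for illustration:

```python
import random

random.seed(1)
N = 20                                                 # candidate vehicles
coverage = [random.uniform(0, 1) for _ in range(N)]    # area fraction each vehicle covers
cost = [random.uniform(1, 5) for _ in range(N)]        # recruitment cost per vehicle

def fitness(bits):
    cov = min(1.0, sum(c for c, b in zip(coverage, bits) if b))
    pay = sum(c for c, b in zip(cost, bits) if b)
    return cov - 0.05 * pay                            # maximize coverage, penalize cost

def mutate(bits, p=0.05):
    return [b ^ (random.random() < p) for b in bits]   # flip bits with probability p

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                   # keep the fittest selections
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, N)                   # one-point crossover
        children.append(mutate(a[:cut] + b[cut:]))
    pop = elite + children

best = max(pop, key=fitness)
print("selected vehicles:", [i for i, b in enumerate(best) if b])
```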
15. Design and Analysis of an Efficient Multiresource Allocation System for Cooperative Computing in Internet of Things
- Author
Xiaoqi Zhang, Hongju Cheng, Neal N. Xiong, and Zhiyong Yu
- Subjects
Computer Networks and Communications, Computer science, Distributed computing, Reliability (computer networking), Cloud computing, Data loss, Adaptability, Computer Science Applications, Hardware and Architecture, Signal Processing, Bandwidth (computing), Reinforcement learning, Enhanced Data Rates for GSM Evolution, Time complexity, Information Systems
- Abstract
By migrating tasks from end devices to the edge or cloud, cooperative computing in the Internet of Things can support time-sensitive, high-dimensional, and complex applications while utilizing existing resources such as network bandwidth, computing resources, and storage capacity. How to design an efficient multi-resource allocation system is a significant research problem. In this paper, we design a Multi-Resource Allocation system for cooperative computing in the Internet of Things based on Deep Reinforcement Learning (DRL-MRA) by redefining latency calculation models for communication, computation, and caching with consideration of practical interference factors such as Gaussian noise and data loss. The proposed system uses actor-critic as the base model for rapidly approximating the optimal policy by updating the parameters of the actor and critic in their respective gradient directions. A balance control parameter is introduced to align the actual learning rate with the desired one. At the same time, we use a double experience pool to limit the exploration direction of the optimal policy, which reduces the time and space complexity of the solution and improves the adaptability and reliability of the scheme. Experiments have demonstrated that DRL-MRA performs well in terms of average service latency under resource-constrained conditions, and the improvement becomes more significant as the network size increases.
- Published
- 2022
16. ADCC: An effective adaptive duty cycle control scheme for real time big data in Green IoT
- Author
Khamael M. Abualnaja, Zhiwen Zeng, Neal N. Xiong, and Jing Bai
- Subjects
Duty cycle, Scheme (programming language), Computer science, Real time big data, Real-time computing, Big data, Control (management), General Engineering, Green IoT, Engineering (General). Civil engineering (General), Design principle, Internet of Things, Lifetime
- Abstract
Currently, a large number of devices are connected to the Internet of Things (IoT). Ubiquitous IoT devices encounter emergencies and generate large amounts of data that may cause congestion and must be processed in a timely manner. To alleviate this, many approaches have been proposed, and the Adaptive Duty Cycle Control (ADCC) scheme is an effective one. The main contributions of this paper are as follows: (a) Unlike previous studies, in which the efficiency of congestion control and real-time processing is mostly evaluated experimentally, this paper theoretically derives a targeted optimization of the duty cycle ratio, which can effectively guide the design of ADCC schemes for real-time big data. (b) A general design principle is proposed to guide scheme design in order to improve the efficiency and performance of networks. (c) A comprehensive congestion avoidance and real-time processing scheme combining dynamic duty cycle adjustment with full utilization of residual energy is proposed. These ideas fit the concept of the Green IoT. Our extensive theoretical and experimental analysis shows that the principle can effectively guide the design of the ADCC scheme, which reduces delay by 20.95%-77.85% and the data drop ratio by 29.63%-100%, without affecting network lifetime, compared with previous schemes.
- Published
- 2022
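To make the duty-cycle idea in the ADCC abstract concrete, here is a toy update rule: raise the awake ratio when the send queue fills, and let nodes with surplus residual energy stay awake longer. The coefficients and the `next_duty_cycle` helper are hypothetical, not the paper's derived optimum:

```python
def next_duty_cycle(d, queue_len, queue_cap, energy, energy_full,
                    d_min=0.05, d_max=0.8):
    congestion = queue_len / queue_cap       # 0..1 pressure from pending data
    surplus = energy / energy_full           # 0..1 residual energy
    d_new = d * (1 + 0.5 * congestion) * (0.5 + 0.5 * surplus)
    return min(d_max, max(d_min, d_new))     # keep the ratio in a sane band

# a node at 60% queue occupancy with 90% energy left ramps up its duty cycle
print(next_duty_cycle(0.2, 60, 100, 0.9, 1.0))   # -> 0.247
```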
17. Learning background-aware and spatial-temporal regularized correlation filters for visual tracking
- Author
Jianming Zhang, Yaoqi He, Wenjun Feng, Jin Wang, and Neal N. Xiong
- Subjects
Artificial Intelligence
- Published
- 2022
18. Revocable and Privacy-Preserving Decentralized Data Sharing Framework for Fog-Assisted Internet of Things
- Author
Neal N. Xiong, Jianfeng Ma, Jiawei Zhang, Yanbo Yang, and Ximeng Liu
- Subjects
Revocation, Computer Networks and Communications, Computer science, Access control, Cloud computing, Encryption, Computer security, Computer Science Applications, Data sharing, Hardware and Architecture, Data integrity, Signal Processing, Overhead (computing), Key escrow, Information Systems
- Abstract
The fog-assisted Internet of Things (IoT) can outsource the massive data of resource-constrained IoT devices to cloud and fog nodes. Meanwhile, it enables convenient, low-delay data sharing services, which rely heavily on strong data confidentiality and fine-grained access control. Many efforts have addressed this urgent requirement by leveraging Ciphertext-Policy Attribute-Based Encryption (CP-ABE). However, when CP-ABE is deployed in fog-assisted IoT systems for secure data sharing, it remains challenging to preserve the attribute privacy of access policies and to trace and then revoke traitors (i.e., malicious users intending to leak decryption keys for illegal profit) efficiently and securely in such a large-scale, decentralized environment with resource-constrained user devices, especially in consideration of misbehaving cloud and fog nodes. Therefore, in this paper, we propose a revocable and privacy-preserving decentralized data sharing framework (RPDDSF) by designing a large-universe, multi-authority CP-ABE scheme with a fully hidden access policy for secure data sharing in IoT systems. RPDDSF achieves user attribute privacy with an unbounded attribute universe and key escrow resistance, making it suitable for large-scale, decentralized environments. Based on this, anyone can efficiently expose traitors and punish them through forward/backward secure revocation. Besides, RPDDSF guarantees data integrity for both data owners and users to resist misbehaving cloud and fog nodes, along with low computation overhead for resource-constrained devices. Finally, RPDDSF is proven secure with detailed security proofs, and its high efficiency and feasibility are demonstrated by extensive performance evaluations.
- Published
- 2022
19. EDMF: Efficient Deep Matrix Factorization With Review Feature Learning for Industrial Recommender System
- Author
Jiazhang Wang, Zhaoli Zhang, Chao Zheng, Xiaoxuan Shen, Hai Liu, Duantengchuan Li, Zhen Zhang, Neal N. Xiong, and Ke Lin
- Subjects
Computer science, Recommender system, Machine learning, Convolutional neural network, Computer Science Applications, Matrix decomposition, Interactivity, Control and Systems Engineering, Prior probability, Feature (machine learning), Maximum a posteriori estimation, Artificial intelligence, Electrical and Electronic Engineering, Feature learning, Information Systems
- Abstract
Recommendation accuracy is fundamental to the quality of a recommender system. In this paper, we propose an efficient deep matrix factorization with review feature learning for the industrial recommender system (EDMF). Two characteristics of users' reviews are exploited. First, a review reveals the interactivity between the user and the item, which can also be considered the former's scoring behavior on the latter. Second, a review is only a partial description of the user's preferences for the item, which manifests as a sparsity property. For the first characteristic, EDMF extracts the interactive features of a single review by convolutional neural networks with a word attention mechanism. For the second, the L0 norm is leveraged to constrain the review, considering that the review information is a sparse feature. Furthermore, the loss function is constructed by maximum a posteriori estimation theory, where the interactivity and the sparsity property are converted into two prior probability functions. Finally, an alternating minimization algorithm is introduced to optimize the loss functions. Experimental results on several datasets demonstrate that the proposed method, which shows good prospects for industrial application, outperforms the state-of-the-art methods in terms of effectiveness and efficiency.
- Published
- 2022
20. An Efficient Computing Offloading Scheme Based on Privacy-Preserving in Mobile Edge Computing Networks
- Author
Shanchen Pang, Huanhuan Sun, Min Wang, Shuyu Wang, Sibo Qiao, and Neal N. Xiong
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering, Information Systems
- Abstract
Computation offloading is an important technology to achieve lower-delay communication and improve the experience of service (EoS) in mobile edge computing (MEC). Due to the openness of wireless links and the limited computing resources of mobile devices, user privacy is easily leaked during offloading, and the completion time of tasks is difficult to guarantee. In this paper, we propose an efficient computing offloading algorithm based on privacy-preserving (ECOAP), which solves the privacy problem of offloading users through encryption. To avoid the algorithm falling into a local optimum, and to reduce user energy consumption and task completion delay under encryption, we use the improved fast nondominated sorting genetic algorithm (INSGA-II) to obtain the optimal offloading strategy set. We then obtain the optimal offloading strategy from this set using min-max normalization and simple additive weighting. Comparisons with other algorithms show that ECOAP preserves user privacy and effectively reduces task completion time and user energy consumption.
- Published
- 2022
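The last step named in the ECOAP abstract, picking one strategy from the Pareto-optimal set via min-max normalization and simple additive weighting (SAW), can be shown directly. The criteria weights and the example (delay, energy) pairs are hypothetical:

```python
def saw_select(strategies, weights=(0.5, 0.5)):
    """strategies: list of (delay, energy) pairs; both criteria are costs."""
    cols = list(zip(*strategies))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    def norm(v, i):                          # min-max normalize; 1 = best (lowest cost)
        return 1.0 if hi[i] == lo[i] else (hi[i] - v) / (hi[i] - lo[i])
    scores = [sum(w * norm(v, i) for i, (v, w) in enumerate(zip(s, weights)))
              for s in strategies]
    return max(range(len(strategies)), key=scores.__getitem__)

pareto = [(12.0, 3.1), (9.5, 4.0), (15.0, 2.2)]   # (delay in ms, energy in J)
print("chosen strategy:", saw_select(pareto))      # -> 0 under equal weights
```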
21. SG-PBFT: A secure and highly efficient distributed blockchain PBFT consensus algorithm for intelligent Internet of vehicles
- Author
Guangquan Xu, Hongpeng Bai, Jun Xing, Tao Luo, Neal N. Xiong, Xiaochun Cheng, Shaoying Liu, and Xi Zheng
- Subjects
Artificial Intelligence, Computer Networks and Communications, Hardware and Architecture, Software, Theoretical Computer Science
- Published
- 2022
22. BPT: A Blockchain-Based Privacy Information Preserving System for Trust Data Collection Over Distributed Mobile-Edge Network
- Author
Neal N. Xiong, Shangsheng Xie, Kaoru Ota, Wei Liu, Qiang Li, Ting Li, and Mianxiong Dong
- Subjects
Data collection, Exploit, Computer Networks and Communications, Computer science, Throughput, Cloud computing, Computer security, Computer Science Applications, Hardware and Architecture, Signal Processing, Differential privacy, Verifiable secret sharing, Wireless sensor network, Information Systems, Block (data storage)
- Abstract
Contemporarily, the fast development of computing, communication and storage technologies has revolutionized the way that various data-based applications reach massive data from underlying sensor networks. However, this process also raises two challenging but critical issues for data collectors: trustworthiness and privacy. Therefore, this paper proposes a novel system, designed over the distributed mobile edge network, that sufficiently exploits the advantages of blockchain and differential privacy to collect trustworthy data and protect the privacy of data collectors. Firstly, to improve the trustworthiness of data collections, a new consensus mechanism is proposed for the blockchain-based data collection structure, which comprehensively incorporates trustworthiness, collection contribution, and throughput to select data collectors for the next block. Secondly, with the assistance of fully trusted devices, a verifiable trustworthiness evaluation strategy is designed to accurately compute the trustworthiness of data collectors. Thirdly, we enforce differential privacy on the data stored in the global blockchain maintained by the cloud server to protect the privacy of data collectors without influencing data availability. Finally, both theoretical analyses and experimental results prove that the proposed system comprehensively improves the performance of data collections in distributed networks without adding any additional cost for the cloud server, compared to other schemes.
- Published
- 2022
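The differential-privacy step in BPT, perturbing what is written to the cloud-held blockchain, is typically realized with the Laplace mechanism; a minimal sketch follows, with the sensitivity and epsilon values chosen only for illustration:

```python
import numpy as np

def laplace_release(true_value, sensitivity, epsilon, rng=None):
    """Release true_value + Lap(sensitivity/epsilon) noise, the standard
    epsilon-differentially-private Laplace mechanism."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# e.g., a count query (sensitivity 1) released under a privacy budget of 0.5
print(laplace_release(1234.0, sensitivity=1.0, epsilon=0.5))
```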
23. Trusted Resource Allocation Based on Smart Contracts for Blockchain-Enabled Internet of Things
- Author
Xiaoqi Zhang, Neal N. Xiong, Yang Yang, Hongju Cheng, Zhiyong Yu, and Qiaohong Hu
- Subjects
Service quality, Computer Networks and Communications, End user, Computer science, Computer security, Computer Science Applications, Shared resource, Resource (project management), Hardware and Architecture, Server, Signal Processing, Resource allocation, Enhanced Data Rates for GSM Evolution, Information Systems, Reputation
- Abstract
By sharing resources between edge servers and end users, edge-end cooperation is an important way to support applications in the Internet of Things that have critical resource requirements for computing, storage or bandwidth. How to price these resources and how to evaluate the service quality of edge servers are two key issues in supporting trusted resource allocation for the blockchain-enabled Internet of Things. In this paper, we provide a trusted resource allocation mechanism based on smart contracts, in which a group-buying pricing mechanism and a reputation evaluation mechanism are proposed to effectively address the problems of resource pricing and of evaluating the service quality of edge servers. In the trusted resource allocation mechanism, end users can choose a purchase mode from four pricing schemes according to their actual demands on delay and price, and smart contracts match end users with high-reputation edge servers automatically. Moreover, end users can submit reputation evaluations to the smart contracts based on the behaviors of edge servers. Simulation results show that the group-buying pricing mechanism can provide differentiated prices and optimize the utility of end users accordingly, while the reputation evaluation mechanism is more sensitive to edge servers with irregular behaviors and quickly reduces their reputations, so that the success rate of transactions is improved.
- Published
- 2022
24. A Survey of Weakly-supervised Semantic Segmentation
- Author
Kaiyin Zhu, Neal N. Xiong, and Mingming Lu
- Published
- 2023
25. FedGraph-KD: An Effective Federated Graph Learning Scheme Based on Knowledge Distillation
- Author
Shiyu Wang, Jiahao Xie, Mingming Lu, and Neal N. Xiong
- Published
- 2023
26. HCNCT: A Cross-chain Interaction Scheme for the Blockchain-based Metaverse
- Author
Yongjun Ren, Zhiying Lv, Neal N. Xiong, and Jin Wang
- Subjects
Computer Networks and Communications, Hardware and Architecture
- Abstract
As a new type of digital living space that blends the virtual and the real, the Metaverse combines many emerging technologies: it provides an immersive experience based on VR technology, and it stores and protects users' digital content and digital assets through blockchain technology. However, different virtual environments are often highly heterogeneous in their underlying architecture and software implementation, which creates many scalability and interoperability challenges for blockchains serving the Metaverse. Cross-chain technology is essential to realizing the scalability and interoperability of blockchains, yet current cross-chain technologies all have their own merits and demerits, and no cross-chain solution can be fully applied to every scenario. To this end, for the blockchain-based Metaverse, this paper proposes a cross-chain transaction scheme based on an improved hash timelock, HCNCT. By incorporating a notary mechanism, the scheme uses a group of notaries to supervise and participate in cross-chain transactions, effectively solving a problem of the traditional hash timelock method: malicious users creating a large number of timed-out transactions to block the transaction channel. Besides, this paper uses verifiable secret sharing within the notary group, which can effectively prevent the centralization problem of the notary mechanism. Moreover, this paper discusses the scheme's key processing, cross-chain transaction and transaction verification procedures, and designs a user credibility evaluation mechanism that can effectively reduce malicious defaults by users. Compared with existing solutions, ours has the advantage of effectively addressing timed-out transaction attacks and centralization issues while guaranteeing security. The experiments also verify the effectiveness of the proposed scheme.
- Published
- 2023
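The hash-timelock condition that HCNCT improves on reduces to two checks: a correct hash preimage redeems the transfer before a deadline, and after the deadline the sender is refunded. A bare-bones sketch follows (the notary group and credibility mechanism are not modeled here):

```python
import hashlib
import time

def make_lock(secret: bytes) -> str:
    return hashlib.sha256(secret).hexdigest()      # hashlock published on both chains

def try_redeem(lock_hash: str, preimage: bytes, deadline: float) -> str:
    if time.time() > deadline:
        return "refund"                            # timelock expired: sender refunded
    if hashlib.sha256(preimage).hexdigest() == lock_hash:
        return "redeem"                            # correct preimage: receiver paid
    return "reject"

secret = b"cross-chain-secret"
lock = make_lock(secret)
print(try_redeem(lock, secret, deadline=time.time() + 3600))   # -> "redeem"
```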
27. Self-Supervised Learning of Depth and Ego-motion for 3D Perception in Human Computer Interaction
- Author
Shanbao Qiao, Neal N. Xiong, Yongbin Gao, Zhijun Fang, Wenjun Yu, Juan Zhang, and Xiaoyan Jiang
- Subjects
Computer Networks and Communications, Hardware and Architecture
- Abstract
3D perception of depth and ego-motion is of vital importance in intelligent agent and Human Computer Interaction (HCI) tasks, such as robotics and autonomous driving. Several kinds of sensors can directly obtain 3D depth information; however, the commonly used Lidar sensor is expensive, and the effective range of RGB-D cameras is limited. In the field of computer vision, researchers have done much work on 3D perception: while traditional geometric algorithms require many handcrafted features for depth estimation, Deep Learning methods have achieved great success in this field. In this work, we propose a novel self-supervised method based on a Vision Transformer (ViT) with a Convolutional Neural Network (CNN) architecture, referred to as ViT-Depth. The image reconstruction losses computed from the estimated depth and motion between adjacent frames are treated as the supervision signal to establish a self-supervised learning pipeline. This is an effective solution for tasks that need accurate, low-cost 3D perception, such as autonomous driving, robotic navigation and 3D reconstruction. Our method leverages the ability of both the CNN and the Transformer to extract deep features and capture global contextual information. In addition, we propose a cross-frame loss that constrains photometric error and scale consistency across multiple frames, which makes the training process more stable and improves performance. Extensive experimental results on an autonomous driving dataset demonstrate that the proposed approach is competitive with state-of-the-art depth and motion estimation methods.
- Published
- 2023
28. BeatClass: A Sustainable ECG Classification System in IoT-Based eHealth
- Author
Yilin Wang, Neal N. Xiong, Zhiguo Qu, and Le Sun
- Subjects
Heartbeat, Computer Networks and Communications, Computer science, Deep learning, Ectopic beat, Machine learning, Computer Science Applications, Hardware and Architecture, Signal Processing, Cardiovascular system, eHealth, Artificial intelligence, Ventricular ectopic, Multiple classification, Internet of Things, Mobile device, Information Systems
- Abstract
With the rapid development of the Internet of Things (IoT), it has become convenient to use mobile devices to remotely monitor the physiological signals of patients with chronic conditions such as cardiovascular diseases (CVDs). High classification accuracy on inter-patient ECGs is extremely important for diagnosing arrhythmia. The supraventricular ectopic beat (S) is especially difficult to classify: it is often misclassified as normal (N) or as a ventricular ectopic beat (V). Class imbalance is another common and important problem in electronic health (eHealth), as abnormal samples (i.e., samples of specific diseases) are usually far fewer than normal samples. To solve these problems, we propose a sustainable deep learning-based heartbeat classification system, called BeatClass. It contains three main components: two stacked bidirectional Long Short-Term Memory networks (Bi-LSTMs), called Rist and Morst, and a Generative Adversarial Network (GAN), called MorphGAN. Rist first classifies heartbeats into five common arrhythmia classes. The heartbeats classified as S and V by Rist are further classified by Morst to improve accuracy. MorphGAN is used to augment the morphological and contextual knowledge of heartbeats in infrequent classes. In the experiment, BeatClass is compared with several state-of-the-art works for inter-patient arrhythmia classification. The F1-scores for classifying N, S and V heartbeats are 0.6%, 16.0% and 1.8% higher than the best baseline method. The results demonstrate that applying multiple classification models to refine results step by step can significantly improve classification performance. We also evaluate the classification sustainability of BeatClass: based on different physiological signal datasets, a trained BeatClass can be updated to classify heartbeats with different sampling rates. Finally, an engineering application indicates that BeatClass can promote the sustainable development of IoT-based eHealth.
- Published
- 2022
29. TMA-DPSO: Towards Efficient Multi-Task Allocation With Time Constraints for Next Generation Multiple Access
- Author
Mingfeng Huang, Victor C. M. Leung, Anfeng Liu, and Neal N. Xiong
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2022
30. Multi-Scale Dynamic Convolutional Network for Knowledge Graph Embedding
- Author
Hai Liu, Neal N. Xiong, Zhifei Li, and Zhaoli Zhang
- Subjects
Theoretical computer science, Relation (database), Computer science, Feature extraction, Knowledge engineering, Object (computer science), Computer Science Applications, Computational Theory and Mathematics, Feature (machine learning), Embedding, Representation (mathematics), Information Systems, Semantic matching
- Abstract
Knowledge graphs are large graph-structured knowledge bases with incomplete or partial information. Numerous studies have focused on knowledge graph embedding to identify embedded representations of entities and relations, thereby predicting missing relations between entities. Previous embedding models primarily treat the (subject entity, relation, object entity) triplet as a translational distance or a semantic match in vector space. However, these models learn only a few expressive features and struggle to handle complex relations, i.e., 1-to-N, N-to-1, and N-to-N, in knowledge graphs. To overcome these issues, we introduce a multi-scale dynamic convolutional network (M-DCN) model for knowledge graph embedding. The model features top-notch performance and an ability to generate richer and more expressive feature embeddings than its counterparts. The subject entity and relation embeddings in M-DCN are composed in an alternating pattern in the input layer, which helps extract additional feature interactions and increases expressiveness. Multi-scale filters are generated in the convolution layer to learn different characteristics of the input embeddings. Specifically, the weights of these filters are dynamically related to each relation in order to model complex relations. The performance of M-DCN on five benchmark datasets is tested via experiments. Results show that the model can effectively handle complex relations and achieve state-of-the-art link prediction results on most evaluation metrics.
- Published
- 2022
31. An Intelligent Game-Based Offloading Scheme for Maximizing Benefits of IoT-Edge-Cloud Ecosystems
- Author
Neal N. Xiong, Mingyue Yu, Anfeng Liu, and Tian Wang
- Subjects
Computer Networks and Communications, Computer science, Distributed computing, Model of computation, Cloud computing, Networking hardware, Computer Science Applications, Fictitious play, Task (computing), Hardware and Architecture, Nash equilibrium, Signal Processing, Key (cryptography), Computation offloading, Information Systems
- Abstract
Nowadays, with the explosive growth of sensor-based devices connected to the Internet of Things (IoT), massive amounts of data with potentially tremendous value are generated every day. We argue that the value of those data can be extracted through a monetized data platform in IoT-Edge-Cloud ecosystems for many parts of the business. In such a platform, the data can be computed and transformed into services in IoT-Edge-Cloud ecosystems, providing Data-As-A-Service (DAAS) for applications. The key to implementing such a platform is to evenly distribute DAAS computing tasks across network devices to maximize the benefits of the system. In this paper, we therefore study the Task Type-based Computation Offloading (TTCO) algorithm to implement such a platform. We use an "IoT-Edge-Cloud" three-layer multi-hop model, which is closer to the complex scenarios of a monetized data platform. We divide tasks into data-intensive and CPU-intensive ones, and combine the cost model of computation offloading with the task type so that data-intensive tasks prefer local computing and CPU-intensive tasks prefer offloaded computing, thereby reducing the platform's transmission volume and improving the overall quality of computation offloading. We then use a hierarchical game model combined with fictitious play to solve for the Nash Equilibrium (NE) of the system and obtain the mixed strategies of the devices. Finally, we propose a TTL-constrained flood strategy transmission mechanism to make the algorithm applicable in practice. The experimental results prove that our algorithm yields large performance gains in various scenarios and can serve as a monetized data platform for IoT-Edge-Cloud ecosystems.
- Published
- 2022
32. A Deep Reinforcement Learning-Based Resource Management Game in Vehicular Edge Computing
- Author
Yueyi Luo, Mianxiong Dong, Anfeng Liu, Neal N. Xiong, Shaobo Zhang, and Xiaoyu Zhu
- Subjects
Computer science ,business.industry ,Mechanical Engineering ,Computer Science Applications ,Resource (project management) ,Server ,Automotive Engineering ,Stackelberg competition ,Reinforcement learning ,Computation offloading ,Resource management ,Latency (engineering) ,business ,Intelligent transportation system ,Computer network - Abstract
Vehicular Edge Computing (VEC) is a promising paradigm that lets vehicles offload computation tasks to a nearby VEC server, with the aim of supporting low-latency vehicular application scenarios. Incentivizing VEC servers to participate in computation offloading activities and to make full use of computation resources is of great importance to the success of intelligent transportation services. In this paper, we formulate the competitive interactions between the VEC servers and vehicles as a two-stage Stackelberg game with the VEC servers as the leaders and the vehicles as the followers. After obtaining the full information of the vehicles, the VEC server calculates the unit price of computation resources. Given the unit prices announced by the VEC server, the vehicles determine the amount of computation resources to purchase from it. For the scenario where vehicles do not want to share their computation demands, a deep reinforcement learning based resource management scheme is proposed to maximize the profits of the vehicles and the VEC server. Extensive experimental results demonstrate the effectiveness of our proposed resource management scheme based on the Stackelberg game and deep reinforcement learning.
- Published
- 2022
33. Ensuring Cryptography Chips Security by Preventing Scan-Based Side-Channel Attacks With Improved DFT Architecture
- Author
Xiangqi Wang, Jin Wang, Peng Liu, Neal N. Xiong, Weizheng Wang, and Shuo Cai
- Subjects
Password, Computer science, Cryptography, Encryption, Chip, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Running key cipher, Obfuscation, Side channel attack, Electrical and Electronic Engineering, Software, Computer hardware, Shift register
- Abstract
Cryptographic chips are often used in applications such as smart grids and the Internet of Things (IoT) to ensure security, and they must be strictly tested to guarantee the correctness of encryption and decryption. Scan-based design-for-testability (DFT) provides high test quality; however, it can also be misused by hackers to steal the cipher key of a cryptographic chip. In this article, we present a new scan design methodology that resists scan-based side-channel attacks by dynamically obfuscating the scan input and output data. The scan test is managed by a test password consisting of a load password and a scan password. When the chip enters test mode, the test password must be applied via external input ports. Once the correct load password is delivered, the scan password can be loaded into a special shift register. If the scan password is also correct, chip testing proceeds normally. If the load password or the scan password is wrong, the data in the scan chains cannot be propagated correctly; specifically, elusory bits are sneaked into the scan chains dynamically. The advantage of the proposed method is that it has no negative impact on design performance or test flow while strongly protecting cryptographic chips. The area penalty is also acceptably low compared with other schemes.
- Published
- 2022
34. An end-to-end deep learning model for robust smooth filtering identification
- Author
Yujin Zhang, Luo Yu, Neal N. Xiong, Zhijun Fang, Haiyue Tian, and Lijun Zhang
- Subjects
Computer Networks and Communications, Computer science, Deep learning, Noise reduction, Pattern recognition, Convolutional neural network, Discriminative model, Hardware and Architecture, Frequency domain, Median filter, Preprocessor, Artificial intelligence, Software, Block (data storage)
- Abstract
Smooth filtering, a common blurring and denoising operator, is often applied after malicious manipulations to diminish the traces they leave. Most existing forensic methods focus on one specific filtering artifact, such as median filtering, which is insufficient to reveal the manipulation history of digital images. Unlike traditional convolutional neural network (CNN)-based methods, which normally introduce handcrafted features, including frequency domain features and median filtering residuals, into the preprocessing layer, this paper proposes an end-to-end deep learning model for robust smooth filtering identification. First, a distinctive network structure named the Squeeze-and-Excitation (SE) block is introduced to select discriminative features adaptively and suppress features irrelevant to the smooth filtering effect. Then, as the network depth increases, multiple inception-residual blocks are stacked to extract discriminative features and reduce information loss. Finally, different smooth filtering operations can be classified from the learned hierarchical features. Experimental results on a composite database show that the proposed model outperforms the state-of-the-art methods, especially for small image sizes and JPEG compression scenarios.
- Published
- 2022
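The Squeeze-and-Excitation block that the abstract above builds on is a standard, well-documented unit (Hu et al.); a compact PyTorch version is shown below, with the common reduction ratio of 16 as a default:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: per-channel weights
        return x * w                                # recalibrate the channels

print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])
```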
35. Anomaly Detection Based on Convolutional Recurrent Autoencoder for IoT Time Series
- Author
Neal N. Xiong, Sun Zhang, Jin Wang, and Chunyong Yin
- Subjects
Computer science, Deep learning, Feature extraction, Pattern recognition, Autoencoder, Convolutional neural network, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Sliding window protocol, Preprocessor, Anomaly detection, Artificial intelligence, Data pre-processing, Electrical and Electronic Engineering, Software
- Abstract
The Internet of Things (IoT) realizes the interconnection of heterogeneous devices through wireless and mobile communication technologies. As the basis of the IoT, data from target regions are collected by widely distributed sensing devices and transmitted to a processing center for aggregation and analysis. The quality of IoT services usually depends on the accuracy and integrity of the data. However, due to adverse environments or device defects, the collected data may be anomalous, so effective anomaly detection is crucial for guaranteeing service quality. Deep learning, which realizes automatic feature extraction from raw data, has drawn much attention in recent years. In this article, an integrated model combining a convolutional neural network (CNN) with a recurrent autoencoder is proposed for anomaly detection. A simple combination of a CNN and an autoencoder cannot improve classification performance, especially for time series; therefore, we utilize a two-stage sliding window in data preprocessing to learn better representations. Based on the characteristics of the Yahoo Webscope S5 dataset, raw time series with anomalous points are extended to fixed-length sequences with normal or anomaly labels via the first-stage sliding window; then, each sequence is transformed into continuous time-dependent subsequences by another, smaller sliding window. The two-stage sliding window preprocessing can be considered low-level temporal feature extraction, and we empirically show that it aids the high-level feature extraction in the integrated model. After preprocessing, spatial and temporal features are extracted by the CNN and the recurrent autoencoder for classification in fully connected networks. Empirical results show that the proposed model performs better on multiple classification metrics and achieves a preferable effect on anomaly detection.
- Published
- 2022
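The two-stage sliding-window preprocessing described above is easy to reproduce with NumPy: a first window cuts the raw series into fixed-length labeled sequences, and a second, smaller window cuts each sequence into time-dependent subsequences. The window widths and strides below are arbitrary choices, not the paper's:

```python
import numpy as np

def sliding(x, width, step):
    idx = np.arange(0, len(x) - width + 1, step)
    return np.stack([x[i:i + width] for i in idx])

series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)

stage1 = sliding(series, width=100, step=50)                 # fixed-length sequences
stage2 = np.stack([sliding(s, width=20, step=10) for s in stage1])
print(stage1.shape, stage2.shape)                            # (9, 100) (9, 9, 20)
```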
36. Design and Analysis of a Prediction System About Influenza-Like Illness From the Latent Temporal and Spatial Information
- Author
Haiyan Wang, Xiaoxiang Guo, Jingli Ren, and Neal N. Xiong
- Subjects
Multivariate statistics, Computer science, Multivariable calculus, Chaotic, Missing data, Regression, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Kernel (statistics), Statistics, Gaussian function, Electrical and Electronic Engineering, Spatial analysis, Software
- Abstract
Influenza poses a significant risk to public health, as evidenced by the 2009 H1N1 pandemic, which caused up to 203,000 deaths worldwide. Predicting the spatiotemporal course of a disease in its incubation period is crucial because it provides guidance for preparing a response and avoiding the presumably adverse impact of a pandemic. This article designs and analyzes a prediction system for influenza-like illness (ILI) based on latent temporal and spatial information. In this system, 1) a Gaussian function model and multivariate polynomial regression are employed to investigate the temporal and spatial distribution of ILI data; 2) a phase space reconstructed by delay-coordinate embedding is used to explore the dynamical evolution behavior of the 1-D ILI series; and 3) a dynamical radial basis function neural network (DRBFNN) method, the kernel of the system, is proposed to predict ILI values based on the correlations between the observation space and the reconstructed phase space. The performance analysis of our system shows that the regression equations coupled with spatial distribution information can be used to supplement missing data, and that the proposed DRBFNN method can predict the trend of ILI for the following year. Furthermore, the prediction system applies a model-free control scheme, i.e., there are no restriction equations between the multivariable inputs and outputs. The system is expected to be applicable to predicting output signals, even chaotic ones, in meteorology, industry, medicine, economics, and other fields. An example of predicting the Standard & Poor's 500 index illustrates the application of the proposed system: the trend of the open prices of the following eight trading days is well predicted.
- Published
- 2022
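The delay-coordinate embedding used in step 2) of the ILI system is a standard Takens-style reconstruction; a minimal version is below (the embedding dimension and delay are illustrative choices):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Return phase-space vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

ili = np.sin(0.3 * np.arange(200))           # stand-in for a weekly ILI series
phase = delay_embed(ili, dim=3, tau=4)       # (192, 3) reconstructed states
print(phase.shape)
```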
37. Coverless Video Steganography Based on Frame Sequence Perceptual Distance Mapping
- Author
Runze Li, Jiaohua Qin, Yun Tan, and Neal N. Xiong
- Subjects
Biomaterials, Mechanics of Materials, Modeling and Simulation, Electrical and Electronic Engineering, Computer Science Applications
- Published
- 2022
38. Multi-perspective social recommendation method with graph representation learning
- Author
Hai Liu, Duantengchuan Li, Ke Lin, Jiazhang Wang, Zhaoli Zhang, Neal N. Xiong, Chao Zheng, and Xiaoxuan Shen
- Subjects
Information retrieval, Computer science, Cognitive Neuroscience, Rationality, Recommender system, Python (programming language), Social relation, Computer Science Applications, Artificial Intelligence, Graph (abstract data type), Construct (philosophy), Feature learning, Social influence
- Abstract
Social recommender systems (SRS) study how social relations influence users' choices and how to use those relations to learn better user embeddings. However, the diversity of social relationships, which is instructive for the propagation of social influence, has rarely been explored. In this paper, we propose a graph convolutional network-based representation learning method, namely multi-perspective social recommendation (MPSR), to construct hierarchical user preferences and to assign friends' influence different levels of trust from varying perspectives. We further utilize the attributes of items to partition and excavate users' explicit preferences, and employ complementary perspective modeling to learn users' implicit preferences. To measure the trust degree of friends from different perspectives, statistics of users' historical behavior are utilized to construct multi-perspective social networks. Experimental results on two public datasets, Yelp and Ciao, demonstrate that MPSR significantly outperforms state-of-the-art methods. Further detailed analysis verifies the importance of mining users' explicit characteristics and the necessity of diverse social relationships, showing the rationality and effectiveness of the proposed model. The source Python code is available upon request.
- Published
- 2022
39. Safety Analysis of Riding at Intersection Entrance Using Video Recognition Technology
- Author
Xingjian Xue, Linjuan Ge, Longxin Zeng, Weiran Li, Rui Song, and Neal N. Xiong
- Subjects
Biomaterials, Mechanics of Materials, Modeling and Simulation, Electrical and Electronic Engineering, Computer Science Applications
- Published
- 2022
40. Reversible Data Hiding in Encrypted Images Based on Adaptive Prediction and Labeling
- Author
Jiaohua Qin, Zhibin He, Xuyu Xiang, and Neal N. Xiong
- Subjects
Biomaterials, Mechanics of Materials, Modeling and Simulation, Electrical and Electronic Engineering, Computer Science Applications
- Published
- 2022
41. Criss-Cross Attentional Siamese Networks for Object Tracking
- Author
Zhangdong Wang, Jiaohua Qin, Xuyu Xiang, Yun Tan, and Neal N. Xiong
- Subjects
Biomaterials, Mechanics of Materials, Modeling and Simulation, Electrical and Electronic Engineering, Computer Science Applications
- Published
- 2022
42. SPPS: A Search Pattern Privacy System for Approximate Shortest Distance Query of Encrypted Graphs in IIoT
- Author
Jia Yu, Hanlin Zhang, Jianxi Fan, Xinrui Ge, Jianli Bai, and Neal N. Xiong
- Subjects
Security analysis, Service (systems architecture), Computer science, Cloud computing, Space (commercial competition), Encryption, Data structure, Computer Science Applications, Outsourcing, Human-Computer Interaction, Control and Systems Engineering, Leverage (statistics), Electrical and Electronic Engineering, Software, Computer network
- Abstract
In recent years, the Industrial Internet of Things (IIoT) has gradually attracted the attention of industry owing to its accurate time synchronization, communication accuracy, and high adaptability. As an important data structure, graphs are widely used in IIoT applications, where entities and their relationships can be expressed in graph form. With the widespread adoption of IIoT and cloud computing, an increasing number of individuals and organizations are outsourcing their IIoT graph data to cloud servers to enjoy unlimited storage space and fast computing services. To protect the privacy of graph data, graphs are usually encrypted before being outsourced. In this article, we propose a search pattern privacy system for approximate shortest distance queries over encrypted graphs in IIoT. To realize search pattern privacy, we adopt two non-colluding cloud servers for different tasks: the first server stores the encrypted data and performs query operations, while the second rerandomizes the contents and shuffles the locations of the queried records. Before queries, we generate trapdoors using different random numbers. After queries, we ask the second server to rerandomize the contents of the records that the first server touched. In addition, we shuffle the physical locations of the original records by inserting some fake records. In this way, all contents and physical locations of the touched records change, so the first server cannot distinguish whether two queries are the same. To enhance efficiency on the user side, we further improve the system by moving some heavy workloads from the user to the cloud. The security analysis and performance evaluation show that our work is secure and efficient.
- Published
- 2022
43. Aortic Dissection Diagnosis Based on Sequence Information and Deep Learning
- Author
-
Haikuo Peng, Yun Tan, Hao Tang, Ling Tan, Xuyu Xiang, Yongjun Wang, and Neal N. Xiong
- Subjects
Biomaterials ,Mechanics of Materials ,Modeling and Simulation ,Electrical and Electronic Engineering ,Computer Science Applications - Published
- 2022
44. Confidentially Computing DNA Matching Against Malicious Adversaries
- Author
-
Xiaofen Tu, Xin Liu, Xiangyu Hu, Baoshan Li, and Neal N. Xiong
- Published
- 2023
45. LowFreqAttack: A Frequency Attack Method in Time Series Prediction
- Author
-
Neal N. Xiong, Wenyong He, and Mingming Lu
- Published
- 2023
46. A Direction Vector-Guided Multi-Objective Evolutionary Algorithm for Variable Linkage Problems
- Author
-
Qinghua Gu, Shaopeng Zhang, Qian Wang, and Neal N. Xiong
- Subjects
History ,Polymers and Plastics ,Business and International Management ,Industrial and Manufacturing Engineering - Published
- 2023
47. An Energy-Efficient Authentication Scheme Based on Chebyshev Chaotic Map for Smart Grid Environments
- Author
-
Liping Zhang, Kim-Kwang Raymond Choo, Wei Ren, Yue Zhu, Neal N. Xiong, and Yinghan Wang
- Subjects
Modular exponentiation ,Approximation theory ,Authentication ,Chebyshev polynomials ,Computer Networks and Communications ,Computer science ,Distributed computing ,Chebyshev filter ,Computer Science Applications ,Smart grid ,Hardware and Architecture ,Signal Processing ,Key (cryptography) ,Computer Science::Cryptography and Security ,Information Systems ,Efficient energy use - Abstract
Electric vehicle charging is becoming more commonplace, but a number of challenges remain. For example, the wireless communications between vehicle users and aggregators can be subject to exploitation, and hence several authentication schemes have been designed to support varying levels of privacy protection. However, existing authentication schemes have a number of limitations, such as lack of anonymity and not accounting for charging peaks in their design (and consequently failing to meet the low energy consumption requirement of smart grid environments). More recently, there have been attempts to utilize the Chebyshev chaotic map in the design of authentication mechanisms, with the aim of reducing computational costs while achieving high security. However, the security requirements of Chebyshev polynomials pose new challenges to the construction of Chebyshev chaotic map-based authentication schemes. To address these limitations, we propose an efficient Chebyshev polynomial algorithm that adopts a square matrix-based binary exponentiation algorithm to provide secure and efficient Chebyshev polynomial computation. We further construct an energy-efficient authentication and key negotiation scheme for smart grid environments based on the proposed algorithm. Compared with five other competing schemes, our proposed authentication scheme achieves reduced computational and communication costs. In addition, the ProVerif tool is used to analyze the security of the proposed scheme. The results show that the proposed scheme outperforms the five other schemes in terms of computation and communication overheads while preserving privacy.
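The square matrix-based binary exponentiation idea can be reconstructed from the Chebyshev recurrence T_k(x) = 2x*T_{k-1}(x) - T_{k-2}(x): raising the 2x2 companion matrix of this recurrence to the (n-1)-th power by square-and-multiply yields T_n(x) mod p in O(log n) multiplications. The sketch below is our reconstruction under that assumption, not the authors' implementation; the prime P and inputs are arbitrary example parameters:

def chebyshev(n, x, p):
    # Compute T_n(x) mod p via square-and-multiply on the companion matrix
    # [[2x, -1], [1, 0]] of the recurrence T_k = 2x*T_{k-1} - T_{k-2}.
    if n == 0:
        return 1 % p
    def mul(A, B):
        return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p,
                 (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p],
                [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p,
                 (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p]]
    M = [[(2 * x) % p, p - 1], [1, 0]]
    R = [[1, 0], [0, 1]]  # identity
    e = n - 1
    while e:
        if e & 1:
            R = mul(R, M)
        M = mul(M, M)
        e >>= 1
    # [T_n, T_{n-1}] = R @ [T_1, T_0] = R @ [x, 1]
    return (R[0][0] * x + R[0][1]) % p

# Semigroup property used in key negotiation: T_r(T_s(x)) = T_{rs}(x) mod p.
P, x0 = 1000003, 12345  # assumed example parameters
assert chebyshev(7, chebyshev(11, x0, P), P) == chebyshev(77, x0, P)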
- Published
- 2021
48. Confidentially judging the relationship between an integer and an interval against malicious adversaries and its applications
- Author
-
Ruiling Zhang, Gang Xu, Xin Liu, Neal N. Xiong, and Xiu-Bo Chen
- Subjects
Scheme (programming language) ,Computer Networks and Communications ,business.industry ,Computer science ,Big data ,Interval (mathematics) ,Encryption ,Computer security ,computer.software_genre ,Paillier cryptosystem ,Set (abstract data type) ,business ,computer ,Protocol (object-oriented programming) ,Integer (computer science) ,computer.programming_language - Abstract
With the growing prominence of privacy protection issues in big data, artificial intelligence, and blockchain, secure multi-party computation has become a research hotspot. Confidentially computing set problems is an important branch of secure multi-party computation, and determining the relationship between an integer and an interval is the most basic of these problems. At present, there is considerable research on secure multi-party computation against malicious adversaries. In this paper, we analyze possible malicious attacks on the semi-honest-model protocol for determining the relationship between an integer and an interval. Using the Goldwasser–Micali encryption scheme, the Paillier encryption algorithm, zero-knowledge proofs, and the cut-and-choose method, we design a protocol that confidentially judges the relationship between an integer and an interval under the malicious model. Finally, the ideal/real simulation paradigm is used to prove that the protocol is secure under the malicious model. Compared with existing schemes, the protocol not only maintains good efficiency but also resists attacks by malicious adversaries and is fairer. The scheme has broad application prospects.
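Of the building blocks named above, the Paillier cryptosystem supplies the additive homomorphism on which such integer-versus-interval comparisons typically rest. The toy sketch below, with insecurely small primes, is our illustration of that primitive only, not the paper's full protocol:

import secrets
from math import gcd, lcm

# Toy Paillier with tiny primes -- illustration only; real use needs ~1024-bit primes.
p_, q_ = 1789, 1861
n = p_ * q_
n2 = n * n
lam = lcm(p_ - 1, q_ - 1)
mu = pow(lam, -1, n)  # valid because we take g = n + 1

def enc(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b,
# which lets a party compare blinded differences without seeing plaintexts.
a, b = 123, 456
assert dec((enc(a) * enc(b)) % n2) == a + b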
- Published
- 2021
49. A UAV-Assisted Ubiquitous Trust Communication System in 5G and Beyond Networks
- Author
-
Mingfeng Huang, Neal N. Xiong, Jie Wu, and Anfeng Liu
- Subjects
Data collection ,Computer Networks and Communications ,Computer science ,Reliability (computer networking) ,Mobile broadband ,Hash function ,Communications system ,Computer security ,computer.software_genre ,Data modeling ,Data integrity ,Data verification ,Electrical and Electronic Engineering ,computer - Abstract
UAV-assisted wireless communications facilitate Internet of Things (IoT) applications, which employ billions of devices to sense and collect data on demand. However, numerous malicious Mobile Data Collectors (MDCs) mix into the network, stealing or tampering with data, which greatly damages IoT applications. It is therefore urgent to build a ubiquitous trust communication system. In this paper, a UAV-assisted Ubiquitous Trust Evaluation (UUTE) framework is proposed, which combines UAV-assisted global trust evaluation with historical-interaction-based local trust evaluation. We first propose a global trust evaluation model for data collection platforms: by dispatching UAVs to collect baseline data against which the data submitted by MDCs is validated, it can accurately eliminate malicious MDCs and create a clean data collection environment. We then propose a local trust evaluation model to help select credible MDCs for collaborative data collection: UAVs distribute data verification hash codes to MDCs, allowing each MDC to verify whether the data exchanged with interacting MDCs is reliable. Extensive experiments conducted on a real-life dataset demonstrate that our UUTE system outperforms existing trust evaluation systems in terms of accuracy and cost.
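The hash-code exchange in the local trust model can be sketched as follows; this is a minimal illustration assuming an HMAC over a UAV-issued key, where the key, block contents, and the choice of SHA-256 are our assumptions rather than details from the paper:

import hashlib, hmac

def make_verification_code(data: bytes, key: bytes) -> str:
    # UAV side: derive a keyed hash code for a data block before distribution.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_exchanged_data(data: bytes, code: str, key: bytes) -> bool:
    # MDC side: check that data received from a peer matches the UAV-issued code.
    return hmac.compare_digest(make_verification_code(data, key), code)

# Example flow: the UAV publishes the code, and an MDC later validates peer data with it.
key = b"uav-session-key"               # hypothetical shared key
block = b"sensor reading batch 017"    # hypothetical data block
code = make_verification_code(block, key)
assert verify_exchanged_data(block, code, key)
assert not verify_exchanged_data(b"tampered batch", code, key)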
- Published
- 2021
50. Research on strong agile response task scheduling optimization enhancement with optimal resource usage in green cloud computing
- Author
-
Neal N. Xiong, Wanneng Shu, and Ken Cai
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Node (networking) ,Distributed computing ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Energy consumption ,Virtualization ,computer.software_genre ,Task (project management) ,Scheduling (computing) ,Network congestion ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business ,Throughput (business) ,computer ,Software - Abstract
Virtualization technology provides a new way to improve resource utilization and cloud service throughput. However, the randomness of task arrivals, the tight coupling between resource load imbalance and node heterogeneity, high computing demands, and other factors have hindered the energy consumption optimization and cost reduction objectives of existing technology. Consequently, task scheduling failures cannot easily be eliminated, and cloud computing performance degrades dramatically. In this study, a strong agile response task scheduling optimization algorithm is proposed based on the peak energy consumption of data centers and the time span of task scheduling, adopting agile response optimization techniques. From the perspective of the task failure rate, the proposed algorithm is used to investigate the strong agile response optimization model, explore the probability density function of task request queue overflow, and bound request timeouts to avoid network congestion. Experimental results indicate that the proposed algorithm achieves stable and efficient task scheduling and effectively improves the throughput of the cloud computing system.
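The queue-overflow reasoning can be illustrated with a small discrete-time simulation. The sketch below is our illustration under assumed Bernoulli arrivals and services, not the paper's analytical model of the overflow distribution; it estimates the probability that a bounded task request queue overflows within a horizon:

import random

def estimate_overflow_prob(arrival_rate, service_rate, capacity,
                           horizon=10_000, trials=200):
    # Monte Carlo estimate of the chance a bounded task queue overflows.
    # Each time slot, a task arrives with prob. arrival_rate and one is
    # serviced with prob. service_rate; an arrival to a full queue overflows.
    overflows = 0
    for _ in range(trials):
        q = 0
        for _ in range(horizon):
            if random.random() < arrival_rate:
                if q >= capacity:
                    overflows += 1
                    break  # this trial overflowed
                q += 1
            if q and random.random() < service_rate:
                q -= 1
    return overflows / trials

# Example: arrivals slightly outpace service, so overflow is likely eventually.
print(estimate_overflow_prob(arrival_rate=0.6, service_rate=0.5, capacity=50))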
- Published
- 2021