Search Results
105 results
2. Enhancing physical layer security with reconfigurable intelligent surfaces and friendly jamming: A secrecy analysis
- Author
- Illi, Elmehdi, Qaraqe, Marwa, El Bouanani, Faissal, and Al-Kuwari, Saif
- Published
- 2024
3. Throughput maximization in multi-slice cooperative NOMA-based system with underlay D2D communications
- Author
- Amer, Asmaa, Hoteit, Sahar, and Othman, Jalel Ben
- Published
- 2024
4. Enabling simulation services for digital twins of 5G/B5G mobile networks
- Author
- Nardini, Giovanni and Stea, Giovanni
- Published
- 2024
5. NeXT: Architecture, prototyping and measurement of a software-defined testing framework for integrated RF network simulation, experimentation and optimization
- Author
- Hu, Jiangqi, Zhao, Zhiyuan, McManus, Maxwell, Moorthy, Sabarish Krishna, Cui, Yuqing, Mastronarde, Nicholas, Bentley, Elizabeth Serena, Medley, Michael, and Guan, Zhangyu
- Published
- 2023
6. RUMP: Resource Usage Multi-Step Prediction in Extreme Edge Computing
- Author
- Kain, Ruslan, Elsayed, Sara A., Chen, Yuanzhu, and Hassanein, Hossam S.
- Published
- 2023
7. A study on 5G performance and fast conditional handover for public transit systems
- Author
- Fiandrino, Claudio, Martínez-Villanueva, David Juárez, and Widmer, Joerg
- Published
- 2023
8. Opportunistic airborne virtual network infrastructure for urban wireless networks
- Author
- Hardes, Tobias and Sommer, Christoph
- Published
- 2023
9. Circular Time Shift Modulation for robust underwater acoustic communications in doubly spread channels
- Author
- Qi, Zhuoran and Pompili, Dario
- Published
- 2023
10. A security-friendly privacy-preserving solution for federated learning
- Author
- Karakoç, Ferhat, Karaçay, Leyli, Çomak De Cnudde, Pinar, Gülen, Utku, Fuladi, Ramin, and Soykan, Elif Ustundag
- Published
- 2023
11. Locality-aware deployment of application microservices for multi-domain fog computing
- Author
- Faticanti, Francescomaria, Savi, Marco, De Pellegrini, Francesco, and Siracusa, Domenico
- Published
- 2023
12. A temporal–spatial analysis on the socioeconomic development of rural villages in Thailand and Vietnam based on satellite image data
- Author
- Wölk, Fabian, Yuan, Tingting, Kis-Katos, Krisztina, and Fu, Xiaoming
- Published
- 2023
13. Asymmetric Differential Routing for low orbit satellite constellations
- Author
- Markovitz, Oren and Segal, Michael
- Published
- 2022
14. Scalable name identifier lookup for Industrial Internet
- Author
- Wang, Yunmin, Huang, Ting, Wei, Guohua, Li, Hui, and Zhang, Huayu
- Published
- 2022
15. A comprehensive review on variants of SARS-CoVs-2: Challenges, solutions and open issues
- Author
- Deepanshi, Ishan Budhiraja, Deepak Garg, Neeraj Kumar, and Rohit Sharma
- Subjects
- Computer Networks and Communications
- Abstract
SARS-CoV-2 is the coronavirus, which emerged in December 2019, that causes the infectious disease COVID-19; the WHO declared the outbreak a pandemic in March 2020. The COVID-19 outbreak has put the world on hold and is a major threat to public health systems. It has hit the world in a number of waves driven by different variants and mutations, each with different transmission and infection rates in the human population, and has affected many social and economic areas. As of 14 September 2022, more than 609 million people had tested positive and more than 6.5 million people had died from the disease. Despite numerous efforts, precautions, and vaccination, the infection has spread rapidly around the world. In this paper, we aim to give a holistic overview of COVID-19: its variants, a game-theoretic perspective, its effects on different social and economic areas, diagnostic advancements, and treatment methods. A taxonomy is provided for proper insight into the work demonstrated in the paper. Finally, we discuss the open issues associated with COVID-19 in different fields and future research trends in the area. The main aim of the paper is to provide comprehensive literature that covers all these areas and an expert understanding of COVID-19 techniques that can be further utilized to combat the outbreak.
- Published
- 2023
16. Discovery privacy threats via device de-anonymization in LoRaWAN
- Author
- Francesca Cuomo, Patrizio Pisani, Giorgio Pillon, Pietro Spadaccino, and Domenico Garlisi
- Subjects
- Information privacy, IoT, De-anonymization, Computer science, Emerging technologies, Computer Networks and Communications, Internet of Things, Device identification, Computer security, LoRa, Security and privacy, Unique identifier, LoRaWAN, Security, Application server, Network packet, Probabilistic logic, Identification (information), Network optimization
- Abstract
LoRaWAN (Long Range WAN) is one of the well-known emerging technologies for the Internet of Things (IoT). Many IoT applications involve simple devices that transmit their data toward network gateways or access points that, in turn, redirect data to application servers. While several security issues have been addressed in the LoRaWAN specification v1.1, there are still some aspects that may undermine the privacy and security of the interconnected IoT devices. In this paper, we tackle a privacy aspect related to LoRaWAN device identity. The proposed approach, by monitoring the network traffic in LoRaWAN, is able to derive, in a probabilistic way, the unique identifier of an IoT device from the temporary address assigned by the network. In other words, the method identifies the relationship between the LoRaWAN DevAddress and the device manufacturer DevEUI. The proposed approach, named DEVIL (DEVice Identification and privacy Leakage), is based on temporal patterns arising in packet transmissions. The paper also presents a detailed study of two real datasets: i) one derived from IoT devices connected to a prominent network operator in Italy; ii) one taken from the literature (the LoED dataset in Bhatia et al. (2020)). DEVIL is evaluated on the first dataset, while the second is analyzed to support the hypotheses underlying DEVIL's operation. The results of our analysis, compared with other literature approaches, show how device identification through DEVIL can expose IoT devices to privacy leakage. Finally, the paper also provides some guidelines to mitigate user re-identification threats.
- Published
- 2022
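The DEVIL abstract above describes probabilistically linking a temporary DevAddress to a device's DevEUI via temporal transmission patterns. As a rough illustration of that idea only (not the paper's actual algorithm; the function names, tolerance, and data shapes are hypothetical), a periodic-reporting device can be re-identified by matching its observed inter-arrival period against known per-device reporting periods:

```python
from statistics import median

def inter_arrival_period(timestamps):
    """Median gap between consecutive uplinks, in seconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return median(gaps)

def link_devaddr_to_deveui(addr_timestamps, known_periods, tolerance=5.0):
    """Link each temporary DevAddress to the DevEUI whose known reporting
    period best matches the observed traffic pattern, if close enough."""
    links = {}
    for addr, ts in addr_timestamps.items():
        period = inter_arrival_period(ts)
        best = min(known_periods, key=lambda eui: abs(known_periods[eui] - period))
        if abs(known_periods[best] - period) <= tolerance:
            links[addr] = best
    return links
```

A sketch like this captures only the simplest case (strictly periodic devices); the paper's probabilistic formulation handles noisier patterns.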
17. Graph-based deep learning for communication networks: A survey
- Author
- Weiwei Jiang
- Subjects
- Networking and Internet Architecture (cs.NI), FOS: Computer and information sciences, Computer Science - Networking and Internet Architecture, Computer Science - Machine Learning, Computer Networks and Communications, Machine Learning (cs.LG)
- Abstract
Communication networks are important infrastructures in contemporary society. Many challenges in this active research area are not fully solved, and new solutions are proposed continuously. In recent years, graph-based deep learning has achieved state-of-the-art performance on a series of problems in communication networks by modeling the network topology. In this survey, we review the rapidly growing body of research using different graph-based deep learning models, e.g., graph convolutional and graph attention networks, on various problems from different types of communication networks, e.g., wireless networks, wired networks, and software-defined networks. We also present a well-organized list of the problems and solutions for each study and identify future research directions. To the best of our knowledge, this paper is the first survey that focuses on the application of graph-based deep learning methods in communication networks covering both wired and wireless scenarios. To track follow-up research, a public GitHub repository has been created, where the relevant papers will be updated continuously. (Accepted by Elsevier Computer Communications; GitHub link: https://github.com/jwwthu/GNN-Communication-Networks)
- Published
- 2022
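The survey above centers on graph convolutional and graph attention networks. A minimal sketch of a single GCN layer's forward pass, H = ReLU(D^-1/2 (A+I) D^-1/2 X W), in pure Python on tiny matrices (illustrative only; not code from the survey):

```python
import math

def gcn_layer(adj, X, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 . X . W)."""
    n = len(adj)
    # Add self-loops so each node keeps its own features
    A = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A]
    # Symmetric degree normalization
    A_norm = [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]

    def matmul(P, Q):
        return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
                 for j in range(len(Q[0]))] for i in range(len(P))]

    H = matmul(matmul(A_norm, X), X and W)  # aggregate neighbors, then project
    return [[max(0.0, v) for v in row] for row in H]
```

Stacking a few such layers (with learned W) is the basic building block the surveyed works apply to routing, traffic prediction, and related network problems.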
18. Risk model of financial supply chain of Internet of Things enterprises: A research based on convolutional neural network
- Author
- Xu Chen and Jingfu Lu
- Subjects
- Finance, Set (abstract data type), Structure (mathematical logic), Tree (data structure), Identification (information), Computer Networks and Communications, Computer science, Process (engineering), Supply chain, Path (graph theory), Convolutional neural network
- Abstract
The emergence of the financial supply chain provides assistance to small, medium, and micro enterprises in the supply chain through a secured credit model based on real trade. However, the multi-level structure of the financial supply chain of Internet of Things enterprises suffers from information barriers and information islands: data is often not transmitted smoothly, the intermediate offline process is complicated, efficiency is low, and verification costs are high. Therefore, based on supply chain finance, an evolutionary risk model is constructed in this paper. Firstly, the income matrix of the regulatory risk model is established; the convolutional neural network applies max pooling to the training data and sets a local response normalization layer. With the help of evolutionary risk theory, the dynamic equation of the financial supply chain is obtained, forming the dynamic path and anomaly model of strategy selection. Then, a compact pattern tree is added to the knowledge-granularity method to mine data anomalies. Finally, an experimental platform is built to verify the effectiveness of the proposed method, and experiments are performed on the accuracy of model evolution conditions, abnormal data identification, and abnormal numerical examples. The experimental results show that the algorithm is consistent with the set parameters and that its effect is significantly better than the comparison methods; the experimental mining time is 6–13 s shorter than that of the comparison methods. The results address the complexity of supply chain finance decision-making and of the supervision and review of supply chain enterprises, improve feature identification on supply chain platforms, and provide reference suggestions for financial institutions and supply chain platforms.
- Published
- 2022
19. Big data-driven scheduling optimization algorithm for Cyber–Physical Systems based on a cloud platform
- Author
- Lizhou Wang and Chao Niu
- Subjects
- Correctness, Computer Networks and Communications, Computer science, Server, Distributed computing, Cyber-physical system, Cloud computing, Load balancing (computing), Directed acyclic graph, Critical path method, Scheduling (computing)
- Abstract
In this paper, we study big data-driven Cyber–Physical Systems (CPS) on cloud platforms and design scheduling optimization algorithms to improve system efficiency. A task scheduling scheme for large-scale factory access under a cloud–edge collaborative computing architecture is proposed. The method first merges the directed acyclic graphs on cloud-side and edge-side servers; second, it divides the tasks using a critical-path-based partitioning strategy to effectively improve allocation accuracy; it then achieves load balancing through reasonable processor allocation; finally, the proposed task scheduling algorithm is compared and analyzed through simulation experiments. The experimental system is thoroughly analyzed, hierarchically designed, modeled, and simulated, and the experimental data are analyzed and compared with related methods. The experimental results prove the effectiveness and correctness of the worst-case execution time analysis method and of the big data-driven CPS idea proposed in this paper, and show that big data knowledge can help improve the accuracy of worst-case execution time analysis. This paper implements a big data-driven scheduling optimization algorithm for Cyber–Physical Systems based on a cloud platform, which improves the accuracy and efficiency of the algorithm by about 15% compared to other related studies.
- Published
- 2022
20. A survey on improving the wireless communication with adaptive antenna selection by intelligent method
- Author
- Chin-Feng Lai and ChienHsiang Wu
- Subjects
- Beamforming, Feature data, Computer Networks and Communications, Wireless network, Computer science, Phased array, Field (computer science), Transmission (telecommunications), Computer architecture, Wireless, Antenna (radio)
- Abstract
Transmission applications in wireless networks have brought unprecedented demands, and the demand for high-performance wireless transmission increases day by day. Antenna technology is an indispensable part of the development of wireless communication. One potential solution is to resort to intelligent learning techniques to help achieve breakthroughs in this constrained technical field: an adaptive antenna driven by intelligent learning lays the foundation for signal-strength adjustment that enhances wireless transmission efficiency. This paper evaluates the most advanced literature and techniques. A comprehensive description from different perspectives covers several adaptive antenna structures, including diversity antennas, phased array antennas, and beamforming-specific learning methods. The paper then divides the work into different categories and analyzes and discusses it from the perspectives of intelligent learning algorithms and feature data. This article aims to help readers understand the latest intelligent technology based on adaptive antennas, and it sheds light on future research directions to meet the development needs of adaptive antennas for future wireless networks.
- Published
- 2022
21. Privacy information verification of homomorphic algorithm for aggregated data based on fog layer structure
- Author
- Xin Li, Zhenmin Qiao, and Ruitao Liu
- Subjects
- Upload, Computer Networks and Communications, Computer science, Node (networking), Ciphertext, Process (computing), Homomorphic encryption, Access control, Encryption, Algorithm, Access structure
- Abstract
The Industrial Internet of Things (IIoT), now occupying a vital position in the IoT interconnection industry, has promoted the conversion of traditional industries into intelligent ones. However, the security and privacy threats in IIoT must be addressed further while reducing communication bandwidth and computing resource consumption. To solve the problem of user privacy leakage caused by the access structure, a fog computing-oriented access-structure hiding scheme is proposed in this paper. The Paillier homomorphic algorithm is introduced into the fog computation: it hides the mapping function in the access structure during the data upload process, achieving a completely hidden access structure. Moreover, the ciphertext is stored separately from the access structure, and the Paillier homomorphic algorithm is run on the fog node during the decryption process to detect whether the attributes of the data user exist in the hidden access structure. If they exist, the mapping function is reconstructed and sent to the data user, who then downloads and decrypts the ciphertext. If they do not exist, the data user does not meet the access conditions of the data, and there is no need to download and decrypt the ciphertext. Meanwhile, a simulation platform is constructed to compare the performance of the proposed method with other similar methods, demonstrating the efficiency and practicability of the scheme.
- Published
- 2022
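The scheme above relies on Paillier's homomorphic property, which lets a fog node combine or check encrypted values without decrypting them. A toy sketch of why that works (tiny demo primes, not a secure parameterization, and not the paper's actual protocol):

```python
import math
import random

# Toy Paillier keypair (demo primes only; real deployments use ~2048-bit moduli).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1                                           # standard simple generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                                # valid because g = n + 1

def encrypt(m: int) -> int:
    """E(m) = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """D(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n
```

The additive homomorphism is the key property: `decrypt(encrypt(a) * encrypt(b) % n2) == (a + b) % n`, so ciphertexts can be combined at the fog layer while plaintexts stay hidden.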
22. Construction of multi-modal perception model of communicative robot in non-structural cyber physical system environment based on optimized BT-SVM model
- Author
- Hui Zeng and Jiaqi Luo
- Subjects
- Binary tree, Computer Networks and Communications, Computer science, Point cloud, Mobile robot, Support vector machine, Robustness (computer science), Robot, Computer vision, Artificial intelligence, Intelligent control, Computer technology
- Abstract
With the rapid development of intelligent control technology, computer technology, bionics, artificial intelligence, and other disciplines, more and more attention has been paid to research on intelligent mobile robot technology, and autonomous positioning is the basis for mobile robots to conduct autonomous navigation and exploration. Sensors assist each other to provide rich perception information about the internal state of the robot and the external surrounding environment. This paper proposes a method for optimizing a Support Vector Machine (SVM) multi-classifier with a binary tree structure, which improves the accuracy of multi-modal tactile signal recognition. An improved particle swarm clustering algorithm is used to optimize the binary tree structure, reducing the error accumulation of the binary-tree-structured SVM multi-classifier and further improving recognition accuracy. The effectiveness of the method is verified by robot grasping experiments. The results show that using multi-modal information from two-dimensional images and three-dimensional point clouds can effectively identify and locate target objects of different shapes. Compared with processing mono-modal two-dimensional or point cloud image information, the positioning error is reduced by 54.8% and the direction error by 50.8%, with better robustness and accuracy. The simulation results show that the improved PSOBT-SVM model has the best classification effect on artificial features, PCA features, and spatio-temporal correlation features; it optimizes classification accuracy without changing the number of SVM classifiers, proving its accuracy in classifying multi-modal tactile signals.
- Published
- 2022
23. A two-tier Blockchain framework to increase protection and autonomy of smart objects in the IoT
- Author
- Antonino Nocera, Luca Virgili, Domenico Ursino, Enrico Corradini, and Serena Nicolazzo
- Subjects
- Correctness, Blockchain, Computer Networks and Communications, Computer science, Smart objects, Computer security model, Computer security, Object (computer science), Everyday life, Autonomy, Reputation
- Abstract
In recent years, the Internet of Things paradigm has become pervasive in everyday life attracting the interest of the research community. Two of the most important challenges to be addressed concern the protection of smart objects and the need to guarantee them a great autonomy. For this purpose, the definition of trust and reputation mechanisms appears crucial. At the same time, several researchers have started to adopt a common distributed ledger, such as a Blockchain, for building advanced solutions in the IoT. However, due to the high dimensionality of this problem, enabling a trust and reputation mechanism by leveraging a Blockchain-based technology could give rise to several performance issues in the IoT. In this paper, we propose a two-tier Blockchain framework to increase the security and autonomy of smart objects in the IoT by implementing a trust-based protection mechanism. In this framework, smart objects are suitably grouped into communities. To reduce the complexity of the solution, the first-tier Blockchain is local and is used only to record probing transactions performed to evaluate the trust of an object in another one of the same community or of a different community. Periodically, after a time window, these transactions are aggregated and the obtained values are stored in the second-tier Blockchain. Specifically, stored values are the reputation of each object inside its community and the trust of each community in the other ones of the framework. In this paper, we describe in detail our framework, its behavior, the security model associated with it and the tests carried out to evaluate its correctness and performance.
- Published
- 2022
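The abstract above describes periodically aggregating first-tier probing transactions into second-tier reputation values. A minimal sketch of that windowed roll-up step (the transaction shape and mean-score aggregation are assumptions for illustration, not the framework's actual format):

```python
from collections import defaultdict

def aggregate_window(probes):
    """Aggregate first-tier probing transactions (rater, target, score) into
    per-object reputation: the mean score each object received in the window.
    The result is what would be stored in the second-tier Blockchain."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for rater, target, score in probes:
        totals[target] += score
        counts[target] += 1
    return {obj: totals[obj] / counts[obj] for obj in totals}
```

Keeping raw probes in a local first-tier ledger and pushing only these aggregates upward is what bounds the load on the shared second-tier chain.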
24. SAAS parallel task scheduling based on cloud service flow load algorithm
- Author
- Jian Zhu, Shi Ying, and Qian Li
- Subjects
- Resource (project management), Job shop scheduling, Computer Networks and Communications, Computer science, Software as a service, Resource management, Cloud computing, Service provider, Algorithm, Task (project management), Scheduling (computing)
- Abstract
In cloud platform applications, the user's goal is to obtain high-quality application services, while the service provider's goal is to obtain revenue by performing the tasks submitted by the user. The platform built from the service provider's application resources needs to improve the mapping between service requests and resources to achieve higher value. In the current state of resource management in cloud environments, many task scheduling and resource allocation algorithms are still affected by factors such as the diversity, dynamics, and multiple constraints of resources and tasks. This paper focuses on task scheduling and resource configuration for Software as a Service (SaaS) applications in a dynamic and uncertain cloud environment. Automatically and intelligently allocating the user task requests that continually reach SaaS applications to appropriate resources for execution is a challenging online scheduling problem. To this end, a real-time task scheduling method based on deep reinforcement learning is proposed. In this way, the limited virtual machine resources rented by SaaS providers can be used in a balanced and efficient manner. In the experiments, comparison with five other task scheduling algorithms shows that the proposed algorithm not only improves the execution efficiency of deploying workflows in the IaaS public cloud, but also ensures that the resources provided by SaaS are used in a balanced and efficient manner.
- Published
- 2022
25. Analysis of physical health risk dynamic evaluation system based on sports network technology
- Author
- Lianzhen Chen and Hua Zhu
- Subjects
- Sustainable development, Evaluation system, Risk analysis (engineering), Computer Networks and Communications, Computer science, Posture recognition, Physical health, Human body, Construct (philosophy), Dynamic feature, Risk assessment
- Abstract
People are the pillar of national construction and the driving force of sustainable social development, and should therefore develop morally, intellectually, and physically in an all-round way. Physical health risk assessment is of great significance to improving the physical health of the public. Based on sports network technology, this paper improves the traditional human posture recognition algorithm and combines machine learning to construct a human body dynamic feature recognition method. Moreover, this paper combines actual needs to construct a physical health risk dynamic evaluation system based on sports network technology, which can recognize sports through pose recognition and, combined with dynamic monitoring technology, track daily diet and living habits. In addition, this paper constructs the network structure of the system and designs its functional modules according to actual needs. Finally, experiments are designed to verify the performance of the constructed model. The research shows that the system meets actual needs and can provide theoretical references for subsequent related research.
- Published
- 2022
26. Mask-RCNN with spatial attention for pedestrian segmentation in cyber–physical systems
- Author
- Zhao Qiu and Lin Yuan
- Subjects
- Artificial neural network, Computer Networks and Communications, Computer science, Feature extraction, Cyber-physical system, Pedestrian, Residual, Machine learning, Segmentation, Artificial intelligence, Transfer of learning, Focus (optics)
- Abstract
With the application of industrial cyber–physical systems in fields such as transportation systems, smart cities, and medical systems, pedestrian scenarios are becoming more and more complex, which makes pedestrian segmentation difficult. The difficulty lies in the complexity of the scene in which the pedestrian is located, including the shooting angle, lighting, and obstructions, which make accurate distinction hard. This paper proposes an S-Mask-RCNN network that integrates a spatial attention mechanism for pedestrian segmentation. Mask-RCNN uses residual neural networks in its feature extraction network, but the resulting feature extraction is not ideal. Based on transfer learning, a spatial attention mechanism is introduced so that the model focuses more on the spatial areas that need attention. Experiments show that the proposed S-Mask-RCNN method performs better than traditional Mask-RCNN in pedestrian segmentation and can also provide more comprehensive and practical information for the construction of cyber–physical systems.
- Published
- 2021
27. Classification of flower image based on attention mechanism and multi-loss attention network
- Author
- Huihui Su, Jinghua Wen, and Mei Zhang
- Subjects
- Channel (digital image), Computer Networks and Communications, Computer science, Pattern recognition, Image (mathematics), Data set, Feature (computer vision), Softmax function, Embedding, Artificial intelligence, Layer (object-oriented design), Realization (systems)
- Abstract
The accurate classification of flower images is a prerequisite for applying artificial intelligence to flower plant management, and automatic machine classification of flowers is a pressing open problem. This paper first introduces the principle of the attention mechanism and the realization of the spatial and channel attention mechanisms, and then designs the embedding of a spatial attention module and a channel attention module into the Xception structure. Finally, the network is optimized by jointly using Triplet Loss and Softmax Loss in the network's loss layer, so as to obtain a feature embedding space with high intra-class compactness and inter-class separation. Experiments on two flower image datasets (Oxford 17 flowers and Oxford 102 flowers) show that the proposed MLSAN, MLCAN, and MLCSAN models improve accuracy by 0.39%, 0.50%, and 0.72% on the Oxford 17 flowers dataset and by 0.52%, 0.63%, and 0.85% on the Oxford 102 flowers dataset.
- Published
- 2021
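The joint objective the abstract describes (Triplet Loss plus Softmax cross-entropy) can be sketched as below; the squared-distance form, the margin, and the weighting `alpha` are illustrative assumptions, not details taken from the paper:

```python
import math

def euclid2(u, v):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a,p) - d(a,n) + margin): pulls same-class embeddings
    together and pushes different-class embeddings apart."""
    return max(0.0, euclid2(anchor, positive) - euclid2(anchor, negative) + margin)

def softmax_ce_loss(logits, label):
    """Cross-entropy of the softmax over classifier logits (log-sum-exp form)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[label]

def joint_loss(anchor, positive, negative, logits, label, alpha=0.5):
    """Weighted sum of the metric-learning and classification terms."""
    return alpha * triplet_loss(anchor, positive, negative) + softmax_ce_loss(logits, label)
```

Combining the two terms is what yields an embedding space that is both discriminative for the classifier and compact within classes.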
28. A security-friendly privacy-preserving solution for federated learning
- Author
- Ferhat Karakoç, Leyli Karaçay, Pinar Çomak De Cnudde, Utku Gülen, Ramin Fuladi, and Elif Ustundag Soykan
- Subjects
- Security attacks, Privacy, Multi-hop communication, Computer Networks and Communications, Federated learning, Poisoning attacks
- Abstract
Federated learning is a privacy-aware collaborative machine learning method where clients collaborate on constructing a global model by performing local model training on their own training data and sending the local model updates to the server. Although it enhances privacy by letting the clients collaborate without sharing their training data, it is still prone to sophisticated privacy attacks because of possible information leakage from the local model updates sent to the server. To prevent such attacks, secure aggregation protocols are generally proposed so that the server can access only the aggregated result, not the individual local model updates. However, such secure aggregation approaches may not allow the execution of security mechanisms against some attacks on model training, such as poisoning and backdoor attacks, because the server cannot access the individual local model updates and therefore cannot analyze them to detect anomalies resulting from these attacks. Thus, federated learning needs solutions that satisfy privacy and security at the same time, or new privacy-preserving solutions that allow the server to execute some analysis of the local model updates without violating privacy. In this paper, we introduce a novel security-friendly privacy solution for federated learning based on multi-hop communication to hide clients' identities. Our solution ensures that the forwardee clients on the path between the source client and the server cannot execute malicious activities by altering model updates or by contributing more than one local model update to the global model construction in one FL round. We then propose two different approaches that also make the solution robust against possible malicious packet-drop behavior by the forwardee clients.
- Published
- 2023
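For contrast with the paper's multi-hop approach, the secure-aggregation baseline the abstract refers to can be sketched with pairwise additive masks that cancel in the server-side sum. This is a toy with scalar updates and a shared seed; real protocols derive per-pair masks from key agreement and handle client dropouts:

```python
import random

def masked_updates(updates, seed=0):
    """Pairwise additive masking: client i adds mask r_ij for each peer j > i
    and subtracts r_ji for each peer j < i. Every mask appears once with '+'
    and once with '-', so the masks cancel in the sum and the server learns
    only the aggregate, never an individual update."""
    rng = random.Random(seed)
    n = len(updates)
    masks = {(i, j): rng.uniform(-1.0, 1.0) for i in range(n) for j in range(i + 1, n)}
    out = []
    for i, u in enumerate(updates):
        m = (sum(masks[(i, j)] for j in range(i + 1, n))
             - sum(masks[(j, i)] for j in range(i)))
        out.append(u + m)
    return out
```

The trade-off the paper targets follows directly: since the server only ever sees masked values, it cannot inspect individual updates for poisoning, which is what the multi-hop identity-hiding design tries to restore.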
29. Content privacy enforcement models in decentralized online social networks: State of play, solutions, limitations, and future directions
- Author
- Andrea De Salve, Paolo Mori, Laura Ricci, and Roberto Di Pietro
- Subjects
- Social and Information Networks (cs.SI), FOS: Computer and information sciences, Computer Networks and Communications, Computer Science - Social and Information Networks
- Abstract
In recent years, Decentralized Online Social Networks (DOSNs) have been attracting the attention of many users because they reduce the risk of censorship, surveillance, and information leakage from the service provider. In contrast to the most popular Online Social Networks, which are based on centralized architectures (e.g., Facebook, Twitter, or Instagram), DOSNs are not based on a single service provider acting as a central authority. Indeed, the contents that are published on DOSNs are stored on the devices made available by their users, which cooperate to execute the tasks needed to provide the service. To continuously guarantee their availability, the contents published by a user could be stored on the devices of other users, simply because they are online when required. Consequently, such contents must be properly protected by the DOSN infrastructure, in order to ensure that they can be really accessed only by users who have the permission of the publishers. As a consequence, DOSNs require efficient solutions for protecting the privacy of the contents published by each user with respect to the other users of the social network. In this paper, we investigate and compare the principal content privacy enforcement models adopted by current DOSNs evaluating their suitability to support different types of privacy policies based on user groups. Such evaluation is carried out by implementing several models and comparing their performance for the typical operations performed on groups, i.e., content publish, user join and leave. Further, we also highlight the limitations of current approaches and show future research directions. This contribution, other than being interesting on its own, provides a blueprint for researchers and practitioners interested in implementing DOSNs, and also highlights a few open research directions.
- Published
- 2023
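Group-based privacy policies of the kind compared above are often enforced with a shared group key: content is encrypted under the key, a join distributes the key to the new member, and a leave triggers key rotation plus re-encryption of stored content. The sketch below is illustrative only; the XOR-with-hash-keystream "cipher" and all names are stand-ins, not any DOSN's actual scheme:

```python
import os, hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Expand a group key into n keystream bytes via counter-mode hashing
    # (illustrative only; a real system would use an authenticated cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, content: bytes) -> bytes:
    # XOR with the hash-derived keystream; XOR is symmetric.
    return bytes(c ^ k for c, k in zip(content, keystream(key, len(content))))

decrypt = encrypt

class Group:
    def __init__(self):
        self.key = os.urandom(32)
        self.members = set()

    def join(self, user):
        # A new member receives the current group key.
        self.members.add(user)

    def leave(self, user):
        # Key rotation: remaining members get a fresh key, and stored
        # content must be re-encrypted under it (the costly operation the
        # survey's leave benchmark measures).
        self.members.discard(user)
        self.key = os.urandom(32)

g = Group()
g.join("alice"); g.join("bob")
post = encrypt(g.key, b"hello group")
assert decrypt(g.key, post) == b"hello group"
old_key = g.key
g.leave("bob")            # bob's departure triggers key rotation
assert g.key != old_key   # bob's cached key no longer opens new content
```

The cost asymmetry is visible even in this toy: publish and join are constant-time, while leave forces re-encryption of everything stored under the old key.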
30. Explainable machine learning for performance anomaly detection and classification in mobile networks
- Author
-
Juan M. Ramírez, Fernando Díez, Pablo Rojo, Vincenzo Mancuso, and Antonio Fernández-Anta
- Subjects
mobile networks ,Computer Networks and Communications ,decision tree classifiers ,data cleaning ,anomaly detection and classification ,explainable machine learning - Abstract
Mobile communication providers continuously collect many parameters, statistics, and key performance indicators (KPIs) with the goal of identifying operation scenarios that can affect the quality of Internet-based services. In this regard, anomaly detection and classification in mobile networks have become challenging tasks due to both the huge number of involved variables and the unknown distributions exhibited by input features. This paper introduces an unsupervised methodology based on both a data-cleaning strategy and explainable machine learning models to detect and classify performance anomalies in mobile networks. Specifically, this methodology, dubbed explainable machine learning for anomaly detection and classification (XMLAD), aims to identify features and operation scenarios characterizing performance anomalies without resorting to parameter tuning. To this end, the approach includes a data-cleaning stage that extracts and removes outliers from experiments and features, so that the anomaly detection engine is trained with the cleanest possible dataset. Moreover, the methodology uses the differences between discretized values of the target KPI and the labels predicted by the anomaly detection engine to build the anomaly classification engine, which identifies features and thresholds that could cause performance anomalies. The proposed methodology incorporates two decision tree classifiers to build explainable models of the anomaly detection and classification engines, whose decision structures recognize features and thresholds describing both normal behaviors and performance anomalies. We evaluate the XMLAD methodology on real datasets captured by operational tests in commercial networks. In addition, we present a testbed that generates synthetic data using a known TCP throughput model to assess the accuracy of the proposed approach.
Spanish State Research Agency; Spanish Ministry of Science and Innovation; Ministry of Economic Affairs and Digital Transformation; European Union NextGenerationEU; Department of Education and Research of the Regional Government of Madrid, through the 2018 R&D technology program for research groups, co-financed by the Operational Programs of the European Social Fund (ESF) and the European Regional Development Fund (ERDF); Nokia Spain.
- Published
- 2023
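The abstract above does not spell out the outlier rule used in XMLAD's data-cleaning stage; Tukey's IQR fences are one standard choice and give a feel for the step:

```python
import statistics

def iqr_clean(values, k=1.5):
    """Drop samples outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule).
    This is a generic stand-in for the paper's cleaning strategy, not
    its exact procedure."""
    qs = statistics.quantiles(values, n=4)   # Q1, median, Q3
    q1, q3 = qs[0], qs[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# Hypothetical KPI samples with two corrupted measurements:
kpi = [10.1, 9.8, 10.4, 10.0, 55.0, 9.9, 10.2, -30.0]
clean = iqr_clean(kpi)   # 55.0 and -30.0 are fenced out
```

Training the detection engine only on `clean` is what the abstract means by "the cleanest possible dataset".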
31. Road crash risk prediction during COVID-19 for flash crowd traffic prevention: The case of Los Angeles
- Author
-
Junbo Wang, Xiusong Yang, Songcan Yu, Qing Yuan, Zhuotao Lian, and Qinglin Yang
- Subjects
Computer Networks and Communications - Abstract
Road crashes are a major problem for traffic safety management, and they often cause flash crowd traffic with a profound influence on traffic management and communication systems. In 2020, the sudden outbreak of the novel coronavirus disease (COVID-19) pandemic led to significant changes in road traffic conditions. In this paper, by analyzing crash data from 2016 to 2020 and new COVID-19 case data in 2020, we find that the average crash severity and crash deaths during this period (a rapid increase of new COVID-19 cases in 2020) are higher than those in the previous four years. Hence, it is necessary to develop a novel road crash risk prediction model for such an emergency. We propose a novel data-adaptive fatigue focal loss (DA-FFL) method that fuses fatigue factors to establish a road crash risk prediction model under the scenario of large-scale emergencies. Finally, the experimental results demonstrate that DA-FFL performs better than the other typical methods in terms of area under curve (AUC) and false alarm rate (FAR) for imbalanced data. Furthermore, DA-FFL achieves better prediction performance when combined with a convolutional neural network–long short-term memory (CNN-LSTM) architecture.
- Published
- 2023
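The fatigue factor in DA-FFL is specific to the paper, but its base, the focal loss, down-weights easy examples so that rare crash cases dominate training on imbalanced data. A minimal binary version, with a hypothetical per-sample `fatigue` weight standing in for the data-adaptive factor:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0, fatigue=1.0):
    """Binary focal loss: -fatigue * alpha_t * (1 - p_t)^gamma * log(p_t).
    `fatigue` is a hypothetical per-sample weight, not the paper's exact
    data-adaptive fatigue factor."""
    p = min(max(p, 1e-7), 1 - 1e-7)          # numerical safety
    pt = p if y == 1 else 1 - p              # prob. assigned to true class
    a = alpha if y == 1 else 1 - alpha
    return -fatigue * a * (1 - pt) ** gamma * math.log(pt)

# Well-classified positives contribute far less than hard ones:
easy = focal_loss(0.95, 1)
hard = focal_loss(0.10, 1)
assert hard > easy
```

The `(1 - p_t)^gamma` modulating term is what distinguishes this from plain weighted cross-entropy.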
32. Cellular Network Capacity and Coverage Enhancement with MDT Data and Deep Reinforcement Learning
- Author
-
Marco Skocaj, Lorenzo M. Amorosa, Giorgio Ghinamo, Giuliano Muratore, Davide Micheli, Flavio Zabini, and Roberto Verdone
- Subjects
Computer Science - Networking and Internet Architecture ,Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer Networks and Communications ,Machine Learning (cs.LG) - Abstract
Recent years witnessed a remarkable increase in the availability of data and computing resources in communication networks. This contributed to the rise of data-driven over model-driven algorithms for network automation. This paper investigates a Minimization of Drive Tests (MDT)-driven Deep Reinforcement Learning (DRL) algorithm to optimize coverage and capacity by tuning antenna tilts on a cluster of cells from TIM's cellular network. We jointly utilize MDT data, electromagnetic simulations, and network Key Performance Indicators (KPIs) to define a simulated network environment for the training of a Deep Q-Network (DQN) agent. Some tweaks have been introduced to the classical DQN formulation to improve the agent's sample efficiency, stability, and performance. In particular, a custom exploration policy is designed to introduce soft constraints at training time. Results show that the proposed algorithm outperforms baseline approaches like DQN and best-first search in terms of long-term reward and sample efficiency. Our results indicate that MDT-driven approaches constitute a valuable tool for autonomous coverage and capacity optimization of mobile radio networks.
- Published
- 2022
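The core of any DQN variant like the one above is the Bellman target plus an exploration policy. The sketch below shows both in miniature; the `allowed`-action restriction is a crude hard-constraint stand-in for the paper's soft-constrained exploration, and all names are illustrative:

```python
import random

def dqn_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target for a DQN update: r + gamma * max_a' Q(s', a')."""
    return reward if done else reward + gamma * max(next_q_values)

def epsilon_greedy(q_values, epsilon, allowed):
    """Epsilon-greedy restricted to `allowed` actions. Masking actions is
    a hard constraint; the paper's custom policy imposes soft constraints
    at training time instead."""
    if random.random() < epsilon:
        return random.choice(allowed)
    return max(allowed, key=lambda a: q_values[a])

# Tiny illustration on a 3-action (e.g. tilt up / hold / tilt down) state:
y = dqn_target(1.0, [0.2, 0.5, 0.1])
assert abs(y - (1.0 + 0.99 * 0.5)) < 1e-9
```

In a full agent, `y` would be the regression target for the Q-network's output at the taken action.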
33. Neural language models for network configuration: Opportunities and reality check
- Author
-
Zied Ben Houidi and Dario Rossi
- Subjects
Computer Science - Networking and Internet Architecture ,Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Computer Networks and Communications - Abstract
Boosted by deep learning, natural language processing (NLP) techniques have recently seen spectacular progress, mainly fueled by breakthroughs both in representation learning with word embeddings (e.g., word2vec) and in novel architectures (e.g., transformers). This success quickly invited researchers to explore the use of NLP techniques in other fields, such as computer programming languages, with the promise of automating tasks in software programming (bug detection, code synthesis, code repair, cross-language translation, etc.). By extension, NLP has potential for application to network configuration languages as well, for instance for tasks such as network configuration verification, synthesis, and cross-vendor translation. In this paper, we survey recent advances in deep learning applied to programming languages for the purpose of code verification, synthesis, and translation: in particular, we review their training requirements and expected performance, and qualitatively assess whether similar techniques can benefit corresponding use cases in networking.
- Published
- 2022
34. Timely and sustainable: Utilising correlation in status updates of battery-powered and energy-harvesting sensors using Deep Reinforcement Learning
- Author
-
Jernej Hribar, Luiz A. DaSilva, Sheng Zhou, Zhiyuan Jiang, and Ivana Dusparic
- Subjects
Deep reinforcement learning ,Energy efficiency ,Energy harvesting ,Computer Networks and Communications ,Age of Information ,Internet of Things ,Deep Deterministic Policy Gradient - Abstract
In a system with energy-constrained sensors, each transmitted observation comes at a price. The price is the energy the sensor expends to obtain and send a new measurement. The system has to ensure that sensors' updates are timely, i.e., their updates represent the observed phenomenon accurately, enabling services to make informed decisions based on the information provided. If there are multiple sensors observing the same physical phenomenon, it is likely that their measurements are correlated in time and space. To take advantage of this correlation to reduce the energy use of sensors, in this paper we consider a system in which a gateway sets the intervals at which each sensor broadcasts its readings. We consider the presence of battery-powered sensors as well as sensors that rely on Energy Harvesting (EH) to replenish their energy. We propose a Deep Reinforcement Learning (DRL)-based scheduling mechanism that learns the appropriate update interval for each sensor, by considering the timeliness of the information collected measured through the Age of Information (AoI) metric, the spatial and temporal correlation between readings, and the energy capabilities of each sensor. We show that our proposed scheduler can achieve near-optimal performance in terms of the expected network lifetime. This work was funded in part by the European Regional Development Fund through the SFI Research Centres Programme under Grant No. 13/RC/2077_P2 (SFI CONNECT) and by the SFI-NSFC Partnership Programme under Grant No. 17/NSFC/5224, and is also supported by the Commonwealth Cyber Initiative at Virginia Tech.
- Published
- 2022
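The Age of Information metric that drives the scheduler above is simple to state: age grows linearly with time and drops to the staleness of the newest received reading. A minimal tracker:

```python
class AoITracker:
    """Tracks Age of Information for one sensor: the age at time `now` is
    how long ago the freshest received reading was generated."""
    def __init__(self):
        self.last_generated = None

    def update(self, generated_at):
        # A new reading arrives, stamped with its generation time.
        self.last_generated = generated_at

    def age(self, now):
        # Infinite age until the first reading is received.
        if self.last_generated is None:
            return float("inf")
        return now - self.last_generated

aoi = AoITracker()
aoi.update(generated_at=10.0)
assert aoi.age(now=12.5) == 2.5
aoi.update(generated_at=12.0)   # a fresh reading lowers the age
assert aoi.age(now=12.5) == 0.5
```

A DRL scheduler of the kind described trades this age against the energy each update costs; correlated neighbors can let one sensor's reading keep another's age acceptably low.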
35. FAM: A frame aggregation based method to infer the load level in IEEE 802.11 networks
- Author
-
Nour El Houda Bouzouita, Anthony Busson, Herve Rivano, Holistic Wireless Networks (hownet), Laboratoire de l'Informatique du Parallélisme (LIP), École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS), AlGorithmes et Optimisation pour Réseaux Autonomes (AGORA), CITI Centre of Innovation in Telecommunications and Integration of services (CITI), Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Inria Lyon, and Institut National de Recherche en Informatique et en Automatique (Inria)
- Subjects
[INFO.INFO-NI]Computer Science [cs]/Networking and Internet Architecture [cs.NI] ,Computer Networks and Communications - Abstract
In many environments, connected devices are exposed to, and must choose between, multiple Wi-Fi networks. However, the procedure for selecting an access point is still based on simple criteria that consider the device to be alone in the network. In particular, the network load is not taken into account even though it is a key parameter for the quality of service and experience. In this paper, we investigate how an unmodified vanilla device could estimate the load of a network in user space with no intervention from the access points. To this end, we propose a novel and practical method, FAM (Frame Aggregation based Method). It leverages the frame aggregation mechanism introduced in recent IEEE 802.11 amendments to estimate the network load through its channel busy time fraction. FAM combines an active probing technique, which measures the actual packet aggregation, with Markovian models that provide the expected rate as a function of the volume and nature of the traffic on the network. We validate the effectiveness of FAM against both ns-3 simulations and test-bed experiments under several scenarios. Results show that FAM is able to infer the network load with a granularity of six load levels for the considered scenarios.
- Published
- 2022
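FAM's Markovian models are not reproduced in the abstract, but the overall shape of the method, mapping a measured mean frame-aggregation size to one of six discrete load levels, can be sketched. The thresholds and the monotone mapping below are purely illustrative, not FAM's calibrated model outputs:

```python
def infer_load_level(avg_aggregation, thresholds=(1.5, 2.5, 4.0, 5.5, 7.0)):
    """Map a measured mean aggregation size (frames per A-MPDU) to one of
    six discrete load levels (0 = idle .. 5 = saturated). Larger aggregates
    here stand for deeper queues on a busier channel; the threshold values
    are invented for illustration, not FAM's Markov-model calibration."""
    level = 0
    for t in thresholds:
        if avg_aggregation >= t:
            level += 1
    return level

assert infer_load_level(1.0) == 0   # almost no aggregation
assert infer_load_level(8.0) == 5   # consistently full aggregates
```

The paper's contribution is precisely in deriving the expected aggregation analytically per traffic mix, rather than fixing thresholds by hand as done here.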
36. NetSentry: A deep learning approach to detecting incipient large-scale network attacks
- Author
-
Haoyu Liu and Paul Patras
- Subjects
Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Computer Science - Networking and Internet Architecture ,Computer Science - Machine Learning ,Computer Science - Cryptography and Security ,Network-based intrusion detection system ,Computer Networks and Communications ,Deep learning ,Feature augmentation ,Cryptography and Security (cs.CR) ,Machine Learning (cs.LG) - Abstract
Machine Learning (ML) techniques are increasingly adopted to tackle ever-evolving high-profile network attacks, including Distributed Denial of Service (DDoS), botnet, and ransomware, due to their unique ability to extract complex patterns hidden in data streams. These approaches are however routinely validated with data collected in the same environment, and their performance degrades when deployed in different network topologies and/or applied on previously unseen traffic, as we uncover. This suggests malicious/benign behaviors are largely learned superficially and ML-based Network Intrusion Detection Systems (NIDS) need revisiting, to be effective in practice. In this paper we dive into the mechanics of large-scale network attacks, with a view to understanding how to use ML for Network Intrusion Detection (NID) in a principled way. We reveal that, although cyberattacks vary significantly in terms of payloads, vectors and targets, their early stages, which are critical to successful attack outcomes, share many similarities and exhibit important temporal correlations. Therefore, we treat NID as a time-sensitive task and propose NetSentry, perhaps the first of its kind NIDS that builds on Bidirectional Asymmetric LSTM (Bi-ALSTM), an original ensemble of sequential neural models, to detect network threats before they spread. We cross-evaluate NetSentry using two practical datasets, training on one and testing on the other, and demonstrate F1 score gains above 33% over the state-of-the-art, as well as up to 3× higher rates of detecting attacks such as Cross-Site Scripting (XSS) and web bruteforce. Further, we put forward a novel data augmentation technique that boosts the generalization abilities of a broad range of supervised deep learning algorithms, leading to average F1 score gains above 35%. Lastly, we shed light on the feasibility of deploying NetSentry in operational networks, demonstrating affordable computational overhead and robustness to evasion attacks.
- Published
- 2022
37. Autonomous flying IoT: A synergy of machine learning, digital elevation, and 3D structure change detection
- Author
-
Faris A. Almalki and Marios C. Angelides
- Subjects
remote sensing ,machine learning ,aerial imaging ,Computer Networks and Communications ,digital elevation model ,unmanned aerial vehicles ,3D structure change detection model ,internet of things - Abstract
Copyright © 2022 The Author(s). The research work presented in this paper has been funded by a national research project whose aims are to enable an Unmanned Aerial Vehicle (UAV) to fly autonomously with the use of a Digital Elevation Model (DEM) of the target area and to detect terrain changes with the use of a 3D Structure Change Detection Model (3D SCDM). A Convolutional Neural Network (CNN) works with both models in training the UAV in autonomous flying and in detecting terrain changes. The usability of such an autonomous flying IoT is demonstrated through its deployment in the search for water resources in areas where a satellite would not normally be able to retrieve images, e.g., inside gorges, ravines, or caves. Our experiment results show that it can detect water flows by considering different surface shapes such as standing water polygons, watersheds, water channel incisions, and watershed delineations with a 99.6% level of accuracy. This work was supported by Taif University through the research project TURSP-2020/265.
- Published
- 2022
38. Intelligent deep fusion network for urban traffic flow anomaly identification
- Author
-
Youcef Djenouri, Asma Belhadi, Hsing-Chung Chen, and Jerry Chun-Wei Lin
- Subjects
Decomposition ,Computer Networks and Communications ,Urban traffic flow data ,Convolution neural network - Abstract
This paper presents a novel deep learning architecture for identifying outliers in the context of intelligent transportation systems. The use of a convolutional neural network with an efficient decomposition strategy is explored to find anomalous behavior in urban traffic flow data. The urban traffic flow data set is decomposed into similar clusters, each containing homogeneous data. A convolutional neural network is trained for each data cluster. In this way, different models are trained, each learned from highly correlated data. A merging strategy is finally used to fuse the results of the obtained models. To validate the performance of the proposed framework, intensive experiments were conducted on urban traffic flow data. The results show that our system outperforms competing approaches on several accuracy criteria.
- Published
- 2022
39. Time synchronization enhancements in wireless networks with ultra wide band communications
- Author
-
Juan J. Pérez-Solano, Santiago Felici-Castell, Antonio Soriano-Asensi, and Jaume Segura-Garcia
- Subjects
IEEE 802.15.4 ,Computer Networks and Communications ,linear regression ,ultra wide band ,UNESCO::CIENCIAS TECNOLÓGICAS ,time synchronization ,wireless sensor networks - Abstract
The emergence of low-cost Ultra Wide Band (UWB) transceivers has enabled the implementation of Wireless Sensor Networks (WSN) based on this communication technology. These networks are composed of distributed autonomous low-cost nodes (also known as motes) with their own processing unit, memory, and communications. Usually these nodes are power-limited and, due to the poor performance and quality of their clocks, time synchronization accuracy is on the order of milliseconds, reaching microseconds only in specific scenarios. The integration of commercial UWB transceivers in these nodes can improve the synchronization accuracy. In particular, we focus on WSN nodes based on off-the-shelf commercial products and commodity hardware. In this paper we analyze step by step, from a practical and experimental point of view, the different elements involved in time synchronization using UWB technology in WSNs with static nodes. From our experimental results, with timestamps captured during the packet exchanges, we analyze and discuss the application of different communication schemes and simple statistical methods (chosen so they can run on WSN nodes). The results obtained with timestamps captured at the UWB transceivers and by using linear regression show that the lowest time synchronization error achieved between two nodes is 0.14 ns. Employing the same setup and performing the synchronization with timestamps captured internally at the microcontrollers of the nodes, the error rises to 31 ns, due to the longer period of the microcontrollers' timers and the inaccuracies that affect the acquisition of the timestamps. Nevertheless, synchronizing the microcontrollers' clocks allows setting up a common time reference across the network nodes, enabling applications with tight synchronization requirements, such as collaborative beamforming and ranging.
- Published
- 2022
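The linear-regression step mentioned above fits one node's clock against the other's to estimate skew and offset from exchanged timestamps. An ordinary least-squares version in pure Python, on synthetic timestamp pairs (the values below are invented, not the paper's measurements):

```python
def fit_clock(pairs):
    """Least-squares fit of slave vs. master timestamps:
    slave ~= skew * master + offset. Plain OLS, light enough for a mote."""
    n = len(pairs)
    sx = sum(m for m, _ in pairs)
    sy = sum(s for _, s in pairs)
    sxx = sum(m * m for m, _ in pairs)
    sxy = sum(m * s for m, s in pairs)
    skew = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - skew * sx) / n
    return skew, offset

# Synthetic exchange: slave clock runs 20 ppm fast with a 5 us offset.
pairs = [(t, 1.00002 * t + 5e-6) for t in (0.0, 0.1, 0.2, 0.3, 0.4)]
skew, offset = fit_clock(pairs)
assert abs(skew - 1.00002) < 1e-8
assert abs(offset - 5e-6) < 1e-8
```

With `skew` and `offset` in hand, the slave can translate any local timestamp into the master's timescale; the residual error then depends on how precisely the timestamps were captured, which is exactly the transceiver-vs-microcontroller gap the paper quantifies.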
40. Computationally efficient topology optimization of scale-free IoT networks
- Author
-
Muhammad Awais Khan and Nadeem Javaid
- Subjects
Computer Networks and Communications ,0805 Distributed Computing, 0906 Electrical and Electronic Engineering, 1005 Communications Technologies ,Networking & Telecommunications - Abstract
Malicious attacks in scale-free Internet of Things (IoT) networks create a serious threat to the functionality of nodes. During malicious attacks, the removal of high-degree nodes greatly affects the connectivity of the remaining nodes in the network. Therefore, ensuring maximum connectivity among the nodes is an important part of topology optimization. A good scale-free network has the ability to maintain the functionality of its nodes even if some of them are removed from the network. Thus, designing a robust network that supports the nodes' functionality is the aim of topology optimization in scale-free networks. Moreover, the computational complexity of an optimization process increases the cost of the network. Therefore, in this paper, the main objective is to reduce the computational cost of the network while constructing a robust network topology. To this end, four solutions are presented. First, a Smart Edge Swap Mechanism (SESM) is proposed to overcome the excessive randomness of the standard Random Edge Swap Mechanism (RESM). Second, a threshold-based node removal method is introduced to reduce the operation of the edge swap mechanism once the objective function converges. Third, multiple attacks are performed on the network to find the correlation between the measures, namely degree, betweenness, and closeness centralities. Fourth, based on the third solution, a Heat Map Centrality (HMC) is used to find the set of most important nodes in the network. The HMC damages the network by utilizing the information of two positively correlated measures, which provides a good attack strategy for robust optimization. The simulation results demonstrate the efficacy of the proposed SESM mechanism: it outperforms the existing RESM mechanism with almost 4% better network robustness and 10% fewer swaps. Moreover, removing 64% of nodes helps to reduce the computational cost of the network.
- Published
- 2022
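The RESM baseline that SESM improves upon is the classic degree-preserving double edge swap: pick two edges and rewire their endpoints, keeping every node's degree intact. A minimal version of that baseline (SESM would instead score candidate swaps against a robustness objective before accepting them):

```python
import random

def random_edge_swap(edges, tries=100, rng=random):
    """One degree-preserving double edge swap: pick edges (a,b) and (c,d),
    rewire to (a,d) and (c,b) if this creates no self-loop or duplicate
    edge. This is the 'excessively random' RESM-style step."""
    edge_set = {frozenset(e) for e in edges}
    for _ in range(tries):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: swap would create a self-loop
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in edge_set or new2 in edge_set:
            continue  # would duplicate an existing edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {new1, new2}
        return [tuple(e) for e in edge_set]
    return edges  # no valid swap found within the try budget

edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
swapped = random_edge_swap(edges, rng=random.Random(7))
assert len(swapped) == len(edges)  # edge count (hence degrees) preserved
```

Because degrees never change, the scale-free degree distribution survives every swap; only the wiring, and hence the robustness, moves.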
41. A Novel Edge Computing Architecture Based on Adaptive Stratified Sampling
- Author
-
Hao-ran Yan, Jia-xu Wang, Peng Yang, Chen-hao Ni, Ting Zhang, De-gan Zhang, and Jie Zhang
- Subjects
Data stream ,Data processing ,Computer Networks and Communications ,Computer science ,Data stream mining ,Sampling (statistics) ,Data mining ,Simple random sample ,Synthetic data ,Edge computing ,Stratified sampling - Abstract
With the development of Internet of Things (IoT) technology, the amount of data generated by IoT systems is increasing, and these data are continuously transmitted to data centers. Data processing and analysis in traditional IoT systems are inefficient and cannot handle such a large number of data streams. In addition, IoT smart devices are resource-constrained, a limitation that cannot be ignored when analyzing data. This paper proposes ApproxECIoT (Approximate Edge Computing Internet of Things), a new architecture suitable for real-time data stream processing in the Internet of Things. It implements a self-adjusting stratified sampling algorithm to process real-time data streams. The algorithm adjusts the size of the sample strata according to the variance of each stratum while maintaining the given memory budget. This is beneficial for improving the accuracy of the calculation results when resources are limited. Finally, experimental analysis was performed using synthetic datasets and real-world datasets; the results show that ApproxECIoT can still obtain high-accuracy calculation results when using memory resources similar to simple random sampling. In the case of synthetic data streams, when the sampling ratio is 10%, the accuracy loss of ApproxECIoT is reduced by 89.6% compared with CalculIoT, and by 99.8% compared with SRS. In the case of the real data stream of a wireless sensor network, the performance of ApproxECIoT is not the best, but as the sampling ratio increases, the accuracy loss of ApproxECIoT decreases more than that of other frameworks.
- Published
- 2022
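Variance-driven stratum sizing as described above is close in spirit to Neyman allocation, which gives more of a fixed sample budget to strata with higher `N_h * sigma_h`. A compact static sketch (the paper's self-adjusting algorithm works on live streams under a memory budget, which this does not capture):

```python
import statistics

def neyman_allocation(strata, budget):
    """Allocate a fixed sample budget across strata proportionally to
    N_h * sigma_h (Neyman allocation): high-variance strata get more
    samples, mirroring ApproxECIoT's variance-driven stratum resizing."""
    weights = [len(s) * statistics.pstdev(s) for s in strata]
    total = sum(weights) or 1.0
    return [max(1, round(budget * w / total)) for w in weights]

low_var = [10.0, 10.1, 9.9, 10.0] * 5   # stable sensor stratum
high_var = [0.0, 50.0, 5.0, 80.0] * 5   # bursty sensor stratum
alloc = neyman_allocation([low_var, high_var], budget=20)
assert alloc[1] > alloc[0]  # the bursty stratum receives more budget
```

Under the same total memory, this beats simple random sampling precisely because stable strata need very few samples to be estimated well.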
42. A resource allocation deep active learning based on load balancer for network intrusion detection in SDN sensors
- Author
-
Usman Ahmed, Jerry Chun-Wei Lin, and Gautam Srivastava
- Subjects
Computer Networks and Communications ,network performance ,security ,software-defined networking (SDN) ,intelligent load balancing ,VDP::Matematikk og Naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420 ,autonomous - Abstract
Dynamic traffic in a software-defined network (SDN) causes explosive data to flow from one system to another. This explosive data affects the functionality of system parameters, network-level configuration, routing parameters, network characteristics, and system load factors. Adapting to the traffic flow is a key research area in SDN in today's big data world. Load-balanced vehicular sensor accessibility reduces delays, lowers energy consumption, and decreases execution time. This paper employs an entropy-based active learning model, operating at the packet level, to identify intrusion patterns efficiently. The developed load balancing model can track attacks on the network. We then propose a load balancing algorithm that optimizes vehicular sensor usability by using sensor computing capability and source needs. We make use of a convergence-based mechanism to achieve high resource utilization. We then perform experiments on a state-of-the-art intrusion detection dataset. Our experimental results show that the load balancing mechanism can achieve a 2x performance improvement compared to traditional approaches. Thus, the designed model can help improve the decision boundary by increasing the training instances through a pooling strategy and an entropy uncertainty measure.
- Published
- 2022
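Entropy-based active learning, as used above, asks for labels on the instances the model is least sure about. A minimal uncertainty-sampling step (flow names and probabilities are invented for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(pool, k):
    """Pick the k pool items whose predicted class distribution has the
    highest entropy, i.e. where the detector is least certain."""
    return sorted(pool, key=lambda item: entropy(item[1]), reverse=True)[:k]

# (flow_id, [P(benign), P(attack)]) from a hypothetical detector:
pool = [
    ("flow-a", [0.98, 0.02]),   # confidently benign
    ("flow-b", [0.51, 0.49]),   # near the decision boundary
    ("flow-c", [0.90, 0.10]),
]
picked = select_for_labeling(pool, k=1)
assert picked[0][0] == "flow-b"
```

Labeling boundary cases like `flow-b` is what "improving the decision boundary through a pooling strategy and entropy uncertainty measure" amounts to.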
43. Towards a fast and stable filter for RSSI-based handoff algorithms in dense indoor WLANs
- Author
-
Helga D. Balbi, Juan Lucas Vieira, Diego Passos, Luiz Magalhaes, Célio Albuquerque, and Ricardo C. Carrano
- Subjects
Handover ,Computer Networks and Communications ,Computer science ,Wireless network ,Sliding window protocol ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Real-time computing ,Testbed ,Filter (signal processing) ,Function (mathematics) ,Performance metric ,Stability (probability) - Abstract
In dense indoor wireless networks, handoffs occur frequently. The criteria to trigger handoffs are not defined by the IEEE 802.11 standard, and are thus specific to each manufacturer's implementation. Current handoff implementations typically use the RSSI (Received Signal Strength Indicator) as a performance metric and commonly cause association instability in dense environments, a well-known problem referred to as the ping-pong effect. In this paper, we present a deep analysis of RSSI traces collected in dense indoor environments using the FIBRE testbed. Based on that, we conclude that the RSSI time series exhibits deep, fast fades that frequently occur in short bursts, which can cause ping-pongs. Motivated by this behavior, we propose a new and simple filtering mechanism called Maximum, which aims to eliminate these valleys in the RSSI time series. In a nutshell, this filter chooses the maximum RSSI value from a sliding window containing the last few RSSI samples of the series. We conduct simulations based on real RSSI traces from static and mobile scenarios to evaluate Maximum with respect to other filtering mechanisms found in the literature. Additionally, we present a simplified model of the behavior of Maximum that allows us to study the probability of unwanted handoffs as a function of the RSSI of the available access points. Our analysis reveals that Maximum is able to offer a better tradeoff between handoff triggering delay and stability in mobile scenarios, while also performing well in static scenarios, effectively avoiding the occurrence of ping-pongs in most cases.
- Published
- 2022
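The Maximum filter itself is essentially a one-liner over a sliding window, which is exactly its appeal on resource-limited clients (window size and the RSSI trace below are illustrative):

```python
from collections import deque

class MaximumFilter:
    """Sliding-window maximum over the last w RSSI samples, smoothing out
    the short deep fades that trigger ping-pong handoffs."""
    def __init__(self, window=3):
        self.samples = deque(maxlen=window)

    def push(self, rssi):
        self.samples.append(rssi)
        return max(self.samples)

f = MaximumFilter(window=3)
trace = [-60, -61, -85, -88, -62]   # a two-sample deep fade in dBm
filtered = [f.push(r) for r in trace]
assert filtered == [-60, -60, -60, -61, -62]
```

The fade at samples 3-4 never reaches the handoff logic, because a burst shorter than the window cannot drag the window maximum down, while a genuine sustained drop still comes through after `window` samples.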
44. RAST: Rapid and energy-efficient network formation in TSCH-based Industrial Internet of Things
- Author
-
Mohamed Mohamadi, Badis Djamaa, and Mustapha Reda Senouci
- Subjects
Computer Networks and Communications ,Duty cycle ,business.industry ,Computer science ,Reliability (computer networking) ,Node (networking) ,Latency (audio) ,Energy consumption ,business ,Efficient energy use ,Network formation ,Computer network ,Communication channel - Abstract
The Time Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4 standard is expected to revolutionize the Industrial Internet of Things. Indeed, it can achieve high reliability and deterministic latency with a very low duty cycle. Nevertheless, forming a TSCH network with the standard approach might not be as efficient, which constitutes one of TSCH's major issues. The standard network formation process relies on nodes passively scanning for advertised Enhanced Beacon (EB) frames to join the network. As a result, a node wishing to join a TSCH network may stay awake randomly scanning for EBs for a considerable period of time, leading to a lengthy formation process with excessive energy consumption. To deal with these issues, this paper presents a practical and effective Radio duty-cycled, Active-Scan based network formation process for TSCH networks (RAST). Our proposal leans on active-scan procedures combined with radio duty cycling mechanisms to shorten joining delays and reduce energy consumption. Results obtained from extensive and realistic simulations show that our solution is efficient and outperforms state-of-the-art solutions, reducing association time and energy consumption by up to two orders of magnitude.
- Published
- 2022
45. Deep reinforcement learning based transmission policy enforcement and multi-hop routing in QoS aware LoRa IoT networks
- Author
-
Stephen Lynch, Ahsan Rafiq, Mohammad Hammoudeh, Mohammed Saleh Ali Muthanna, Ahmed A. Abd El-Latif, Ammar Muthanna, and Reem Alkanhel
- Subjects
Computer Networks and Communications ,Computer science ,Network packet ,business.industry ,Quality of service ,Node (networking) ,Header ,Frame (networking) ,Throughput ,Code rate ,business ,Network topology ,Computer network - Abstract
LoRa wireless connectivity has become a de facto technology for intelligent critical infrastructures such as transport systems. Achieving high Quality of Service (QoS) in cooperative systems remains a challenging task in LoRa. However, high QoS can be achieved by optimizing the transmission policy parameters such as spreading factor, bandwidth, code rate and carrier frequency. Yet existing approaches have not optimized the complete set of LoRa parameters. Furthermore, the star-of-stars topology used by LoRa causes higher energy consumption and a low packet reception ratio. Motivated by this, this paper presents transmission policy enforcement and multi-hop routing for QoS-aware LoRa networks (MQ-LoRa). A hybrid cluster root rotated tree topology is constructed in which gateways follow a tree topology and Internet of Things (IoT) nodes follow a cluster topology. A membrane-inspired clustering algorithm, modeled on the way cell tissues form clusters to share information, is developed to form clusters, and an optimal header node is selected using the influence score. Data QoS ranking is implemented for IoT nodes, where priority and non-priority information is identified by a new field of the LoRa frame structure (QRank). Optimal transmission policy enforcement uses a fast deep reinforcement learning method, Soft Actor Critic (SAC), that utilizes environmental parameters including the QRank, signal quality and signal-to-interference-plus-noise ratio. The transmission policy is optimized with respect to the spreading factor, code rate, bandwidth and carrier frequency. Then, a concurrent optimization multi-hop routing algorithm is proposed that uses mayfly and shuffled shepherd optimization to rank routes based on fitness criteria. Finally, a weighted duty cycle is implemented using a multi-weighted sum model to reduce resource wastage and information loss in LoRa IoT networks. Performance evaluation is implemented using the NS-3.26 LoRaWAN module.
The performance is examined for various metrics such as packet reception ratio, packet rejection ratio, energy consumption, delay and throughput. Experimental results show that the proposed MQ-LoRa outperforms well-known LoRa methods.
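The transmission policy described above selects a joint configuration of spreading factor, bandwidth, code rate and carrier frequency. A minimal sketch of such a discrete policy space is shown below; the parameter values and the flat action-index decoding are illustrative assumptions (the paper's SAC agent would emit such an index), while the symbol-time formula T_sym = 2^SF / BW is standard LoRa:

```python
from itertools import product

# Hypothetical discrete action space for LoRa transmission-policy selection.
# Parameter candidate lists are illustrative (EU868-style values), not taken
# from the paper.
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]
BANDWIDTHS_KHZ = [125, 250, 500]
CODE_RATES = ["4/5", "4/6", "4/7", "4/8"]
CARRIER_FREQS_MHZ = [868.1, 868.3, 868.5]

ACTIONS = list(product(SPREADING_FACTORS, BANDWIDTHS_KHZ, CODE_RATES, CARRIER_FREQS_MHZ))

def symbol_time_ms(sf: int, bw_khz: int) -> float:
    """Standard LoRa symbol duration: T_sym = 2^SF / BW (ms when BW is in kHz)."""
    return (2 ** sf) / bw_khz

def action_to_policy(index: int) -> dict:
    """Decode a flat action index (as a discrete policy head would emit)
    into a full transmission-parameter configuration."""
    sf, bw, cr, freq = ACTIONS[index]
    return {"sf": sf, "bw_khz": bw, "code_rate": cr, "freq_mhz": freq,
            "t_sym_ms": symbol_time_ms(sf, bw)}
```

Higher spreading factors extend range but grow airtime exponentially, which is why the four parameters must be optimized jointly rather than in isolation.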
- Published
- 2022
46. RAN energy efficiency and failure rate through ANN traffic predictions processing
- Author
-
Daniela Renga, Marco Ajmone Marsan, Michela Meo, and Greta Vallero
- Subjects
Base station lifetime ,Radio access network ,Artificial neural network ,Computer Networks and Communications ,Computer science ,Real-time computing ,Failure rate ,Energy consumption ,Neural network ,Traffic prediction ,Base station ,Energy efficiency ,Resource management ,Radio access network, Base station, Energy efficiency, Traffic prediction, Neural network, Base station lifetime ,Energy (signal processing) ,Efficient energy use - Abstract
In this paper, we focus on the application of ML tools to resource management in a portion of a Radio Access Network (RAN) and, in particular, to Base Station (BS) activation and deactivation, aiming at reducing energy consumption while providing enough capacity to satisfy the variable traffic demand generated by end users. In order to properly decide on BS (de)activation, traffic predictions are needed, and Artificial Neural Networks (ANN) are used for this purpose. Since critical BS (de)activation decisions are not taken in proximity of minima and maxima of the traffic patterns, high accuracy in the traffic estimation is not required at those times, but only close to the times when a decision is taken. This calls for careful processing of the ANN traffic predictions to increase the probability of a correct decision. Numerical performance results in terms of energy saving and traffic lost due to incorrect BS deactivations are obtained by simulating algorithms for traffic prediction processing, using real traffic as input. Results suggest that good performance trade-offs can be achieved even in the presence of non-negligible traffic prediction errors, if these forecasts are properly processed. The impact of forecast processing for dynamic resource allocation on the BS failure rate is also investigated. Results reveal that conservative approaches are more effective at preventing BS hardware failures. Nevertheless, the deployment of newer devices, designed for fast dynamic networks, allows the adoption of approaches that frequently activate and deactivate BSs, thus achieving higher energy savings.
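The conservative forecast processing described above can be sketched as inflating the predicted load by a safety margin before sizing the active BS set, so that underestimation errors do not strand traffic. This is a minimal illustration under assumed names and a uniform-capacity assumption, not the paper's actual algorithm:

```python
import math

def bs_decisions(predicted_load, capacity_per_bs, margin=0.1):
    """For each predicted traffic sample, decide how many BSs to keep active.

    The margin inflates the forecast so that ANN underestimation errors do
    not cause a capacity shortfall (lost traffic); a larger margin trades
    energy savings for robustness. At least one BS always stays on.
    """
    return [max(1, math.ceil(load * (1 + margin) / capacity_per_bs))
            for load in predicted_load]
```

For example, with unit BS capacity and a 10% margin, a predicted load of 4.5 keeps 5 BSs active rather than the bare minimum.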
- Published
- 2022
47. Protocol Reverse-Engineering Methods and Tools: A Survey
- Author
-
Yuyao Huang, Hui Shu, Fei Kang, and Guang Yan
- Subjects
Reverse engineering ,Computer Networks and Communications ,business.industry ,Computer science ,Process (engineering) ,Crossover ,computer.software_genre ,Data science ,Network management ,business ,Software analysis pattern ,Communications protocol ,Protocol (object-oriented programming) ,computer ,TRACE (psycholinguistics) - Abstract
The widespread utilization of network protocols raises many security and privacy concerns. To address them, protocol reverse-engineering (PRE) has been broadly applied in diverse domains, such as network management, security validation, and software analysis, by mining protocol specifications. This paper surveys the existing PRE methods and tools, which are based on network traces (NetT) or execution traces (ExeT), according to their feature representation. A feature-based protocol classification is proposed for the first time in the literature to describe and compare different tools more clearly from a new perspective and to inspire crossover approaches in future work. We analyze the rationale, genealogy, contributions, and properties of 74 representative PRE methods/tools developed since 2004. In addition, we extend the general process of PRE from a feature perspective and provide a detailed evaluation of the well-known methods/tools. Finally, we highlight the open issues and future research directions.
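A tiny example of the network-trace (NetT) flavor of PRE: byte offsets whose value never varies across captured messages are candidate fixed header fields (magic numbers, version bytes). This sketch is an assumption-laden toy, far simpler than the sequence-alignment methods the survey covers:

```python
def constant_offsets(messages):
    """Return byte offsets (within the shortest message) whose value is
    identical in every captured message -- candidate fixed protocol fields.

    Real NetT tools use alignment and statistical tests; this illustrates
    only the simplest field-inference heuristic.
    """
    min_len = min(len(m) for m in messages)
    return [i for i in range(min_len)
            if all(m[i] == messages[0][i] for m in messages)]
```

Offsets that vary per message would instead be examined as candidate length, sequence-number, or payload fields.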
- Published
- 2022
48. Optimal caching scheme in D2D networks with multiple robot helpers
- Author
-
Yu Lin, Faming Cai, Feng Ke, Zhikai Liu, Hui Song, and Weizhao Yan
- Subjects
Scheme (programming language) ,Computer Networks and Communications ,CPU cache ,Wireless network ,Computer science ,Software deployment ,Distributed computing ,Robot ,Particle swarm optimization ,Mobile robot ,Energy consumption ,computer ,computer.programming_language - Abstract
Mobile robots are playing an important role in modern industries. The deployment of robots acting as mobile helpers in a wireless network is rarely considered in existing studies of device-to-device (D2D) caching schemes. In this paper, we investigate the optimal caching scheme for D2D networks with multiple robot helpers with large cache sizes. An improved caching scheme named robot helper aided caching (RHAC) is proposed to optimize system performance by moving the robot helpers to the optimal positions. The optimal locations of the robot helpers are found using a partitioned adaptive particle swarm optimization (PAPSO) algorithm. Based on these two algorithms, we propose a mobility-aware optimization strategy for the robot helpers. The simulation results demonstrate that, compared with other conventional caching schemes, the proposed RHAC scheme brings significant performance improvements in terms of hit probability, cost, delay and energy consumption. Furthermore, the location distribution and mobility of the robot helpers are studied, which provides a reference for introducing robot helpers into different scenarios such as smart factories.
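The core of the placement step is standard particle swarm optimization. The sketch below places a single helper to minimize mean squared distance to user positions (whose optimum is the user centroid); it omits the partitioning and adaptivity that distinguish PAPSO, and all names, bounds and coefficients are illustrative assumptions:

```python
import random

def pso_place_helper(users, iters=200, swarm=20, seed=0):
    """Toy PSO placing one helper in a 100x100 area to minimize the mean
    squared distance to user positions. Standard inertia/cognitive/social
    update; not the paper's PAPSO (no partitioning, no adaptive inertia)."""
    rng = random.Random(seed)

    def cost(p):
        return sum((p[0] - u[0])**2 + (p[1] - u[1])**2 for u in users) / len(users)

    pos = [[rng.uniform(0, 100), rng.uniform(0, 100)] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]            # per-particle best positions
    gbest = min(pbest, key=cost)[:]        # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest
```

With users at the corners of a square, the swarm converges to the center; a cache-aware objective would replace `cost` with expected hit probability or delivery delay.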
- Published
- 2022
49. Buffer-loss estimation to address congestion in 6LoWPAN based resource-restricted ‘Internet of Healthcare Things’ network
- Author
-
Narottam Chand, Lalit Kumar Awasthi, Himanshu Verma, and Naveen Chauhan
- Subjects
Protocol stack ,Queueing theory ,Channel capacity ,Computer Networks and Communications ,business.industry ,Computer science ,Network packet ,Node (networking) ,business ,6LoWPAN ,Queue ,Wireless sensor network ,Computer network - Abstract
The Internet of Healthcare Things (IoHT) consists of a wide variety of resource-restricted, heterogeneous, IoT-enabled, wearable/non-wearable medical equipment (things) that connect over the internet to transform traditional healthcare into a smart, connected, proactive, patient-centric healthcare system. The pivotal functions of the 6LoWPAN protocol stack enable comprehensive integration of such networks from wearable wireless sensor networks (W-WSN) into IoHT, as TCP/IP does not satisfy the requirements of IoHT networks. As a result, congestion in the IoHT network increases with a growing number of devices, resulting in loss of critical medical information due to buffer loss and channel loss, which is unacceptable. In this paper, we explored different applications of patient-centric IoHT architectures to draw a realistic resource-limited topological layout of IoHT for congestion estimation. After critically reviewing existing congestion schemes for 6LoWPANs, we propose an effective buffer-loss estimation model based on queuing theory to determine the number of packets lost at a node's buffer. The buffer is modeled as an M/M/1/K Markov chain queue. The M/M/1/K queue equilibrium equations are used to establish a relationship between the probabilities of the buffer being empty and completely full. We derive expressions for the total buffer-loss probability and the expected mean packet delay for the resource-constrained IoHT network. Furthermore, to validate the buffer-loss estimation, an analytical model is used to compare buffer-loss probabilities, the number of packets dropped at leaf/intermediate nodes and the number of packets successfully received at the local sink node. The results show a close correlation between the two models over varying numbers of leaf nodes, buffer sizes, offered packet loads and available channel capacities. Thus, in resource-restrictive IoHT, the proposed model performs better than two related works.
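The buffer-loss quantity at the heart of this model has a standard closed form: for an M/M/1/K queue with arrival rate λ, service rate μ and utilization ρ = λ/μ, the stationary probability that an arriving packet finds the K-slot buffer full is P_K = (1 − ρ)ρ^K / (1 − ρ^(K+1)) for ρ ≠ 1, and 1/(K+1) when ρ = 1. A direct sketch (textbook formula, not the paper's extended derivation):

```python
def mm1k_blocking(lmbda, mu, K):
    """Blocking (buffer-loss) probability of an M/M/1/K queue.

    P_K = (1 - rho) * rho**K / (1 - rho**(K + 1))  for rho != 1
        = 1 / (K + 1)                              for rho == 1
    where rho = lmbda / mu and K is the total system capacity.
    """
    rho = lmbda / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))
```

At ρ = 0.5 and K = 2 the loss probability is 1/7, which is why even small buffers absorb most losses at moderate load, while at ρ = 1 losses degrade only linearly in K.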
- Published
- 2022
50. How does the traffic behavior change by using SUMO traffic generation tools
- Author
-
Pablo Barbecho Bautista, Mónica Aguilar Igartua, and Luis Urquiza Aguiar
- Subjects
Mobility model ,Computer Networks and Communications ,Computer science ,Wireless ad hoc network ,business.industry ,Node (networking) ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Context (language use) ,Mobile ad hoc network ,Traffic intensity ,Graph (abstract data type) ,business ,Traffic generation model ,Computer network - Abstract
Simulations are the traditional approach used by the research community to evaluate mobile ad hoc networks. Vehicular ad hoc networks (VANETs) are a particular type of mobile ad hoc network that raises specific technical challenges. When assessing VANETs, it is crucial to use realistic mobility models and traffic demand to produce meaningful results. In this context, vehicular traces affect vehicles' signal strengths, radio interference, and channel occupancy. This paper provides a thorough analysis of the influence of using SUMO's different traffic demand generation tools on mobility and node connectivity. Using traffic data from the district of Gracia in Barcelona (Spain), we analyze the generated traffic demand in terms of traffic measures: (i) traffic intensity, (ii) trip time/distance, (iii) emissions, and (iv) re-routing capabilities. This last feature allows vehicles to re-compute their routes when they encounter congestion. Then, we analyze the available tools in terms of resource usage (CPU, RAM, disk). Lastly, we analyze node connectivity using well-known graph metrics. Our results provide insights into the behavior of vehicle mobility and node connectivity under SUMO's demand generation tools. Additionally, we propose an automated tool that helps researchers generate synthetic traffic based on real data.
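The node-connectivity analysis mentioned above reduces to building the graph implied by vehicle positions (an edge between any two nodes within radio range) and computing standard metrics on it. A minimal sketch with assumed names, using connected components as the example metric:

```python
def connectivity_components(positions, radio_range):
    """Build the ad hoc connectivity graph for a snapshot of vehicle
    positions (edge iff Euclidean distance <= radio_range) and return its
    connected components via iterative depth-first search."""
    n = len(positions)
    adj = [[j for j in range(n) if j != i
            and (positions[i][0] - positions[j][0])**2
              + (positions[i][1] - positions[j][1])**2 <= radio_range**2]
           for i in range(n)]
    seen, components = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        components.append(sorted(comp))
    return components
```

Running this per simulation timestep over traces produced by different SUMO demand generators exposes how the choice of generator changes network partitioning over time.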
- Published
- 2022