5,936 results on '"Networking hardware"'
Search Results
202. Information transmission system based on visual recognition between internal and external networks under physical isolation
- Author
-
Chen Li, Fengyi Li, Weihan Tian, Xin Jin, Xiaodong Li, Biao Wang, and Wang Xin
- Subjects
Transmission (telecommunications) ,business.industry ,Computer science ,Code (cryptography) ,File transfer ,Isolation (database systems) ,Line (text file) ,business ,Computer hardware ,Information exchange ,Networking hardware ,Data transmission - Abstract
At present, the most common and simplest physical isolation scheme equips users with two separate computers, one for the internal network and one for the external network, but this brings great inconvenience to information exchange and use. Based on this, this paper designs and implements a system for file transfer between internal and external network devices under physical isolation based on visual recognition. On top of QR-code picture transmission, it also realizes a QR-code video transmission mode for transferring large files, which further improves transmission efficiency. In addition, the system adds the ability to fetch pictures or database information from the other computer on instruction and return them to the originating computer, which better matches actual usage demands.
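A minimal sketch of the chunk-and-encode step such a system needs, using the third-party qrcode package; the chunk size, frame header format, and file name below are illustrative assumptions, not the paper's design.

```python
import base64
import qrcode  # third-party package: pip install qrcode[pil]

CHUNK_SIZE = 800  # bytes per QR frame; illustrative value

def file_to_qr_frames(path, out_prefix="frame"):
    """Split a file into chunks and render each chunk as one QR code image."""
    with open(path, "rb") as f:
        data = f.read()
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for idx, chunk in enumerate(chunks):
        # A simple "idx/total:" header lets the receiving camera reassemble in order.
        payload = f"{idx}/{len(chunks)}:" + base64.b64encode(chunk).decode()
        qrcode.make(payload).save(f"{out_prefix}_{idx:05d}.png")
    return len(chunks)

if __name__ == "__main__":
    n = file_to_qr_frames("report.pdf")  # hypothetical input file
    print(f"encoded into {n} QR frames")
```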
- Published
- 2021
203. SOAR: Minimizing Network Utilization with Bounded In-network Computing
- Author
-
Gabriel Scalosub, Chen Avin, and Raz Segal
- Subjects
Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Computer science ,business.industry ,Distributed computing ,Scale (chemistry) ,Big data ,020206 networking & telecommunications ,Workload ,02 engineering and technology ,Networking hardware ,Reduction (complexity) ,Computer Science - Networking and Internet Architecture ,Tree (data structure) ,Computer Science - Distributed, Parallel, and Cluster Computing ,020204 information systems ,Computer Science - Data Structures and Algorithms ,0202 electrical engineering, electronic engineering, information engineering ,Data Structures and Algorithms (cs.DS) ,Soar ,Distributed, Parallel, and Cluster Computing (cs.DC) ,business ,Field-programmable gate array - Abstract
In-network computing via smart networking devices is a recent trend for modern datacenter networks. State-of-the-art switches with near line rate computing and aggregation capabilities are developed to enable, e.g., acceleration and better utilization for modern applications like big data analytics, and large scale distributed and federated machine learning. We formulate and study the problem of activating a limited number of in-network computing devices within a network, aiming at reducing the overall network utilization for a given workload. Such limitations on the number of in-network computing elements per workload arise, e.g., in incremental upgrades of network infrastructure, and are also due to requiring specialized middleboxes, or FPGAs, that should support heterogeneous workloads, and multiple tenants. We present an optimal and efficient algorithm for placing such devices in tree networks with arbitrary link rates, and further evaluate our proposed solution in various scenarios and for various tasks. Our results show that having merely a small fraction of network devices support in-network aggregation can lead to a significant reduction in network utilization. Furthermore, we show that various intuitive strategies for performing such placements exhibit significantly inferior performance compared to our solution, for varying workloads, tasks, and link rates.
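The paper gives an optimal placement algorithm; the sketch below only illustrates the underlying cost model with a simplified greedy heuristic, assuming unit flows from every leaf toward the root and uniform link rates (tree layout, node names, and budget are invented).

```python
# Greedy sketch: pick up to k nodes to perform in-network aggregation in a tree
# so that total link utilization (flows crossing each uplink) is minimized.
TREE = {            # parent -> children; "root" is the aggregation endpoint
    "root": ["s1", "s2"],
    "s1": ["w1", "w2", "w3"],
    "s2": ["w4", "w5"],
    "w1": [], "w2": [], "w3": [], "w4": [], "w5": [],
}

def utilization(tree, root, aggregators):
    """Sum of flows carried on every uplink, given a set of aggregating nodes."""
    def flows_out(node):
        children = tree[node]
        out = 1 if not children else sum(flows_out(c) for c in children)
        return 1 if node in aggregators and out > 1 else out
    # every non-root node forwards flows_out(node) units on its uplink
    return sum(flows_out(n) for n in tree if n != root)

def greedy_placement(tree, root, budget):
    chosen = set()
    for _ in range(budget):
        best = min((n for n in tree if n not in chosen and n != root),
                   key=lambda n: utilization(tree, root, chosen | {n}))
        chosen.add(best)
    return chosen

if __name__ == "__main__":
    picks = greedy_placement(TREE, "root", budget=1)
    print(picks, utilization(TREE, "root", picks))  # picks the busier switch s1
```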
- Published
- 2021
204. Birds of a Feather Flock Together
- Author
-
Sumit Kumar Monga, Sanidhya Kashyap, and Changwoo Min
- Subjects
Remote direct memory access ,business.industry ,Computer science ,Scalability ,Synchronization (computer science) ,Throughput ,Latency (engineering) ,business ,Network topology ,Networking hardware ,Computer network ,Scheduling (computing) - Abstract
RDMA-capable networks are gaining traction in datacenter deployments due to their high throughput, low latency, CPU efficiency, and advanced features, such as remote memory operations. However, efficiently utilizing RDMA capability in a common setting of high fan-in, fan-out asymmetric network topology is challenging. For instance, using RDMA programming features comes at the cost of connection scalability, which degrades with increasing cluster size. To address that, several works forgo some RDMA features by focusing only on conventional RPC APIs. In this work, we strive to exploit the full capability of RDMA, while scaling the number of connections regardless of the cluster size. We present Flock, a communication framework for RDMA networks that uses hardware-provided reliable connections. Using a partially shared model, Flock departs from the conventional RDMA design by enabling connection sharing among threads, which provides significant performance improvements contrary to the widely held belief that connection sharing deteriorates performance. At its core, Flock uses a connection handle abstraction for connection multiplexing; a new coalescing-based synchronization approach for efficient network utilization; and a load-control mechanism for connections with symbiotic send-recv scheduling, which reduces the synchronization overheads associated with connection sharing while ensuring fair utilization of network connections. We demonstrate the benefits on a distributed transaction processing system and an in-memory index, where Flock outperforms other RPC systems by up to 88% and 50%, respectively, with significant reductions in median and tail latency.
- Published
- 2021
205. A Malware Distribution Simulator for the Verification of Network Threat Prevention Tools
- Author
-
Jeong-Nyeo Kim and Song-Yi Hwang
- Subjects
Computer science ,Botnet ,Denial-of-service attack ,TP1-1185 ,computer.software_genre ,Computer security ,IoT malware ,Biochemistry ,Article ,Analytical Chemistry ,propagation ,Electrical and Electronic Engineering ,Instrumentation ,Access network ,business.industry ,Chemical technology ,diffusion ,tool ,Infrastructure security ,Atomic and Molecular Physics, and Optics ,Networking hardware ,ComputingMilieux_MANAGEMENTOFCOMPUTINGANDINFORMATIONSYSTEMS ,Malware ,The Internet ,business ,verification ,computer ,Private network - Abstract
With the expansion of the Internet of Things (IoT), security incidents exploiting vulnerabilities in IoT devices have become prominent. However, due to the characteristics of IoT devices, such as low power and low performance, it is difficult to apply existing security solutions to them. As a result, IoT devices have become easy targets for cyber attackers, and malware attacks on IoT devices are increasing every year. The most representative example is the Mirai malware, which caused distributed denial of service (DDoS) attacks by creating a massive IoT botnet. Moreover, the Mirai malware has been released on the Internet, resulting in a growing number of variants and new malicious code. One way to mitigate distributed denial of service attacks is to make the creation of massive IoT botnets difficult by preventing the spread of malicious code. For IoT infrastructure security, solutions are being studied that analyze network packets going in and out of the IoT infrastructure to detect threats, and that prevent the spread of threats within the infrastructure by dynamically controlling network access for maliciously used IoT devices, network equipment, and IoT services. However, applying unverified security solutions to real-world environments carries great risk. In this paper, we propose a malware simulation tool that scans vulnerable IoT devices assigned a private IP address and spreads malicious code within the IoT infrastructure by injecting a malicious-code download command into vulnerable devices. The proposed malware simulation tool can be used to verify the functionality of network threat detection and prevention solutions.
- Published
- 2021
- Full Text
- View/download PDF
206. NMAP: Power Management Based on Network Packet Processing Mode Transition for Latency-Critical Workloads
- Author
-
Ki-Dong Kang, Mohammad Alian, Hyosang Kim, Gyeongseo Park, Daehoon Kim, and Nam Sung Kim
- Subjects
Power management ,Computer science ,Network packet ,business.industry ,Packet processing ,Energy consumption ,Interrupt ,Polling ,Frequency scaling ,business ,Networking hardware ,Computer network - Abstract
Processor power management exploiting Dynamic Voltage and Frequency Scaling (DVFS) plays a crucial role in improving the data-center’s energy efficiency. However, we observe that current power management policies in Linux (i.e., governors) often considerably increase tail response time (i.e., violate a given Service Level Objective (SLO)) and energy consumption of latency-critical applications. Furthermore, the previously proposed SLO-aware power management policies oversimplify network request processing and ignore the fact that network requests arrive at the application layer in bursts. Considering the complex interplay between the OS and network devices, we propose a power management framework exploiting network packet processing mode transitions in the OS to quickly react to the processing demands from the received network requests. Our proposed power management framework tracks the transitions between polling and interrupt in the network software stack to detect excessive packet processing on the cores and immediately react to the load changes by updating the voltage and frequency (V/F) states. Our experimental results show that our framework does not violate SLO and reduces energy consumption by up to 35.7% and 14.8% compared to Linux governors and state-of-the-art SLO-aware power management techniques, respectively.
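A rough user-space sketch of the reaction loop described above, assuming the standard Linux cpufreq sysfs interface and a hypothetical packet_mode() probe for the NIC's polling/interrupt state; a real implementation would run in the kernel with per-core counters, as the paper describes.

```python
import time

CPU = 0
SYSFS = f"/sys/devices/system/cpu/cpu{CPU}/cpufreq"

def set_max_freq(khz):
    """Clamp the core's maximum frequency via the standard cpufreq sysfs file (needs root)."""
    with open(f"{SYSFS}/scaling_max_freq", "w") as f:
        f.write(str(khz))

def packet_mode():
    """Hypothetical probe: a kernel-level implementation would report whether the
    NIC driver is currently in NAPI polling or interrupt-driven mode; stubbed here."""
    return "interrupt"

HIGH_KHZ, LOW_KHZ = 2_600_000, 1_200_000   # illustrative frequency levels

def control_loop(interval_s=0.01):
    # React to mode transitions instead of waiting for average-load heuristics.
    while True:
        if packet_mode() == "polling":
            set_max_freq(HIGH_KHZ)   # burst arriving: raise V/F immediately
        else:
            set_max_freq(LOW_KHZ)    # interrupt mode: save energy
        time.sleep(interval_s)
```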
- Published
- 2021
207. Deploying Fake Network Devices to Obtain Sensitive User Data
- Author
-
Miroslav Dulik
- Subjects
SIMPLE (military communications protocol) ,Software deployment ,business.industry ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Microsoft Windows ,Local area network ,business ,Communications protocol ,Networking hardware ,Computer network - Abstract
This article focuses on the plug-and-play network protocols SSDP and WSD and their security aspects. These protocols facilitate the installation and deployment of new network devices in local networks. Their implementation is simple but lacks any security features; therefore, a malicious user or intruder can easily misuse them to collect sensitive data and files. As these protocols are also implemented in MS Windows, their implementation and possible attacks using fake devices deployed in a local network are described.
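For reference, the SSDP discovery step these protocols rely on can be reproduced with a few lines of standard-library Python; this sketch only lists devices that answer a multicast M-SEARCH query and performs no spoofing.

```python
import socket

MCAST_ADDR = ("239.255.255.250", 1900)
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: ssdp:all",
    "", ""
]).encode()

def discover(timeout=3.0):
    """Send one SSDP M-SEARCH and print each responder's IP and LOCATION header."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH, MCAST_ADDR)
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            headers = data.decode(errors="replace").split("\r\n")
            location = next((h for h in headers if h.lower().startswith("location:")), "")
            print(addr[0], location)
    except socket.timeout:
        pass

if __name__ == "__main__":
    discover()
```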
- Published
- 2021
208. Implementing Data Security in Delay Tolerant Network in Post-disaster Management
- Author
-
Samir Pramanick and Chandrima Chakrabarti
- Subjects
Delay-tolerant networking ,business.industry ,Computer science ,Wireless ad hoc network ,Node (networking) ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Data security ,Wireless ,Communication source ,business ,Encryption ,Networking hardware ,Computer network - Abstract
Disasters cause severe destruction to physical infrastructure; as a result, communication infrastructure can remain disrupted for weeks. Wireless ad-hoc networks use mobile devices to deliver services, and in such critical situations an ad-hoc network acts as a delay-tolerant network (DTN). A DTN is a resource-constrained network in which nodes must cooperate with each other to relay messages in a store-carry-forward manner. Messages are re-addressed to other nodes based on prearranged criteria and are finally conveyed to a destination node via multiple hops. During the transmission of a message from the sender node to the receiver node, a privileged message may be disclosed to nodes other than the sender and receiver. Therefore, messages should be encrypted by the sender and decrypted by the receiver to maintain proper data security in a DTN, where there may be periodic disruptions or long delays in the connections between network devices. The Opportunistic Network Environment (ONE) simulator is used for performance evaluation and comparison with other state-of-the-art schemes.
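A minimal sketch of the sender-side encryption and receiver-side decryption idea using the cryptography package's Fernet recipe; key pre-sharing between the end nodes is an assumption here, and this is not necessarily the paper's exact scheme.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Assumed to be pre-shared between sender and destination (e.g. at node provisioning);
# intermediate store-carry-forward relays never see this key.
key = Fernet.generate_key()
cipher = Fernet(key)

def prepare_bundle(plaintext: bytes) -> bytes:
    """Sender: encrypt the payload before handing it to DTN relays."""
    return cipher.encrypt(plaintext)

def open_bundle(bundle: bytes) -> bytes:
    """Destination: only the holder of the key can recover the message."""
    return cipher.decrypt(bundle)

bundle = prepare_bundle(b"medical supplies needed at shelter 12")
print(open_bundle(bundle))
```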
- Published
- 2021
209. AI-assisted intent-based traffic grooming in a dynamically shared 5g optical fronthaul network
- Author
-
Min Zhang, Chunyu Zhang, Yihan Gui, Hui Yang, Luyao Guan, Danshi Wang, and Anthony C. Boucouvalas
- Subjects
Computer science ,business.industry ,Testbed ,Network topology ,Atomic and Molecular Physics, and Optics ,Networking hardware ,Traffic grooming ,Optics ,Bandwidth (computing) ,Resource allocation ,Cluster analysis ,business ,5G ,Computer network - Abstract
The extremely high number of services with large bandwidth requirements and the increasingly dynamic traffic patterns of cell sites pose major challenges to optical fronthaul networks, rendering them incapable of coping with the extensive, uneven, and real-time traffic that will be generated in the future. In this paper, we first present the design of an adaptive graph convolutional network with gated recurrent unit (AGCN-GRU) to learn the temporal and spatial dependencies of cell-site traffic patterns and provide accurate traffic predictions, in which the AGCN model can capture potential spatial relations according to the similarity of network traffic patterns in different areas. Then, we consider how to deal with unpredicted burst traffic and propose an AI-assisted intent-based traffic grooming scheme to realize automatic and intelligent cell-site clustering and traffic grooming. Finally, a software-defined testbed for a 5G optical fronthaul network was established, on which the proposed schemes were deployed and evaluated using traffic datasets of existing optical networks. The experimental results show that the proposed scheme can optimize network resource allocation, increase average resource utilization efficiency, and reduce the average delay and rejection ratio.
- Published
- 2021
210. EveryWAN - An Open Source SD-WAN solution
- Author
-
Nicola Blefari-Melazzi, Carmine Scarpitta, Pier Luigi Ventre, Francesco Lombardo, and Stefano Salsano
- Subjects
Settore ING-INF/03 ,Software ,Computer science ,Wide area network ,business.industry ,SD-WAN ,Cloud computing ,Service provider ,Architecture ,Software-defined networking ,business ,Networking hardware ,Computer network - Abstract
Software Defined Wide Area Network (SD-WAN) was originally proposed as an alternative solution to redesign the architecture of the WAN. Like its technology precursor Software Defined Networking (SDN), SD-WAN aims at simplifying the management and operation of networks (with a particular focus on WAN scenarios) by decoupling the networking hardware from its control programs and using software and open APIs to abstract the infrastructure and manage connectivity and services. The SD-WAN architecture leverages SDN principles to securely build interconnections between users and the applications hosted in clouds or in remote branches, using any combination of transport services. With this paper we shed some light on the SD-WAN scenario and describe an open-source implementation that can be taken as a reference. We call this architecture EveryWAN. It has been designed with SDN and NFV principles in mind, and leverages Cloud best practices to deliver to WAN customers and the MSP the same benefits and agility as Cloud service providers. Moreover, we strongly believe in the openness of the SDN/NFV paradigms, which can ease the development of new services and foster innovation in SD-WAN deployments.
- Published
- 2021
211. Ultrafast machine vision with artificial neural network devices based on a GaN-based micro-LED array
- Author
-
Runze Lin, Zhenpeng Wang, Xugao Cui, Pengfei Tian, and Daopeng Qu
- Subjects
Artificial neural network ,business.industry ,Machine vision ,Computer science ,Visible light communication ,Photodetector ,Biasing ,Atomic and Molecular Physics, and Optics ,Reconfigurable computing ,Networking hardware ,Acceleration ,Optics ,Electronic engineering ,business - Abstract
GaN-based micro-LED is an emerging display and communication device that can also work as a photodetector, enabling possible applications in machine vision. In this work, we measured the characteristics of a micro-LED-based photodetector experimentally and, for the first time, proposed a feasible simulation of a novel artificial neural network (ANN) device based on a micro-LED photodetector array, providing ultrafast imaging (∼133 million bins per second) and a high image recognition rate. The array itself constitutes a neural network, in which the synaptic weights are tunable by the bias voltage. It has the potential to be integrated with novel machine vision and reconfigurable computing applications, providing acceleration and related functionality expansion. Moreover, the multi-functionality of micro-LEDs broadens their potential applications by combining ANNs with display and communication.
- Published
- 2021
212. Intrusion Detection Using Deep Learning
- Author
-
Shahbaz Siddiqui, Misbah Anwer, Adnan Akhunzada, and Ghufran Ahmed
- Subjects
Smart system ,Artificial neural network ,Computer science ,business.industry ,Deep learning ,Intrusion detection system ,Machine learning ,computer.software_genre ,Networking hardware ,Data modeling ,Task (project management) ,CUDA ,Artificial intelligence ,business ,computer - Abstract
Advancement in network attacks requires strong and evolving security mechanisms. The Internet of Things (IoT) is an evolving technology that currently connects billions of devices and is built on a large set of network devices, so it is under serious threat. Identifying attacks is a crucial and critical task, and protecting smart systems has become an alarming security concern. The authors propose a hybrid DL-driven approach to detect attacks on the Kitsune dataset using two models: a CUDA Deep Neural Network Long Short-Term Memory (cuDNNLSTM) and a standard Long Short-Term Memory (LSTM). In this paper we implemented LSTM and cuDNNLSTM networks to identify attacks. Results show that the cuDNNLSTM technique outperforms the standard LSTM, achieving 99.79% accuracy on a dataset of approximately 6 GB (2S0lac records).
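A hedged sketch of the kind of LSTM classifier described above, written with tf.keras (in TensorFlow 2 the standard LSTM layer dispatches to the cuDNN kernel on a GPU when its default settings are kept); the window length, feature count, and hyperparameters are illustrative rather than the paper's.

```python
import tensorflow as tf

TIMESTEPS, FEATURES = 20, 115   # illustrative window of Kitsune-style features

def build_ids_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
        # Default activations/recurrent settings let TF dispatch to the cuDNN kernel on GPU.
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # attack vs. benign
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_ids_model()
# model.fit(X_train, y_train, validation_split=0.1, epochs=10, batch_size=256)
```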
- Published
- 2021
213. SDN Augmented Network Management For Future Avionics Communications Network
- Author
-
Doanh-Kim Luong, Muhammad Ali, Kanaan Abdo, Yim Fun Hu, and Rameez Asif
- Subjects
NETCONF ,Network architecture ,business.industry ,computer.internet_protocol ,Network packet ,Computer science ,Simple Network Management Protocol ,Networking hardware ,Network management ,Forwarding plane ,business ,Software-defined networking ,computer ,Computer network - Abstract
The process of network management has always been complicated and challenging in traditional networks. Network operators have to cope with the unavoidable constraints of low-level, vendor-specific configurations to implement network policies on networking devices from multiple vendors. Despite many previous proposals to make networks easier to manage, several complexity and scalability issues remain unresolved, particularly in scenarios where multi-vendor networking devices are deployed over large networks. This rigid, vendor-specific approach to configuring and maintaining the underlying infrastructure leaves little room for innovation, enhancement, and improvement, since the networking devices have generally been closed and proprietary. This lack of coherence across vendor-specific networking infrastructure has been the main hindrance to making network management simple and easy to operate. The future avionics communications network, a heterogeneous wireless communications network as presented in the Future Communications Infrastructure (FCI) document, foresees similar network management issues. The paradigm of Software Defined Networking (SDN) proposes an effective way of tackling these issues by separating the control and management planes from the data plane. The core idea of SDN is to keep networking devices, i.e. switches and routers, in the forwarding or data plane, responsible simply for forwarding data packets, while a logically centralized software program, the SDN controller, carries out all the intelligent work of control-plane operations. It also allows the network to be more flexible and programmable. Using this paradigm, SDN opens new routes for innovation and opportunities in network management and configuration methods. In this paper an SDN-based network management architecture is presented for the future avionics communication network based on the FCI. The paper also compares the network management process in traditional and SDN-based architectures, along with the underlying configuration and management protocols, i.e. SNMP, OF-Config, NETCONF, RESTCONF and the YANG model. Issues in state-of-the-art network management, such as configuration and scalability problems, are highlighted together with alternative solutions in the SDN-based architecture. The SDN-based network management functions are applied to the future avionics communications network architecture, and example scenarios are demonstrated for functions such as configuration and reconfiguration of infrastructure-layer devices using an SDN controller with a global view of the network.
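As a concrete point of comparison between the two management styles, the snippet below reads a device's running configuration over NETCONF with the ncclient library (host and credentials are placeholders); in the SDN architecture the equivalent state would instead be obtained through the controller's northbound API.

```python
from ncclient import manager  # pip install ncclient

def fetch_running_config(host, user, password):
    """Retrieve the running configuration of a NETCONF-capable device."""
    with manager.connect(host=host, port=830, username=user, password=password,
                         hostkey_verify=False) as session:
        reply = session.get_config(source="running")
        return reply.data_xml

if __name__ == "__main__":
    # Placeholder device address and credentials for illustration only.
    print(fetch_running_config("192.0.2.10", "admin", "admin"))
```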
- Published
- 2021
214. A High-Level Ontology Network for ICT Infrastructures
- Author
-
Jhon Toledo, Hu Peng, David Chaves-Fraga, Mingxue Wang, Oscar Corcho, Nicholas Burrett, Puchao Zhang, Julián Arenas-Guerrero, José Mora, and Carlos Badenes-Olmedo
- Subjects
Informática ,Configuration management ,Virtual machine ,Computer science ,Server ,Configuration management database ,Context (language use) ,Microservices ,Ontology (information science) ,computer.software_genre ,Data science ,computer ,Networking hardware - Abstract
The ICT infrastructures of medium and large organisations that offer ICT services (infrastructure, platforms, software, applications, etc.) are becoming increasingly complex. Nowadays, these environments combine all sorts of hardware (e.g., CPUs, GPUs, storage elements, network equipment) and software (e.g., virtual machines, servers, microservices, services, products, AI models). Tracking, understanding and acting upon all the data produced in the context of such environments is hence challenging. Configuration management databases have so far been widely used to store and provide access to relevant information and views on these components and their relationships. However, different databases are organised according to different schemas. Despite existing efforts in standardising the main entities relevant for configuration management, there is not yet a core set of ontologies that describes these environments homogeneously and that can be easily extended when new types of items appear. This paper presents an ontology network created with the purpose of serving as an initial step towards a homogeneous representation of this domain, and which has already been used to produce a knowledge graph for a large ICT company.
- Published
- 2021
215. An ML-Based Memory Leak Detection Scheme for Network Devices
- Author
-
Yang Xin'an, Jiangxuan Xie, Minghui Wang, and Xiangqiao Ao
- Subjects
Scheme (programming language) ,business.industry ,Computer science ,Memory leak detection ,business ,computer ,Computer hardware ,Networking hardware ,computer.programming_language - Abstract
The network is vital to the normal operation of all aspects of society and the economy, and a memory leak in a network device is a software failure that seriously damages the stability of the system. Common memory-checking tools are not suitable for network devices that are running online, so operations staff can only constantly monitor memory usage and infer problems from experience, which has proved inefficient and unreliable. In this paper we propose a novel memory leak detection method for network devices based on machine learning. It first eliminates the impact of large-scale resource table entries on memory utilization. Then, by analyzing the monotonicity of the memory-usage series and computing its correlation coefficient with memory leak sequence sets pre-constructed by simulation, a memory leak fault can be found in time. Simulation experiments show that the scheme is computationally efficient and its precision rate is close to 100%; it works well in real network environments and has excellent performance.
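A small numpy sketch of the core test described above: the monitored memory-utilization series is checked for monotonic growth and correlated against pre-simulated leak templates; the templates and thresholds are invented for illustration.

```python
import numpy as np

# Pre-constructed "leak shape" templates (normalized), e.g. linear and staircase growth.
LEAK_TEMPLATES = [
    np.linspace(0.0, 1.0, 100),
    np.repeat(np.linspace(0.0, 1.0, 10), 10),
]

def looks_like_leak(mem_series, corr_threshold=0.95):
    """Return True if the series grows (near-)monotonically and matches a leak template."""
    x = np.asarray(mem_series, dtype=float)
    if len(x) < 3:
        return False
    x = x - x.min()
    if x.max() == 0:
        return False
    x = x / x.max()
    increasing_ratio = np.mean(np.diff(x) >= 0)          # monotonicity check
    # Resample each template to the series length, then take the best correlation.
    best_corr = max(
        np.corrcoef(x, np.interp(np.linspace(0, 1, len(x)),
                                 np.linspace(0, 1, len(t)), t))[0, 1]
        for t in LEAK_TEMPLATES)
    return increasing_ratio > 0.9 and best_corr > corr_threshold

print(looks_like_leak(np.linspace(100, 200, 60)))   # steadily growing usage -> True
```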
- Published
- 2021
216. An Intelligent Fault Location Approach Using Fuzzy Logic for Improving Autonomous Network
- Author
-
Jung Pei, Kuan-Yu Nie, Chih-Wei Chang, and Chien-Chi Kao
- Subjects
Computer science ,business.industry ,Distributed computing ,Ant colony optimization algorithms ,Leverage (statistics) ,Root cause ,Fault (power engineering) ,business ,Maintenance engineering ,Fuzzy logic ,Automation ,Networking hardware - Abstract
In recent years, Internet Service Providers (ISPs) have been expected to enable various practical services. To meet the requirements of these services, the network infrastructure has become more and more complex. In telecom networks, this complex infrastructure makes it difficult to analyze root causes and to locate faults, and network maintenance staff need to spend a lot of time tracing root causes and solving network problems. An intelligent fault location approach allows ISPs to be cost-effective, assists humans in decision-making, and increases automation. To automatically locate faults, we apply both the Ant Colony Optimization (ACO) algorithm and fuzzy logic methods. The main contributions of this paper are threefold: (1) we apply the pheromones of the ACO algorithm to quantify the risk that network devices might fail; (2) based on these risks, we leverage fuzzy logic, including the fuzzy relation matrix and the max-min composition method, to infer the fault location; (3) to improve autonomous networks, we implemented and evaluated the proposed intelligent fault location approach using real data from telecom networks.
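The max-min composition step in contribution (2) can be written in a few lines of numpy; the relation matrix, risk vector, and symptom memberships below are toy values, not the paper's data.

```python
import numpy as np

# Fuzzy relation matrix R[i, j]: degree to which a fault at device i produces symptom j.
R = np.array([[0.9, 0.2, 0.1],
              [0.3, 0.8, 0.4],
              [0.1, 0.5, 0.7]])

risk = np.array([0.6, 0.9, 0.2])       # per-device failure risk from the ACO pheromones
symptoms = np.array([0.2, 0.9, 0.3])   # observed symptom memberships (alarms, KPIs)

def max_min(a, b):
    """Max-min composition: out[i] = max_j min(B[i, j], a[j])."""
    return np.max(np.minimum(b, a[None, :]), axis=1)

# Combine symptom-based inference with the prior risk estimate.
fault_scores = np.minimum(risk, max_min(symptoms, R))
print("most likely faulty device:", int(np.argmax(fault_scores)))  # -> device 1
```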
- Published
- 2021
217. Energy-Aware Resource Scheduling in FoG Environment for IoT-Based Applications
- Author
-
Shelly Garg, Sumit Kumar, Mamta Mittal, and Rajeev Tiwari
- Subjects
Cloud computing architecture ,Computer science ,Smart city ,Server ,Distributed computing ,Resource allocation ,Architecture ,Networking hardware ,Energy (signal processing) ,Domain (software engineering) - Abstract
The Internet of Things (IoT) has developed as a heterogeneous environment that contains network devices with limited resources. The application of IoT principles in the smart-city domain creates new opportunities and requires diligent implementation mechanisms for optimal resource utilization. Over time, IoT applications tend to generate and forward huge amounts of data in smart cities and require real-time responses from the servers. Because of this, the traditional cloud computing architecture is unable to handle latency-sensitive applications efficiently, and hence the FoG architecture has been widely implemented with IoT devices to efficiently retrieve or forward data. For comprehensive utilization of the resources in FoG-based smart cities, various energy-aware resource allocation schemes are discussed in this chapter. The schemes suggest different mechanisms to access the required content with minimal energy consumption for applications used in smart cities.
- Published
- 2021
218. Multiple-Layer-Topology Discovery Method Using Traffic Information
- Author
-
Kyoko Yamagoe, Naoki Hayashi, Mizuto Nakamura, Toshihiko Seki, and Takada Atsushi
- Subjects
Service (systems architecture) ,Consistency (database systems) ,Computer science ,Management system ,Topology (electrical circuits) ,Layer (object-oriented design) ,Network topology ,Topology ,Network operations center ,Networking hardware - Abstract
In the course of network operations, telecommunications carriers must have accurate topology information about the network related to a failure in order to quickly identify the causes of service failures and determine their impact. However, carrier networks are divided into multiple layers according to their roles, and each layer has a different management system, making it difficult to detect the topology between different layers. Therefore, there is a need for a technology that can assure accurate topology information across the multiple layers of the network. We propose a new topology discovery method based on the consistency of the traffic of mutually connected interfaces. We verify the effectiveness of the proposed method using traffic data from network equipment in a commercial network and show that it is capable of detecting topologies with high accuracy.
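A toy version of the core idea, assuming per-interface byte-count time series exported by the management systems of two layers: interfaces whose traffic series correlate almost perfectly are declared connected. The data and threshold are illustrative.

```python
import numpy as np

def best_matches(upper_ifaces, lower_ifaces, min_corr=0.99):
    """upper_ifaces / lower_ifaces: dict name -> traffic time series on the same sampling grid.
    Returns inferred inter-layer links as (upper, lower) pairs."""
    links = []
    for u_name, u_ts in upper_ifaces.items():
        scored = [(np.corrcoef(u_ts, l_ts)[0, 1], l_name)
                  for l_name, l_ts in lower_ifaces.items()]
        corr, l_name = max(scored)
        if corr >= min_corr:
            links.append((u_name, l_name))
    return links

# toy data: router port r1/0 is physically carried by transport port t2/3
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(0)
base = 50 + 10 * np.sin(t) + rng.normal(0, 0.1, t.size)
upper = {"r1/0": base}
lower = {"t2/3": base + rng.normal(0, 0.1, t.size),
         "t2/4": 40 + 5 * np.cos(t)}
print(best_matches(upper, lower))   # -> [('r1/0', 't2/3')]
```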
- Published
- 2021
219. Planter: seeding trees within switches
- Author
-
Changgang Zheng and Noa Zilberman
- Subjects
Ensemble forecasting ,Computer science ,Server ,Distributed computing ,Data classification ,Overhead (computing) ,Seeding ,Electrical efficiency ,Networking hardware ,Random forest - Abstract
Data classification within the network brings significant benefits in reaction time, server offload and power efficiency. Still, only very simple models have been mapped to the network so far. In-network classification will not be useful unless we manage to map complex machine learning models to network devices. We present Planter, an algorithm that maps a variety of ensemble models, such as XGBoost and Random Forest, to programmable switches. By overlapping trees within coded tables, Planter manages to map ensemble models to switches with high accuracy and low resource overhead.
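A sketch of the encoding idea with plain Python dictionaries standing in for the switch's match-action tables: each feature is first mapped to a code by a per-feature range table, and the tuple of codes then indexes a single decision table. The features, thresholds, and labels are invented and far smaller than a real ensemble.

```python
# Per-feature "range -> code" tables (range/ternary entries in a real P4 pipeline).
FEATURE_TABLES = [
    [(0, 80, 0), (81, 443, 1), (444, 65535, 2)],   # e.g. destination-port buckets
    [(0, 64, 0), (65, 1500, 1)],                   # e.g. packet-length buckets
]

# Final decision table: tuple of per-feature codes -> predicted class.
DECISION_TABLE = {
    (0, 0): "benign", (0, 1): "benign",
    (1, 0): "benign", (1, 1): "attack",
    (2, 0): "attack", (2, 1): "attack",
}

def encode(value, table):
    for lo, hi, code in table:
        if lo <= value <= hi:
            return code
    raise ValueError("value outside encoded range")

def classify(features):
    key = tuple(encode(v, t) for v, t in zip(features, FEATURE_TABLES))
    return DECISION_TABLE[key]

print(classify((443, 1200)))   # -> "attack" in this toy model
```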
- Published
- 2021
220. Mobile Network Traffic Prediction Using High Order Markov Chains Trained At Multiple Granularity
- Author
-
Idio Guarino, Giuseppe Aceto, Antonio Pescape, and Alfredo Nascita
- Subjects
Network planning and design ,symbols.namesake ,Markov chain ,Computer science ,Reliability (computer networking) ,Distributed computing ,Cellular network ,symbols ,Markov process ,Provisioning ,Telecommunications network ,Networking hardware - Abstract
Over the years, the need for communication networks capable of providing an ever-increasing set of services has grown. In order to satisfy user requirements and provide guarantees on the reliability of the network itself, efficient techniques are required for analysis, evaluation and design. For this reason, models are needed that can represent the peculiar characteristics of network traffic and produce reliable predictions of its behavior over an adequate period of time. Network traffic prediction therefore plays an important role in supporting many practical applications, ranging from network planning and provisioning to security. Several works so far have focused on building app-specific models. However, this choice produces multiple models that need to be properly managed and deployed across network devices. Therefore, in this paper, we explore different training strategies to reduce the number of models, adopting Markov chains to model the traffic of mobile video apps at the packet level. We discuss and experimentally evaluate the prediction effectiveness of the proposed approaches by comparing the performance of per-app models with models trained on a specific category of video apps and a model trained on the mix of all video traffic.
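A compact sketch of training a k-th order Markov chain on a discretized packet stream and predicting the next symbol; the symbol alphabet (packet-size bins) and the order are illustrative.

```python
from collections import Counter, defaultdict

def train(sequence, order=2):
    """Count transitions from each length-`order` context to the next symbol."""
    model = defaultdict(Counter)
    for i in range(len(sequence) - order):
        context = tuple(sequence[i:i + order])
        model[context][sequence[i + order]] += 1
    return model

def predict(model, context):
    """Most likely next symbol for the given context (None if the context was never seen)."""
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

# toy stream of packet-size bins: S=small, M=medium, L=large
stream = list("SSMLLSSMLLSSMLL")
m = train(stream, order=2)
print(predict(m, ("S", "M")))   # -> 'L'
```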
- Published
- 2021
221. QUIC Protocol Based Monitoring Probes for Network Devices Monitor and Alerts
- Author
-
Deepali Kamthania and Anurag Sharma
- Subjects
Intranet ,business.industry ,computer.internet_protocol ,Computer science ,Node (networking) ,QUIC ,IPv4 ,Networking hardware ,IPv6 ,Transport layer ,business ,Host (network) ,computer ,Computer network - Abstract
In the coming years, 5G networking will need faster software automation in existing environments. With this in view, this paper formulates an approach for enhancing HTTP-based monitoring without affecting current services; the automation can be performed using modern web solutions. A node is considered the smallest host inside an intranet or internet network based on the physical or logical grouping of multiple networks: it acts as a host computer when identified by an IP address (IPv4 or IPv6), and when connected to many clients it is identified through its network subnets. Communication among these nodes can be improved by using UDP-based HTTP/3 together with TLS 1.3 for GET or POST requests over UDP streams. Ordinary monitoring over the traditional HTTP stack can be upgraded through the QUIC protocol for a faster and more efficient approach in future networks and real-time monitoring. In the QUIC-based HTTPS scenario, it has been observed that the load time stays below 200 ms of network latency, which is faster than the host-client handshakes of earlier HTTP approaches, whose slow response times cause waiting penalties. The suggested approach can be beneficial at a network node for monitoring the connected nodes in the network by sending QUIC-based transport-layer beacons at certain intervals, resulting in better and faster alerting in information technology infrastructures.
- Published
- 2021
222. Supporting Real-Time T-Queries on Network Traffic with a Cloud-Based Offloading Model
- Author
-
Yuanda Wang, Ye Xia, Haibo Wang, Shigang Chen, and Chaoyi Ma
- Subjects
Network management ,Memory management ,business.industry ,Computer science ,Network processor ,Real-time computing ,Center (category theory) ,Forwarding plane ,Cloud computing ,Enhanced Data Rates for GSM Evolution ,business ,Networking hardware - Abstract
Traffic measurement provides fundamental statistics for network management functions. To implement the measurement modules on the data plane for real-time query response, modern sketches are designed to work with limited on-die memory allocation from network processors and collect traffic statistics in epochs of a preset length. To handle real-time queries at arbitrary times over traffic in a preceding period T (called T-queries), the prior art sets the epoch length to T/n and keeps the measurement results in a window of n-1 past epochs to support approximate T-queries. Such an approach however drastically increases the memory cost or decreases the accuracy in the query results if the memory allocation is fixed. In this paper, motivated by the concept of offloading in today's edge-cloud computing, we propose a collaborative edge-center traffic measurement model, where the traffic measurement modules at all network devices form the edge, which offloads the traffic measurement results to a measurement center possibly hosted in a datacenter. The center synthesizes the measurements from the past epochs and sends the aggregate results back to the measurement modules to support T-queries. We conduct experiments using real traffic traces to evaluate the performance of the proposed edge-center measurement model. The experimental results demonstrate that the proposed designs significantly outperform the prior art.
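A simplified sketch of the window-of-epochs baseline the paper builds on: a ring of per-epoch Count-Min sketches answers an approximate T-query by summing estimates across the window (in the proposed model the older epochs would instead live at the measurement center). Widths, depths, and hashing are illustrative.

```python
import numpy as np
from collections import deque

class CountMin:
    def __init__(self, width=1024, depth=4, seed=0):
        self.width, self.seeds = width, list(range(seed, seed + depth))
        self.table = np.zeros((depth, width), dtype=np.int64)

    def _cols(self, item):
        return [hash((s, item)) % self.width for s in self.seeds]

    def add(self, item, count=1):
        for row, col in enumerate(self._cols(item)):
            self.table[row, col] += count

    def estimate(self, item):
        return min(self.table[row, col] for row, col in enumerate(self._cols(item)))

class EpochWindow:
    """Keep the current epoch plus n-1 past epochs to answer approximate T-queries."""
    def __init__(self, n_epochs):
        self.window = deque(maxlen=n_epochs)
        self.rotate()

    def rotate(self):                      # called every T/n seconds
        self.window.append(CountMin())

    def add(self, flow_key, nbytes=1):
        self.window[-1].add(flow_key, nbytes)

    def t_query(self, flow_key):           # approximate count over the last ~T
        return sum(cm.estimate(flow_key) for cm in self.window)

w = EpochWindow(n_epochs=4)
w.add("10.0.0.1->10.0.0.2", 100)
w.rotate()
w.add("10.0.0.1->10.0.0.2", 50)
print(w.t_query("10.0.0.1->10.0.0.2"))   # -> 150
```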
- Published
- 2021
223. Daps: A Dynamic Asynchronous Progress Stealing Model for MPI Communication
- Author
-
Zizhong Chen, Min Si, Pavan Balaji, Kaiming Ouyang, and Atsushi Hori
- Subjects
Process (engineering) ,Asynchronous communication ,Computer science ,Distributed computing ,Node (networking) ,Models of communication ,Thread (computing) ,Networking hardware ,Spawn (computing) ,Data modeling - Abstract
MPI provides nonblocking point-to-point and one-sided communication models to help applications achieve communication and computation overlap. These models provide the opportunity for MPI to offload data transfer to low-level network hardware while the user process is computing. In practice, however, MPI implementations often have to handle complex data transfer in software due to the limited capability of network hardware. Therefore, additional asynchronous progress is necessary to ensure prompt progress of these software-handled communications. Traditional mechanisms either spawn an additional background thread on each MPI process or launch a fixed number of helper processes on each node. Both mechanisms may degrade performance in user computation due to statically occupied CPU resources. The user has to fine-tune the progress resource deployment to gain overall performance. For complex multiphase applications, unfortunately, severe performance degradation may occur due to dynamically changing communication characteristics and thus changing progress requirements. This paper proposes a novel Dynamic Asynchronous Progress Stealing model, called Daps, to completely address the asynchronous progress complication. Daps is implemented inside the MPI runtime. It dynamically leverages idle MPI processes to steal communication progress tasks from other busy computing processes located on the same node. The basic concept of Daps is straightforward; however, various implementation challenges have to be resolved due to the unique requirements of interprocess data and code sharing. We present our design, which ensures high performance while maintaining strict program correctness. We compare Daps with state-of-the-art asynchronous progress approaches by utilizing both microbenchmarks and HPC proxy applications.
- Published
- 2021
224. On the Quantum Performance Evaluation of Two Distributed Quantum Architectures
- Author
-
Matthew Skrzypczyk, Stephanie Wehner, and Gayane Vardoyan
- Subjects
FOS: Computer and information sciences ,Quantum network ,Quantum Physics ,Computer Science - Performance ,Computer science ,Computer Networks and Communications ,Markov chain ,FOS: Physical sciences ,Quantum architecture ,Networking hardware ,Quantum technology ,Performance (cs.PF) ,Computer Science::Hardware Architecture ,Computer engineering ,Interfacing ,Quantum state ,Hardware and Architecture ,Modeling and Simulation ,Fidelity ,Quantum Physics (quant-ph) ,Quantum ,Throughput (business) ,Software ,Quantum computer - Abstract
Distributed quantum applications impose requirements on the quality of the quantum states that they consume. When analyzing architecture implementations of quantum hardware, characterizing this quality forms an important factor in understanding their performance. Fundamental characteristics of quantum hardware lead to inherent tradeoffs between the quality of states and traditional performance metrics such as throughput. Furthermore, any real-world implementation of quantum hardware exhibits time-dependent noise that degrades the quality of quantum states over time. Here, we study the performance of two possible architectures for interfacing a quantum processor with a quantum network. The first corresponds to the current experimental state of the art in which the same device functions both as a processor and a network device. The second corresponds to a future architecture that separates these two functions over two distinct devices. We model these architectures as continuous-time Markov chains and compare their quality of executing quantum operations and producing entangled quantum states as functions of their memory lifetimes, as well as the time that it takes to perform various operations within each architecture. As an illustrative example, we apply our analysis to architectures based on Nitrogen-Vacancy centers in diamond, where we find that for present-day device parameters one architecture is more suited to computation-heavy applications, and the other to network-heavy ones. We validate our analysis with the quantum network simulator NetSquid. Besides the detailed study of these architectures, a novel contribution of our work is a set of formulas that connect an understanding of waiting time distributions to the decay of quantum quality over time for the most common noise models employed in quantum technologies. This provides a valuable new tool for performance evaluation experts, and its applications extend beyond the two architectures studied in this work.
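As a flavour of the waiting-time/quality formulas mentioned above (not the paper's exact expressions), a standard single-qubit depolarizing model gives fidelity decay with memory lifetime T, and the expected fidelity after an exponentially distributed waiting time W follows directly from the moment generating function of W:

```latex
F(t) = \tfrac{1}{2} + \left(F_0 - \tfrac{1}{2}\right) e^{-t/T},
\qquad
\mathbb{E}\!\left[F(W)\right]
  = \tfrac{1}{2} + \left(F_0 - \tfrac{1}{2}\right)\,\mathbb{E}\!\left[e^{-W/T}\right]
  = \tfrac{1}{2} + \left(F_0 - \tfrac{1}{2}\right)\frac{\lambda}{\lambda + 1/T},
\quad W \sim \mathrm{Exp}(\lambda).
```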
- Published
- 2022
225. A Comparative Analysis of Deep Learning Approaches in Intrusion Detection System
- Author
-
S G Balakrishnan and Abhijit Das
- Subjects
Computer science ,Network security ,business.industry ,Deep learning ,Intrusion detection system ,Computer security ,computer.software_genre ,Automation ,Networking hardware ,Systems design ,Mobile technology ,False alarm ,Artificial intelligence ,business ,computer - Abstract
With the growth of computers and network-based technology, including advanced communication features, the Internet of Things (IoT), automation, and the upcoming fifth generation (5G) mobile technology, it has become challenging to secure applications, systems, and networks. The rapid increase in network devices has created many new attacks and therefore presents significant difficulties for network security in identifying threats correctly. Intrusion detection systems (IDSs) help guarantee the network's confidentiality, integrity, and availability by monitoring network traffic and blocking potential intrusions. Despite significant research efforts, IDS still confronts many difficulties in increasing detection efficiency and decreasing false alarm rates. Machine learning and deep learning-based IDSs are used to identify intrusions throughout the network as quickly as possible. This paper introduces the idea of IDS and then details a taxonomy developed from the prominent Machine Learning (ML) and Deep Learning (DL) methods used in NIDS system design. An in-depth analysis of NIDS-based papers is presented to outline the benefits and weaknesses of the various options. New ML and DL technology and current advances in these NIDS technologies are described along with their methodologies, assessment metrics, and dataset selections, highlighting existing research difficulties and the long-term scope of NIDS study to improve machine learning and deep learning-based NIDS. Many novel techniques are used in intrusion detection systems, but most are not quick enough to adapt to the dynamic and complex nature of cyber-security defense systems, as the threat surface grows exponentially with the interfaces of different devices. This paper provides a review of various intrusion detection system (IDS) capabilities and assets using deep learning techniques. It also suggests a novel idea to automatically adapt the network intrusion detection of the cyber defense architecture in order to reduce the false alarm rate and optimize detection time.
- Published
- 2021
226. A Pseudonym-based Anonymous Routing Mechanism under Multi-controller SDN Architecture
- Author
-
Xiaohui Yao, Liuzi Zhan, Xiaohan Zhang, and Qin Qiao
- Subjects
Network architecture ,Traffic analysis ,Multicast ,Computer science ,business.industry ,Communication source ,Routing (electronic design automation) ,business ,Software-defined networking ,Networking hardware ,Computer network ,Anonymity - Abstract
Anonymous communication technology has been proposed to conceal the identities and communication relationship of both parties in a communication, mitigating the threat of communication surveillance. However, traditional anonymous communication schemes are designed on top of the traditional network architecture, in which network devices have limited control over the network, leading to inefficient anonymous communication. Software Defined Networking (SDN) is a novel network architecture that manages the network through a controller and offers high efficiency and flexibility. In recent years, however, research on anonymous communication in SDN has faced many challenges, such as limited scenarios and a lack of anonymity. To address these challenges, this paper, inspired by pseudonym changing, proposes a pseudonym-based anonymous routing mechanism under a multi-controller SDN architecture, in which pseudonym changing is used to hide the identities of the communicating parties. To avoid routing errors caused by address conflicts, an effective hash checking method is proposed to prevent anonymous address conflicts. On top of that, we also present an anonymity-enhanced scheme that employs phantom routing to hide the sender in the routing path and a multicast mechanism to prevent traffic analysis attacks. Anonymity analysis and experiments show that this scheme can provide strong anonymity protection for distributed SDN without much time cost, and has good practicability.
- Published
- 2021
227. A Novel Objective Function for Frequency Switching Cost Aware RPL Algorithm
- Author
-
Sercan Demirci and Ferhat Arat
- Subjects
Routing protocol ,Mathematical optimization ,Computer science ,business.industry ,Spectrum management ,Networking hardware ,IPv6 ,law.invention ,Cognitive radio ,law ,Internet Protocol ,The Internet ,Routing (electronic design automation) ,business ,Algorithm - Abstract
Internet of Things (IoT) technologies and the shortage of Internet Protocol (IP) addresses caused by the increasing number of internet devices have brought about the use of Internet Protocol Version 6 (IPv6) with its larger addressing structure. Besides the lack of addresses, spectrum scarcity and inefficient spectrum use are important problems in the consumption of network resources. Cognitive Radio Network (CRN) technology has been introduced to prevent these problems and to use the existing spectrum band more efficiently: with CRN, spectrum bands assigned to licensed users can be used by unlicensed users when no licensed communication is taking place. Routing is vital to the communication of network devices, and the RPL algorithm is a routing protocol used in IoT networks. In this study, a CR-enabled routing function is defined as an Objective Function (OF) on the RPL algorithm used for IoT devices. In current IoT networks, devices communicate on a single frequency and do not perform frequency switching. The proposed routing objective function chooses the routing path while considering the switching cost. The proposed frequency-switching-aware RPL algorithm is compared with standard RPL (Pure RPL) in terms of performance metrics such as energy consumption and energy efficiency.
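A schematic sketch of a switching-cost-aware parent selection rule in the spirit of the proposed OF; the rank constants, penalty value, and channel names are illustrative, not the paper's definitions.

```python
RANK_INCREASE = 256        # base rank step per hop, as in common RPL objective functions
SWITCH_PENALTY = 128       # extra rank charged when the parent uses a different frequency

def candidate_rank(parent_rank, parent_freq, own_freq):
    """Rank this node would advertise if it joined the DODAG through the given parent."""
    penalty = SWITCH_PENALTY if parent_freq != own_freq else 0
    return parent_rank + RANK_INCREASE + penalty

def select_parent(own_freq, candidates):
    """candidates: list of (name, advertised_rank, operating_frequency) tuples."""
    return min(candidates,
               key=lambda c: candidate_rank(c[1], c[2], own_freq))

parents = [("A", 512, "ch26"), ("B", 512, "ch15")]
print(select_parent("ch15", parents))   # -> ("B", 512, "ch15"): avoids a frequency switch
```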
- Published
- 2021
228. Are WANs Ready for Optical Topology Programming?
- Author
-
William B. Jensen, Matthew Nance Hall, Paul Barford, Ramakrishnan Durairajan, Manya Ghobadi, and Klaus-Tycho Foerster
- Subjects
Optical layer ,Computer science ,Fiber (computer science) ,Control reconfiguration ,Topology (electrical circuits) ,Topology ,Span (engineering) ,Networking hardware - Abstract
In today's wide-area networks, the optical layer is a relatively static and inflexible commodity. In response, Optical Topology Programming (OTP) has been proposed to enable fast and flexible reconfiguration of wavelengths at the optical layer from higher layers. We answer whether WANs are ready for OTP, concluding they are not. We reach this judgement by measuring reconfiguration delay on a long-haul fiber span. To push the needle on OTP towards feasibility, we show how to reduce the time to provision a circuit by an order of magnitude---from minutes to seconds. Finally, we propose a method to quickly store and load optical network equipment settings, reducing the time to less than 1 second.
- Published
- 2021
229. Secure Keyed Hashing on Programmable Switches
- Author
-
Sophia Yoo and Xiaoqi Chen
- Subjects
Computer science ,business.industry ,Semantics (computer science) ,Pipeline (computing) ,Cyclic redundancy check ,Hash function ,Forwarding plane ,Cryptographic hash function ,business ,SipHash ,Networking hardware ,Computer network - Abstract
Cyclic Redundancy Check (CRC) is a computationally inexpensive function readily available in many high-speed networking devices, and thus it is used extensively as a hash function in many data-plane applications. However, CRC is not a true cryptographic hash function, and it leaves applications vulnerable to attack. While cryptographically secure hash functions exist, there is no fast and efficient implementation for such functions on high-speed programmable switches. In this paper, we introduce an implementation of a secure keyed hash function optimized for commodity programmable switches and capable of running entirely within the data plane. We implement HalfSipHash on the Barefoot Tofino switch by using dependency management schemes to conserve pipeline stages and slicing semantics for concise circular bit shift operations. We show that our efficient implementation performs 67 million, 90 million, 150 million, and 304 million hashes per second for 32-byte, 24-byte, 16-byte, and 8-byte input strings, respectively.
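For reference, the core SipRound permutation that SipHash (and its 32-bit sibling HalfSipHash, with different rotation constants) iterates is only a handful of add-rotate-xor steps; the sketch below shows the 64-bit variant in Python, with the circular shift written as the shift-and-OR form it must be decomposed into on a switch pipeline.

```python
MASK64 = (1 << 64) - 1

def rotl64(x, r):
    """Circular left shift on 64-bit words (two shifts plus an OR on hardware without rotate)."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def sip_round(v0, v1, v2, v3):
    """One ARX round of the 64-bit SipHash permutation."""
    v0 = (v0 + v1) & MASK64; v1 = rotl64(v1, 13); v1 ^= v0; v0 = rotl64(v0, 32)
    v2 = (v2 + v3) & MASK64; v3 = rotl64(v3, 16); v3 ^= v2
    v0 = (v0 + v3) & MASK64; v3 = rotl64(v3, 21); v3 ^= v0
    v2 = (v2 + v1) & MASK64; v1 = rotl64(v1, 17); v1 ^= v2; v2 = rotl64(v2, 32)
    return v0, v1, v2, v3

print(sip_round(0x736f6d6570736575, 0x646f72616e646f6d, 0x6c7967656e657261, 0x7465646279746573))
```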
- Published
- 2021
230. Cost-effective capacity provisioning in wide area networks with Shoofly
- Author
-
John F. Arnold, Jamie Gaudette, Yawei Yin, Sharon Shoham, Rachee Singh, and Nikolaj Bjørner
- Subjects
Network planning and design ,Computer science ,business.industry ,Traffic engineering ,Optical link ,Cloud computing ,Provisioning ,business ,Network topology ,Resilience (network) ,Networking hardware ,Computer network - Abstract
In this work we propose Shoofly, a network design tool that minimizes hardware costs of provisioning long-haul capacity by optically bypassing network hops where conversion of signals from optical to electrical domain is unnecessary and uneconomical. Shoofly leverages optical signal quality and traffic demand telemetry from a large commercial cloud provider to identify optical bypasses in the cloud WAN that reduce the hardware cost of long-haul capacity by 40%. A key challenge is that optical bypasses cause signals to travel longer distances on fiber before re-generation, potentially reducing link capacities and resilience to optical link failures. Despite these challenges, Shoofly provisions bypass-enabled topologies that meet 8X the present-day demands using existing network hardware. Even under aggressive stochastic and deterministic link failure scenarios, these topologies save 32% of the cost of long-haul capacity.
- Published
- 2021
231. Remote Production System Concept Utilizing Optical Networks and Proof-of-concept for 8K Production
- Author
-
Koichi Takasugi, Daisuke Shirai, Yasuhiro Mochida, and Takahiro Yamaguchi
- Subjects
Computer science ,business.industry ,Networking hardware ,law.invention ,Uncompressed video ,Optical path ,Broadcasting (networking) ,Outside broadcasting ,Proof of concept ,law ,Internet Protocol ,business ,Graphical user interface ,Computer network - Abstract
Remote production is an emerging concept for outside broadcasting (OB) enabled using Internet Protocol (IP)-based production systems. Because multi-channel uncompressed video signals are transmitted to the broadcasting station without editing, expensive OB vans and editing crews are not required to be dispatched to the event venue. Therefore, the cost for OB should be substantially reduced. Although long-distance transmissions of uncompressed video and time synchronization of distributed IP-video transceivers are challenging, the application of optical networks is promising. The network equipment needs to be configured accordingly to utilize optical networks; however, production crews are not familiar with the network configuration. In this paper, we propose a remote production system that configures the network equipment as well as the IP-video transceivers in accordance with the requirements of video transmissions. We also report a proof-of-concept implementation for 8K production. The optical transponders as well as the IP-video transceivers were configured by selecting a sender and a receiver in the graphical user interface, and uncompressed 8K video was transmitted over the dynamically created optical path. In addition, another optical path could be created for seamless protection. The results of this study demonstrate that the system enables users to utilize optical networks without being aware of the network configurations.
- Published
- 2021
232. Root Cause Analysis in 5G/6G Networks
- Author
-
Rui L. Aguiar, Dinis Canastro, Ricardo Rocha, Diogo Gomes, and Mário Antunes
- Subjects
Root (linguistics) ,Computer science ,business.industry ,Process (engineering) ,Distributed computing ,Cellular network ,Graph (abstract data type) ,Cloud computing ,Root cause ,Root cause analysis ,business ,Networking hardware - Abstract
Network softwarization, the process by which network equipment is replaced by software running in a cloud environment, is playing an important role in the transformation of next-generation networks, starting with 5G. In this environment, the use of AI/ML-aided network automation to perform management tasks is vital to provide a more reliable and cost-effective network. One aspect in which AI/ML can play an important role is the quick diagnosis and tracking of the root causes of anomalies. In this paper, we propose a system capable of automating root cause analysis in a cellular network scenario. By collecting log files from negatively impacting events occurring across all network levels, we are able to find the correlations between them according to a series of defined rules and track down the root cause using a graph-based approach to dependencies.
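A toy illustration of the graph step using networkx: events are correlated along a "failure at A can cause symptoms at B" dependency graph, and alarmed nodes with no alarmed upstream dependency are reported as root-cause candidates; the topology and alarms are invented.

```python
import networkx as nx

# Directed edge (a, b): a failure at a can produce symptoms at b.
G = nx.DiGraph()
G.add_edges_from([
    ("optical-link-7", "router-3"),
    ("router-3", "vRAN-pool-1"),
    ("vRAN-pool-1", "cell-42"),
    ("vRAN-pool-1", "cell-43"),
])

alarmed = {"router-3", "vRAN-pool-1", "cell-42", "cell-43"}

def root_causes(graph, alarmed_nodes):
    """Alarmed nodes with no alarmed ancestor are the most upstream explanations."""
    return {n for n in alarmed_nodes
            if not (nx.ancestors(graph, n) & alarmed_nodes)}

print(root_causes(G, alarmed))   # -> {'router-3'}
```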
- Published
- 2021
233. Optimization of open flow controller placement in software defined networks
- Author
-
Raghda Salam Al mahdawi and Huda M. Salih
- Subjects
Software defined network ,Network architecture ,General Computer Science ,business.industry ,Computer science ,Distributed computing ,Big data ,Network topology ,Networking hardware ,Network interface controller ,Control theory ,Open flow controller ,Network controller ,Electrical and Electronic Engineering ,business ,Software-defined networking ,Computer networks - Abstract
The world is entering the era of Big Data, in which computer networks are an essential part. However, the current network architecture is not well suited to such a leap. Software defined networking (SDN) is a new network architecture that advocates separating the control and data planes of network devices by centralizing the former in high-level, centralised and efficient supervisory devices called controllers. This paper proposes a mathematical model that helps optimize the locations of the controllers within the network while minimizing the overall cost under realistic constraints. Our method finds the minimum cost of placing the controllers, where the costs are network latency, controller processing power and link bandwidth. Different types of network topologies have been adopted, considering the data profile of the controllers, the controllers' links and the locations of switches. The results showed that as the size of the input data increased, the time to find the optimal solution also increased non-polynomially, while the cost of the solution increased linearly with the input size. Furthermore, when the number of possible controller locations was increased for the same number of switches, the cost was found to be lower.
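A brute-force sketch of the placement problem for small instances (the paper uses a full mathematical model and larger topologies): choose k controller sites so that the sum of switch-to-controller latencies plus a fixed per-controller cost is minimal. The latency matrix, candidate sites, and costs are toy values.

```python
from itertools import combinations

# LATENCY[s][c]: propagation latency (ms) from switch s to candidate controller site c.
LATENCY = {
    "sw1": {"c1": 2, "c2": 9, "c3": 6},
    "sw2": {"c1": 8, "c2": 3, "c3": 5},
    "sw3": {"c1": 7, "c2": 4, "c3": 2},
    "sw4": {"c1": 3, "c2": 8, "c3": 9},
}
CONTROLLER_COST = 10          # fixed cost per deployed controller
SITES = ["c1", "c2", "c3"]

def placement_cost(chosen):
    """Each switch attaches to its nearest chosen controller; add fixed deployment cost."""
    latency_cost = sum(min(LATENCY[s][c] for c in chosen) for s in LATENCY)
    return latency_cost + CONTROLLER_COST * len(chosen)

def best_placement(k):
    return min(combinations(SITES, k), key=placement_cost)

for k in (1, 2, 3):
    p = best_placement(k)
    print(k, p, placement_cost(p))
```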
- Published
- 2021
234. Undergraduate students’ device preferences in the transition to online learning
- Author
-
Kelum A. A. Gamage and E. N. C. Perera
- Subjects
business.product_category ,020205 medical informatics ,Higher education ,Computer science ,media_common.quotation_subject ,online learning ,educational technologies ,Social Sciences ,02 engineering and technology ,computer.software_genre ,Literacy ,Software portability ,digital skills ,0202 electrical engineering, electronic engineering, information engineering ,Internet access ,ComputingMilieux_COMPUTERSANDEDUCATION ,media_common ,Multimedia ,business.industry ,Transition (fiction) ,05 social sciences ,050301 education ,General Social Sciences ,COVID-19 ,Networking hardware ,Blended learning ,device use ,Key (cryptography) ,business ,0503 education ,computer - Abstract
The global higher education sector has been greatly affected by the COVID-19 pandemic, and the mode of delivery has transformed into a blended or fully remote mode. Online delivery places significant demands on reliable and stable internet access and technology at both the lecturer's and students' ends. This paper investigates the challenges and barriers to accessibility of the technologies used for remote delivery of learning and teaching. The paper also investigates the key digital skills students need to help them develop and enhance their technology literacy. A survey was conducted among 555 university undergraduate students to identify their choice of device for connecting to remote learning during the transition to online learning. It revealed that students relied considerably on laptops and smartphones and least on desktop computers. The results indicate the significance of a device's portability, built-in network hardware, and cost. Further, the paper identifies the impacts of the accessibility of educational technologies on students' learning experience.
- Published
- 2021
235. Critical Analysis of Virtual LAN and Its Advantages for the Campus Networks
- Author
-
Saampatii Vakharkar and Nitin Sakhare
- Subjects
Router ,Class (computer programming) ,Computer science ,Virtual LAN ,business.industry ,Network packet ,Association (object-oriented programming) ,Troubleshooting ,Networking hardware ,law.invention ,law ,business ,Ip address ,Computer network - Abstract
One of the hottest areas of networking is VLAN technology. A VLAN permits network devices to be grouped into virtual LANs by logical association rather than physical association. In this paper, we present a critical analysis of VLANs. We also examine the advantages of using VLANs: they help create multiple networks with one class of IP address and, by restricting inter-VLAN communication, allow or deny users access to a particular network. The basic purpose of implementing VLANs is segmenting networks, and to better illustrate this, we show a VLAN configuration on Cisco Packet Tracer. VLANs have various advantages, but in our study we found that VLANs are used for many purposes they were not originally intended for.
- Published
- 2021
236. Implementation of Layer 2 MPLS VPN on the SDN Hybrid Network using Ansible and ONOS Controllers
- Author
-
Bayu Guntur Arif Saputra, Kukuh Nugroho, and Syariful Ikhwan
- Subjects
Computer science ,business.industry ,computer.internet_protocol ,Throughput ,Multiprotocol Label Switching ,Network topology ,Networking hardware ,Packet loss ,Forwarding plane ,Layer 2 MPLS VPN ,business ,Software-defined networking ,computer ,Computer network - Abstract
L2VPN AToM (Any Transport over MPLS) is an Ethernet-based private communication service that connects networks in different geographical locations so that they logically appear to be on the same network, through MPLS networks in the same bridge domain. L2VPN AToM technology is a consideration in enterprise networks for managing the availability of data sources centrally, safely, and quickly together with branch offices. L2VPN AToM becomes inefficient and inflexible in managing large networks formed between a head office and branch offices, which must be configured manually, one device at a time, for every branch office to be connected. The L2VPN AToM configuration can instead be managed centrally using an SDN architecture to improve network efficiency. However, this architecture requires a long transition period and incurs costs to replace conventional network devices that are already operating. Hybrid SDN (Software Defined Network) technology is therefore needed as a solution for centrally managing conventional networks. A hybrid SDN network separates the control plane and the data plane while allowing conventional network devices to be incorporated into the SDN architecture. This research uses Ansible as a controller that distributes conventional network configuration centrally and an ONOS controller as a traffic management service for L2VPN AToM traffic. The results show that the hybrid SDN network performs better than the conventional network, with average differences of 3.12% in throughput, 2.12% in delay, and 0.3% in packet loss.
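As a small illustration of the controller side of such a hybrid setup, the sketch below queries an ONOS controller's device inventory over its REST API before configuration is pushed (e.g., by an Ansible playbook). The controller address is hypothetical, and the port 8181, `/onos/v1/devices` path, and default onos/rocks credentials are common ONOS defaults assumed here rather than taken from the paper.

```python
# Minimal sketch: list the devices an ONOS controller currently manages, as a sanity
# check before distributing L2VPN AToM configuration centrally.
# Controller address, port, path, and credentials are assumptions.
import requests

ONOS_URL = "http://192.0.2.10:8181/onos/v1/devices"  # hypothetical controller address

resp = requests.get(ONOS_URL, auth=("onos", "rocks"), timeout=5)
resp.raise_for_status()
for dev in resp.json().get("devices", []):
    print(dev.get("id"), dev.get("available"))
```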
- Published
- 2021
237. A practical approach for applying machine learning in the detection and classification of network devices used in building management
- Author
-
Maroun Touma, Shalisha Witherspoon, Isabelle Crawford-Eng, and Shonda Witherspoon
- Subjects
Feature engineering ,Computer science ,business.industry ,Deep learning ,ensemble ,BACnet ,Feature selection ,QA75.5-76.95 ,General Medicine ,Machine learning ,computer.software_genre ,binary classifier ,Networking hardware ,Critical infrastructure ,SCADA ,Electronic computers. Computer science ,Artificial intelligence ,business ,computer ,model zoo ,Building automation - Abstract
With the increasing deployment of smart buildings and infrastructure, Supervisory Control and Data Acquisition (SCADA) devices and the underlying IT network have become essential elements for the proper operation of these highly complex systems. With the increase in automation and the proliferation of SCADA devices, the attack surface of critical infrastructure has grown correspondingly. Understanding device behaviors in terms of known and understood (or potentially qualified) activities versus unknown and potentially nefarious activities in near-real time is a key component of any security solution. In this paper, we investigate the challenges of building robust machine learning models to identify unknowns purely from network traffic both inside and outside firewalls, starting with missing or inconsistent labels across sites, feature engineering and learning, temporal dependencies and analysis, and training data quality (including small sample sizes) for both shallow and deep learning methods. To demonstrate these challenges and the capabilities we have developed, we focus on Building Automation and Control networks (BACnet) from a private commercial building system. Our results show that a "Model Zoo" built from binary classifiers for each device or behavior, combined with an ensemble classifier integrating information from all classifiers, provides a reliable methodology for identifying unknown devices as well as for determining specific known devices when the device type is in the training set. The capability of the Model Zoo framework is shown to be directly linked to feature engineering and learning, and the dependence on feature selection varies across both the binary and ensemble classifiers.
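The sketch below illustrates the model-zoo idea in scikit-learn: one binary classifier per known device type, plus a simple rejection rule that labels a sample "unknown" when no classifier is confident. The features, labels, and 0.5 threshold are placeholders, not the paper's feature pipeline or ensemble.

```python
# Minimal sketch of a "model zoo": one binary classifier per known device type,
# with an unknown-device rejection rule. Features, labels, and threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                         # stand-in traffic features
y = rng.choice(["hvac", "lighting", "camera"], 200)   # stand-in device labels

zoo = {}
for device in np.unique(y):
    clf = LogisticRegression(max_iter=1000).fit(X, (y == device).astype(int))
    zoo[device] = clf

def classify(x, threshold=0.5):
    """Return the most confident device label, or 'unknown' if no classifier is sure."""
    scores = {d: clf.predict_proba(x.reshape(1, -1))[0, 1] for d, clf in zoo.items()}
    best, p = max(scores.items(), key=lambda kv: kv[1])
    return best if p >= threshold else "unknown"

print(classify(rng.normal(size=8)))
```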
- Published
- 2021
238. Development and Implementation of Counselor Work Management Information System based on MySQL and Data Center
- Author
-
Hongmei Chen, Xi Wang, Jingjing Li, and Junwei Li
- Subjects
Management information systems ,Engineering management ,Software ,Work (electrical) ,Computer science ,business.industry ,Computer cluster ,ComputingMilieux_COMPUTERSANDEDUCATION ,Information system ,The Internet ,Data center ,business ,Networking hardware - Abstract
Counselors are guides and intimate friends in the healthy growth of college students, and their work efficiency has a direct impact on the overall efficiency of school student management. Due to many factors, such as the arrangement of college counselors, the student working mechanism, the number of students, and the student-to-counselor ratio, the efficiency of student management in many colleges and universities is low. At the same time, the number of counselors cannot meet the actual needs of complex student management. A data center can be understood as a centralized data processing center. It usually consists of one or more computer clusters together with supporting network equipment, storage equipment, security equipment, power systems, and management software. This paper studies a counselor work management information system based on a database and a data center.
- Published
- 2021
239. A crosstalk-aware and energy-saving survivable RSCA for online prioritized traffic in SDM-EONs
- Author
-
Uma Bhattacharya, Monish Chatterjee, and Smita Paira
- Subjects
business.industry ,Computer science ,Bandwidth (signal processing) ,Survivability ,Redundancy (engineering) ,Energy consumption ,Routing (electronic design automation) ,business ,Blocking (statistics) ,Multipath propagation ,Networking hardware ,Computer network - Abstract
Inter-core crosstalk due to the presence of multiple cores is one of the major issues to be addressed in space division multiplexing based elastic optical networks (SDM-EONs). In addition, due to the random occurrence of spectral fragmentation in online SDM-EONs, many high-priority connections get blocked. The underlying SDM-EON also suffers from substantial data loss even if a single link in the network fails. Survivability is ensured in this paper by using a multipath-based approach; however, survivability demands sufficient redundancy in the network devices, which in turn increases energy consumption in the network. To address these issues simultaneously, this paper proposes a novel energy-efficient and crosstalk-aware multipath-based survivable routing, spectrum and core allocation scheme, ECM-P-RSCA, for prioritized traffic in online SDM-EONs. Extensive simulation results prove the efficacy of ECM-P-RSCA over a similar non-prioritized scheme in terms of various network parameters such as bandwidth blocking, energy consumption, spectral occupation rate, and crosstalk generated per slot. It is also observed that as the number of cores in the SDM-EON increases, the crosstalk per slot ratio increases.
- Published
- 2021
240. Towards Extracting Semantics of Network Config Blocks
- Author
-
Akashi Osamu, Hiroshi Esaki, Kazuki Otomo, Kimihiro Mizutani, Kensuke Fukuda, and Satoru Kobayashi
- Subjects
Syntax (programming languages) ,Semantic similarity ,Block (programming) ,Computer science ,Programming language ,Key (cryptography) ,Context (language use) ,computer.software_genre ,Cluster analysis ,Semantics ,computer ,Networking hardware - Abstract
Configuring network devices is a main task of network operators. However, understanding and consistently updating network configuration files (configs) is not an easy task, especially in large-scale and complicated networks. In this paper, we propose a semantic approach, as opposed to syntax-based approaches, to provide a better understanding of such config files. The key idea of the work is to extract the semantics of blocks of the config files using document embedding techniques from NLP. This extraction enables us to understand the context of config blocks with semantic similarity metrics instead of syntactic ones. Furthermore, this approach can be naturally extended to additional technical documents, such as vendors' manuals, to add more specific information on the semantics of configs. We first discuss the quality of the obtained semantics for several embedding techniques using clustering evaluations. We then demonstrate the effectiveness of our approach with two case studies on real network configs: (1) similar config block detection and (2) automatic labeling of config blocks with vendor documents.
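As a rough sketch of what embedding config blocks looks like, the snippet below uses gensim's Doc2Vec on tokenized blocks and compares them by similarity. The config snippets and hyperparameters are invented, and the paper evaluates several embedding techniques, not necessarily this one.

```python
# Minimal sketch: embed config blocks with Doc2Vec and compare their semantic similarity.
# Config snippets and hyperparameters are illustrative assumptions.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

blocks = [
    "interface ge-0/0/1 description uplink mtu 9000",
    "interface ge-0/0/2 description uplink mtu 9000",
    "router bgp 65001 neighbor 192.0.2.1 remote-as 65002",
]
docs = [TaggedDocument(words=b.split(), tags=[str(i)]) for i, b in enumerate(blocks)]
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=50)

# The two interface blocks should be closer to each other than to the BGP block.
print(model.dv.similarity("0", "1"), model.dv.similarity("0", "2"))
```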
- Published
- 2021
241. Design and implementation of a network monitoring system for campus infrastructure using software agents
- Author
-
Rodrigo Ivan Espinel Villalobos, Jorge Eduardo Ortiz Triviño, Erick Ardila Triana, and Henry Zárate Ceballos
- Subjects
snmp ,network monitoring ,Computer science ,parallelization ,multi-agent system ,monitoreo de redes ,sistemas distribuidos ,distributed systems ,Network architecture ,business.industry ,Multi-agent system ,General Engineering ,paralelización ,Building and Construction ,Network monitoring ,sistema multi-agente ,Simple Network Management Protocol ,Engineering (General). Civil engineering (General) ,Networking hardware ,Network management ,Software agent ,TA1-2040 ,business ,Management information base ,Computer network ,SNMP - Abstract
In network management and monitoring systems, or Network Management Stations (NMS), the Simple Network Management Protocol (SNMP) is normally used, with which it is possible to obtain information on the behavior, the values of the variables, and the status of the network architecture. However, for large corporate networks, the protocol can present latency in data collection and processing, thus making real-time monitoring difficult. This article proposes a layer-based multi-agent system with three types of agents: a collector agent, which uses Management Information Base (MIB) values to collect information from the network equipment; a consolidator agent, which processes the collected device data and leaves it in a consumable format; and an application agent, which presents the result as a web service, in this case a heat map.
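The sketch below illustrates the collector-agent step: polling a single MIB value (sysUpTime here) over SNMPv2c using pysnmp. The device address, community string, and OID are placeholders, and the consolidation and heat-map layers are omitted.

```python
# Minimal sketch of a collector agent: poll one MIB value via SNMP and return it
# for a consolidator to process. Host, community, and OID are placeholder assumptions.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def collect(host, oid="1.3.6.1.2.1.1.3.0", community="public"):
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),   # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid)),
    ))
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return {str(name): str(value) for name, value in var_binds}

print(collect("192.0.2.1"))  # hypothetical device address
```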
- Published
- 2021
242. Demo: Disaggregated Dataplanes
- Author
-
Nik Sultana, Rakesh Nagda, Boon Thau Loo, and Heena Nagda
- Subjects
Flexibility (engineering) ,business.product_category ,Source code ,Computer science ,business.industry ,media_common.quotation_subject ,computer.software_genre ,Networking hardware ,Computer network programming ,Documentation ,Scripting language ,Internet access ,business ,Software engineering ,computer ,Heterogeneous network ,media_common - Abstract
Modern programmable network hardware enables in-network computing: pushing increasingly complex logic into the network to improve the performance, flexibility and reliability of network services. But the current network programming paradigm is constrained to programming a single network device at a time. The lack of support for in-network programs that use several heterogeneous network hardware devices simultaneously constrains the scale and behaviour of in-network programs. Dataplane Disaggregation is a new paradigm that addresses this problem. It distributes computations across programmable network hardware, including switches and smart NICs. This paradigm transforms a monolithic in-network program into a distributed system executing on possibly heterogeneous resources. The goal of this demo is to make an accessible presentation of Dataplane Disaggregation to the wider distributed systems community. This is intended to stimulate discussion on effective ways to program distributed and heterogeneous systems. Our demo is based on the Flightplan system prototype. Flightplan is open-source and comes with detailed documentation and support scripts, yet it requires some effort to set up and run, which impedes its study by others. Our demo runs completely in the browser and does not burden viewers with any installation effort at all. The technical contribution of this demo consists of a customised visualisation of Flightplan experiments. Moreover, the demo is well-suited to virtual events, as is planned for ICDCS'21, since it can be run independently and asynchronously by viewers of the demo. This is especially helpful for viewers with slow or intermittent Internet connections. We make the demo's source code freely available online for use by others, including researchers who want to build similar demos.
- Published
- 2021
243. A Quantitative Causal Analysis for Network Log Data
- Author
-
Richard Jarry, Satoru Kobayashi, and Kensuke Fukuda
- Subjects
Structure (mathematical logic) ,Computer science ,computer.internet_protocol ,Root cause ,computer.software_genre ,Networking hardware ,Causality (physics) ,Set (abstract data type) ,syslog ,Data mining ,Time series ,Root cause analysis ,computer ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
Data logs from network devices are the primary data for understanding the current status of operational networks. However, since many heterogeneous devices generate network logs, extracting information on the network status from such logs is not an easy task in network operations, e.g., root cause analysis of network events. Though multivariate time-series based log analyses extract the correlation structure of the logs, identifying causality in network logs is still a complex and challenging problem. The state-of-the-art PC algorithm has been applied to network log analysis, but it has two fundamental limitations: (1) generated graphs still have many undirected edges, and (2) edges have no weight indicating how plausible the causality is. To overcome these two limitations, in this paper we apply MixedLiNGAM to network log analysis; this algorithm produces weighted DAGs from a set of multivariate log time series. To show the effectiveness of the proposed method, we apply MixedLiNGAM to a set of syslog data collected at a research and education network in Japan, and then compare the output causal graphs generated by MixedLiNGAM and the PC algorithm. Our results demonstrate that the obtained weighted directed edges help better understand the root causes of network events.
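The sketch below shows the general shape of LiNGAM-style causal discovery on log count time series. It uses DirectLiNGAM from the open-source `lingam` package as a stand-in for the paper's MixedLiNGAM, with synthetic data and invented variable names, so it illustrates the workflow rather than the authors' method.

```python
# Minimal sketch of LiNGAM-style causal discovery on synthetic log count time series.
# DirectLiNGAM is used as a stand-in for MixedLiNGAM; data and names are assumptions.
import numpy as np
import lingam

rng = np.random.default_rng(1)
n = 500
link_down = rng.poisson(1.0, n).astype(float)
bgp_flap = 0.8 * link_down + rng.normal(0, 0.1, n)   # caused by link_down
ospf_adj = 0.5 * bgp_flap + rng.normal(0, 0.1, n)    # caused by bgp_flap
X = np.column_stack([link_down, bgp_flap, ospf_adj])

model = lingam.DirectLiNGAM()
model.fit(X)
# adjacency_matrix_[i, j] is the estimated causal effect of variable j on variable i,
# i.e., the weighted directed edges of the recovered DAG.
print(np.round(model.adjacency_matrix_, 2))
```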
- Published
- 2021
244. Super-Cloudlet: Rethinking Edge Computing in the Era of Open Optical Networks
- Author
-
Behzad Mirkhanzadeh, Tianliang Zhang, Andrea Fumagalli, Miguel Razo-Razo, and Marco Tacca
- Subjects
Ethernet ,business.industry ,Computer science ,Quality of service ,Data center ,Cloudlet ,Enhanced Data Rates for GSM Evolution ,Software-defined networking ,business ,Networking hardware ,Edge computing ,Computer network - Abstract
Edge computing is an attractive architecture for efficiently providing compute resources to the many applications that demand specific QoS requirements. The edge compute resources are in close geographical proximity to where the applications' data originate and/or are being supplied, thus avoiding unnecessary back-and-forth data transmission with a distant data center. This paper describes a federated edge computing system in which compute resources at multiple edge sites are dynamically aggregated to form distributed super-cloudlets and best respond to varying application-driven loads. In its simplest form, a super-cloudlet consists of compute resources available at two edge computing sites, or cloudlets, that are (temporarily) interconnected by dedicated optical circuits deployed to enable low-latency and high-rate data exchanges. A super-cloudlet architecture is experimentally demonstrated over the largest public OpenROADM optical network testbed to date, consisting of commercial equipment from six suppliers. The software defined networking (SDN) PROnet Orchestrator is upgraded to concurrently manage the resources offered by the optical network equipment, compute nodes, and associated Ethernet switches, and to achieve three key functionalities of the proposed super-cloudlet architecture, i.e., service placement, auto-scaling, and offloading.
- Published
- 2021
245. Haechi: A Token-based QoS Mechanism for One-sided I/Os in RDMA based Storage System
- Author
-
Qingyue Liu and Peter Varman
- Subjects
Memory management ,Remote direct memory access ,Distributed database ,business.industry ,Computer science ,Quality of service ,Server ,Throughput ,business ,Security token ,Networking hardware ,Computer network - Abstract
Advances in persistent memory and networking hardware are changing the architecture of storage systems and data management services in datacenters. Distributed, one-sided RDMA access to memory-resident data shows tremendous improvements in throughput, latency and server CPU utilization of storage servers. However, the silent nature of one-sided I/O simultaneously creates new challenging problems for providing QoS in such systems. In this paper, we propose Haechi, a work-conserving, token-based QoS mechanism to guarantee reservations and limits in storage systems that provide one-sided I/O services. Haechi decouples QoS enforcement into a QoS engine at the client and a QoS monitor at the data node. It leverages adaptive token dispatch, token conversion, and silent I/O reporting to guarantee the reservations of distributed clients while maintaining high server utilization. Empirical evaluations on the Chameleon cluster, with different reservation distributions and I/O access patterns, show that Haechi is successful in providing differentiated QoS with negligible overhead for token management.
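To illustrate the token-based admission idea at the client side, the sketch below admits a one-sided I/O only when tokens are available and replenishes tokens at the client's reserved rate. The rates, the one-token-per-I/O accounting, and the class name are simplifying assumptions; Haechi's actual token dispatch, conversion, and silent I/O reporting are not reproduced here.

```python
# Minimal sketch of a client-side token engine for one-sided I/O QoS: an I/O is issued
# only if a token is available; tokens accrue at the reserved rate up to a burst cap.
# Rates and accounting are simplifying assumptions, not Haechi's protocol.
import time

class TokenEngine:
    def __init__(self, reserved_iops, burst):
        self.rate = reserved_iops      # tokens added per second
        self.capacity = burst          # maximum tokens that can accumulate
        self.tokens = burst
        self.last = time.monotonic()

    def try_issue(self, cost=1.0):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True                # issue the one-sided RDMA read/write
        return False                   # defer, or report to the QoS monitor

engine = TokenEngine(reserved_iops=1000, burst=100)
print(sum(engine.try_issue() for _ in range(200)))  # roughly the burst allowance at start
```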
- Published
- 2021
246. Logistic Regression Analysis of Online Course Click Rate
- Author
-
Sun Jia
- Subjects
Classroom teaching ,Coronavirus disease 2019 (COVID-19) ,Multimedia ,Computer science ,Online learning ,Online course ,Active learning ,ComputingMilieux_COMPUTERSANDEDUCATION ,Logistic regression ,computer.software_genre ,computer ,Networking hardware ,Learning behavior - Abstract
The advantages of online learning are very significant. With the popularization of network technology and the development of network hardware and mobile terminals, a foundation has been laid for the rapid development of online learning. Due to the outbreak of the COVID-19 epidemic in 2020, classroom teaching could not be carried out, so many schools used online platforms to implement online teaching. Teaching shifted from traditional face-to-face instruction supplemented by online teaching to a fully online form. At present, online teaching mainly takes three forms: live broadcast, recorded broadcast, and online self-directed learning. At the same time, a large amount of student activity data is generated on the teaching platform. Taking the online teaching data of colleges and universities in the first half of 2020 as an example, the mining tool RapidMiner is used to mine and analyze teaching data, student click-through rates, and student performance data. We analyze the implementation of online teaching and the mechanisms of various influencing factors, and use logistic regression to predict whether students will pass the exam. On the online learning platform, students' learning content is the same; what differs is their learning behavior on the platform. The click-through rate reflects students' learning behavior in the data, and performance is closely related to the degree of active learning.
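The sketch below shows the prediction step with logistic regression on click counts. The paper uses RapidMiner; scikit-learn and the synthetic data here are substitutions for illustration only.

```python
# Minimal sketch: logistic regression on click counts to predict whether a student passes.
# Synthetic data and scikit-learn stand in for the paper's RapidMiner workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
clicks = rng.poisson(30, size=(300, 1)).astype(float)            # clicks per student
passed = (clicks[:, 0] + rng.normal(0, 10, 300) > 25).astype(int)  # noisy pass/fail label

X_train, X_test, y_train, y_test = train_test_split(clicks, passed, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
print("P(pass | 40 clicks):", model.predict_proba([[40.0]])[0, 1])
```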
- Published
- 2021
247. LiONv2: An Experimental Network Construction Tool Considering Disaggregation of Network Configuration and Device Configuration
- Author
-
Yuki Nagai, Fumio Teraoka, Hiroki Watanabe, and Takao Kondo
- Subjects
NETCONF ,computer.internet_protocol ,business.industry ,Computer science ,Node (networking) ,Virtualization ,computer.software_genre ,Network topology ,Networking hardware ,Software deployment ,business ,computer ,Virtual network ,Protocol (object-oriented programming) ,Computer network - Abstract
An experimental network environment plays an important role in examining new systems and protocols. We have developed an experimental network construction tool called LiONv1 (Lightweight On-Demand Networking, ver. 1). LiONv1 satisfies the following four requirements: a programmer-friendly configuration file based on Infrastructure as Code, multiple virtualization technologies for virtual nodes, physical-topology-conscious virtual node placement, and L3-protocol-agnostic virtual networks. None of the existing experimental network environments satisfies all four requirements. In this paper, we develop LiONv2, which satisfies three more requirements: diversity of available network devices, Internet-scale deployment, and disaggregation of network configuration and device configuration. LiONv2 employs NETCONF and YANG to achieve diversity of available network devices and Internet-scale deployment. LiONv2 also defines two YANG models which disaggregate network configuration and device configuration. LiONv2 is implemented in Go and C with public libraries for Go. Measurement results show that the construction time of a virtual network is independent of the number of virtual nodes if a single virtual node is created per physical node.
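As a small illustration of the device-configuration side that NETCONF and YANG enable, the sketch below connects to a NETCONF-capable node with ncclient, lists a few advertised capabilities, and fetches the running configuration. The host, credentials, and port are placeholders, and LiONv2's own YANG models are not reproduced.

```python
# Minimal sketch: fetch the running configuration from a NETCONF-capable node.
# Host, credentials, and port are placeholder assumptions.
from ncclient import manager

with manager.connect(
    host="192.0.2.20",        # hypothetical physical node
    port=830,
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    print(sorted(m.server_capabilities)[:5])   # advertised NETCONF/YANG capabilities
    running = m.get_config(source="running")
    print(running.data_xml[:200])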
- Published
- 2021
248. Leveraging In-Network Computing and Programmable Switches for Streaming Analysis of Scientific Data
- Author
-
Raj Kettimuthu, Joaquin Chung, and Ganesh C. Sankaran
- Subjects
Stream processing ,Focus (computing) ,Load management ,SIMPLE (military communications protocol) ,Computer science ,Distributed computing ,Normalization (image processing) ,Approximation algorithm ,Load balancing (computing) ,Networking hardware - Abstract
With the emergence of programmable network devices that match the performance of fixed function devices, several recent projects have explored in-network computing, where the processing that is traditionally done outside the network is offloaded to the network devices. In-network computing has typically been applied to network functions (e.g., load balancing, NAT, and DNS), caching, data reduction/aggregation, and coordination/consensus functions. In some cases it has been used to accelerate stream-processing tasks that involve small payloads and simple operations. In this work we focus on leveraging in-network computing for stream processing of scientific datasets with large payloads that require complex operations such as floating-point computations and logarithmic functions. We demonstrate in-network computing for a real-world scientific application performing streaming normalization of a 2-D image from a light source experiment. We discuss the challenges we encountered and potential approaches to address them.
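The sketch below shows the offloaded computation itself in plain Python: per-frame normalization of streamed 2-D detector frames against a flat-field reference, including the floating-point and logarithmic steps the paper highlights as challenging for switches. The frame shape, chunking, and log scaling are illustrative assumptions, not the application's exact pipeline.

```python
# Minimal sketch of the offloaded operation: streaming normalization of 2-D detector
# frames against a flat-field reference, one frame at a time as data arrives.
# Frame shape and the log scaling are illustrative assumptions.
import numpy as np

def normalize_stream(frames, flat_field, eps=1e-6):
    """Yield log-normalized frames, as an in-network pipeline would process them."""
    for frame in frames:
        ratio = frame.astype(np.float64) / (flat_field + eps)
        yield np.log(ratio + eps)   # the floating-point/logarithmic step noted in the paper

rng = np.random.default_rng(0)
flat = rng.uniform(0.5, 1.5, size=(64, 64))
stream = (rng.poisson(100, size=(64, 64)) for _ in range(3))
for out in normalize_stream(stream, flat):
    print(out.mean())
```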
- Published
- 2021
249. Mind the Semantic Gap: Policy Intent Inference from Network Metadata
- Author
-
Faraz Ahmed, Charles F. Clark, Puneet Sharma, Anu Mercian, and Shaun Wackerly
- Subjects
Metadata ,Structure (mathematical logic) ,Computer science ,Network security policy ,Inference ,Troubleshooting ,Semantics ,Data science ,Networking hardware ,Semantic gap - Abstract
Network policy management is a tedious and laborious task because of the scale and dynamic changes in the network. The advent of softwarized networks has led to a renewed interest in intent-based network policy management. Intent-based networking provides a structured way of specifying the intent of policies, which are automatically translated and compiled into network device configuration. While this top-down approach from policy intent to policy configuration has worked well for cloud-native infrastructures such as data centers, it has not seen much adoption in legacy networks. We believe one of the primary reasons for this is the semantic gap between policy intents and policy configurations. The problem is further exacerbated by the heterogeneity, scale-on-the-fly, fragmentation, and lack of structure in non-intent-native networks. We introduce the Policy Intent Inference (PII) system to bridge this semantic gap; its inference layer extracts policy intents from policy configurations fragmented across disparate network devices. We adopt a bottom-up approach that extracts all policies within network devices, abstracts them into a structured data model, and, using clustering and information retrieval techniques, derives network-wide policy intents from the underlying network. This eases policy management, especially policy troubleshooting, reduces configuration clutter, and reduces the time taken to compile and resolve policy conflicts.
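The sketch below illustrates the bottom-up grouping step: vectorizing policy fragments collected from different devices and clustering them so fragments that likely share an intent land together. The ACL lines, TF-IDF features, and cluster count are invented for illustration, not PII's data model or algorithms.

```python
# Minimal sketch: cluster textual policy fragments so rules with a shared intent group
# together. Rules, features, and k=2 are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

policies = [
    "permit tcp 10.0.1.0/24 any eq 443",
    "permit tcp 10.0.2.0/24 any eq 443",
    "deny udp any any eq 161",
    "deny udp any any eq 162",
]
X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(policies)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for rule, label in zip(policies, labels):
    print(label, rule)
```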
- Published
- 2021
250. Towards Understanding the Performance of Traffic Policing in Programmable Hardware Switches
- Author
-
Amaury Van Bemten, Carmen Mas-Machuca, Nemanja Eeric, Amir Varasteh, and Wolfgang Kellerer
- Subjects
business.industry ,Computer science ,Bandwidth (computing) ,Key (cryptography) ,Traffic policing ,The Internet ,Kilobit ,Enhanced Data Rates for GSM Evolution ,Predictability ,business ,Networking hardware ,Computer network - Abstract
To provide the predictability required by emerging applications, operators typically rely on policing and/or shaping at the edge to ensure that tenants do not use excess bandwidth that was not accounted for. One of the promises of 6G is to deploy applications with strict predictability requirements across subnets and even over the Internet, where policing cannot be implemented in the end hosts. This paper presents an empirical study of the ability of modern programmable network devices to implement predictable traffic policing in the network. We find that none of the five investigated hardware switches can provide accurate traffic policing, a key requirement for providing predictable service to applications. We observe that the switches let applications send more than they should be allowed to, with relative errors reaching up to 60% for the rate parameter and 100% for the burst parameter. We further uncover the fact that switches cannot police arbitrarily small bursts, e.g., no less than 13 kilobits for one of our switches. We investigate how such limitations impact the performance of state-of-the-art solutions for predictable latency such as Chameleon. We observe that, to ensure its predictability guarantees, Chameleon rejects around 50% of the tenants it could accommodate if switches were perfect, hence decreasing the operator's revenue by the same ratio. Based on these observations, we discuss solutions toward more accurate and predictable policing in wide-area networks.
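The sketch below shows how a relative-error figure of this kind can be computed: compare what a switch actually forwarded against what an ideal token-bucket policer with the configured rate and burst would admit. The configured and measured values are invented for illustration.

```python
# Minimal sketch of the accuracy metric: relative error of a hardware policer versus an
# ideal token bucket with the configured rate and burst. Numbers below are invented.
def allowed_bytes(rate_bps, burst_bits, duration_s):
    """Upper bound an ideal token-bucket policer would admit over the interval."""
    return (rate_bps * duration_s + burst_bits) / 8.0

configured_rate, configured_burst, duration = 10_000_000, 13_000, 1.0   # bps, bits, s
ideal = allowed_bytes(configured_rate, configured_burst, duration)
measured = 1.6 * ideal          # e.g., a switch forwarding 60% more than allowed

relative_error = (measured - ideal) / ideal
print(f"relative error: {relative_error:.0%}")   # 60%
```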
- Published
- 2021