24,585 results
Search Results
2. Distributed Resource Allocation Under Mobile Edge Computing Networks: Invited Paper
- Author
-
Qimei Chen and Yang Yang
- Subjects
Base station, Mobile edge computing, Computer science, business.industry, Distributed computing, Resource allocation, Cloud computing, Enhanced Data Rates for GSM Evolution, Small cell, business, Spectrum management, Scheduling (computing)
- Abstract
Long-Term Evolution in unlicensed spectrum (LTE-U) can use centralized scheduling, interference coordination, and other techniques to achieve better spectrum efficiency (SE). As an evolution of cloud computing, mobile edge computing (MEC) moves computing and storage capacity from the centralized data center to the edge of the network, a key technology for achieving low delay and high speed. However, a traditional centralized scheme in a small cell network (SCN) structure incurs huge signaling overheads. To adapt to complex and changeable network conditions, this paper proposes a distributed resource and power allocation scheme for the MEC environment that enables small base stations (SBSs) to work autonomously. The SBSs need only a small amount of information exchange through the information cloud (IC) and ultimately attain the globally optimal SE. Simulation results confirm the correctness and effectiveness of the proposed scheme and demonstrate the superiority of the distributed scheme over the centralized one.
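The distributed idea in the abstract can be sketched as a best-response iteration: each SBS repeatedly picks the transmit power that maximizes its own utility given the interference it reads from the information cloud. Everything below (the utility shape, the power price, the grid search) is an illustrative assumption, not the authors' actual scheme:

```python
import math

# Hypothetical sketch: each small base station (SBS) best-responds to the
# aggregate interference reported via the information cloud, instead of
# being scheduled centrally.
def best_response(gain, interference, noise, price, p_max, steps=100):
    """Pick the transmit power maximizing local utility: rate minus a power price."""
    grid = [p_max * k / steps for k in range(steps + 1)]
    util = lambda p: math.log2(1 + p * gain / (interference + noise)) - price * p
    return max(grid, key=util)

def distributed_allocation(gains, cross, noise, price, p_max, iters=50):
    """Iterate best responses; cross[i][j] is the cross-gain from SBS j to user i."""
    powers = [0.0] * len(gains)
    for _ in range(iters):
        for i, g in enumerate(gains):
            interf = sum(cross[i][j] * powers[j] for j in range(len(gains)) if j != i)
            powers[i] = best_response(g, interf, noise, price, p_max)
    return powers
```

Such Gauss-Seidel-style iterations converge to a Nash equilibrium under suitable contraction conditions; the paper's contribution is showing the SBSs reach the global SE optimum with little exchanged information.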
- Published
- 2021
3. Eudoxus: Characterizing and Accelerating Localization in Autonomous Machines Industry Track Paper
- Author
-
Yanjun Zhang, Jie Tang, Shaoshan Liu, Yu Bo, Yuhao Zhu, Leimeng Xu, Boyuan Tian, Qiang Liu, Yiming Gan, and Wei Hu
- Subjects
010302 applied physics, 0209 industrial biotechnology, Speedup, Exploit, Computer science, business.industry, Distributed computing, 02 engineering and technology, computer.software_genre, 01 natural sciences, Power budget, Software framework, 020901 industrial engineering & automation, Software, 0103 physical sciences, Robot, Hardware acceleration, business, computer, FPGA prototype
- Abstract
We develop and commercialize autonomous machines, such as logistics robots and self-driving cars, around the globe. A critical challenge for our machines, as for any autonomous machine, is accurate and efficient localization under resource constraints, which has recently fueled specialized localization accelerators. Prior acceleration efforts are point solutions in that each specializes for a specific localization algorithm. In real-world commercial deployments, however, autonomous machines routinely operate under different environments, and no single localization algorithm fits all of them. Simply stacking together point solutions not only leads to cost and power budget overruns but also results in an overly complicated software stack. This paper demonstrates our new software-hardware co-designed framework for autonomous machine localization, which adapts to different operating scenarios by fusing fundamental algorithmic primitives. By characterizing the software framework, we identify ideal acceleration candidates that contribute significantly to the end-to-end latency and/or latency variation. We show how to co-design a hardware accelerator to systematically exploit the parallelism, locality, and common building blocks inherent in the localization framework. We build, deploy, and evaluate an FPGA prototype on our next-generation self-driving cars. To demonstrate the flexibility of our framework, we also instantiate another FPGA prototype targeting drones, which represent mobile autonomous machines. We achieve about $2 \times$ speedup and $4 \times$ energy reduction compared to widely deployed, optimized implementations on general-purpose platforms.
- Published
- 2021
4. Brief Industry Paper: An Infrastructure-Aided High Definition Map Data Provisioning Service for Autonomous Driving
- Author
-
Jinliang Xie, Yanzhi Wang, Shaoshan Liu, Jie Tang, and Qi Zhu
- Subjects
Service (systems architecture), Knapsack problem, Computer science, Distributed computing, Component (UML), Location awareness, Provisioning, Motion planning, Data as a service, computer.software_genre, computer, Data transmission
- Abstract
As a fundamental component of the autonomous driving technology stack, High Definition (HD) maps provide high-precision descriptions of the environment. They enable extremely accurate perception and localization while improving the efficiency of path planning. However, the HD map's extremely large data volume poses great challenges for the real-time and safety requirements of autonomous driving. Based on our real-world deployment experience, we first demonstrate how the existing data transmission mechanism falls short in supporting HD map services. To address this problem, we propose an HD map data service mechanism on top of Vehicle-to-Infrastructure (V2I) data transmission under a tight time and energy budget. Under this mechanism, selected road side unit (RSU) nodes cooperate on map provisioning tasks and transmit HD map data proportionately. Furthermore, we model the real-time map data service as a partial knapsack problem and develop a greedy data transmission algorithm. Experimental results confirm that the proposed mechanism can ensure real-time HD map data service while meeting the energy limits.
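The partial (fractional) knapsack model mentioned above admits the classic greedy solution: sort items by value density and take fractions once the budget runs short. A minimal sketch, where the value/cost model for map chunks is an assumption for illustration, not the paper's exact formulation:

```python
# Greedy solution to a fractional (partial) knapsack: each RSU offers a
# map-data chunk with a value (usefulness to the vehicle) and a cost
# (transmission energy); the budget is the vehicle's energy limit.
def greedy_partial_knapsack(chunks, budget):
    """chunks: list of (value, cost); returns total value and per-chunk fractions."""
    # Sort indices by value density, best first.
    order = sorted(range(len(chunks)),
                   key=lambda i: chunks[i][0] / chunks[i][1], reverse=True)
    total, fractions = 0.0, [0.0] * len(chunks)
    for i in order:
        value, cost = chunks[i]
        if budget <= 0:
            break
        take = min(1.0, budget / cost)  # take only a fraction when budget is short
        fractions[i] = take
        total += take * value
        budget -= take * cost
    return total, fractions
```

The greedy rule is provably optimal for the fractional variant, which is what makes the real-time constraint tractable here.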
- Published
- 2021
5. DOORS: Distributed Object Oriented Runtime System (Position Paper)
- Author
-
Dorin Palanciuc
- Subjects
Object-oriented programming, business.industry, Computer science, Distributed computing, 05 social sciences, 050301 education, Distributed object, Runtime system, Software, Doors, Position paper, 0501 psychology and cognitive sciences, State (computer science), Architecture, business, 0503 education, 050104 developmental & child psychology
- Abstract
The large software applications of today provide abstractions of the real-life systems that they support. A digital model of the system, and of the changes that occur within it, is maintained and updated as triggered by real-life events. Morphologically, such applications contain several distinct architectural entities: databases holding the state, central components describing how the system reacts to external events, and mechanisms through which the user can view the current state and issue new commands. Each of these entities may use distinct paradigms and employ different technologies. A production-ready software application thus ends up assembling a relatively deep technology stack and provides the final abstractions for both the problem and its solution. In this paper we propose a short-circuit for the long chain of technologies usually employed in large, production-ready software applications. The resulting architecture is a distributed, message-based system that behaves as a hybrid between a database and a runtime environment. The system operates with persistent, live entities that encapsulate both state and operations and are therefore easily assimilated to OOP classes.
- Published
- 2017
6. Short Paper: Towards Low-Cost Indoor Localization Using Edge Computing Resources
- Author
-
Janos Sallai, Shweta Khare, Abhishek Dubey, and Aniruddha Gokhale
- Subjects
Remote patient monitoring, Computer science, Distributed computing, Short paper, Real-time computing, Navigation system, 020206 networking & telecommunications, 02 engineering and technology, Software deployment, Feature (computer vision), Histogram, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Edge computing, Humanoid robot
- Abstract
Emerging smart services, such as indoor smart parking or patient monitoring and tracking in hospitals, face a significant technical roadblock: the lack of a cost-effective and easily deployable localization framework impedes their widespread deployment. To address this concern, in this paper we present a low-cost indoor localization and navigation system that performs continuous, real-time processing of Bluetooth Low Energy (BLE) and IEEE 802.15.4a-compliant Ultra-wideband (UWB) sensor data to localize the concerned entity and navigate it to its desired location. Our approach fuses the two feature sets, using the UWB to calibrate the BLE localization mechanism.
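One common way to realize "UWB calibrates BLE" is to use precise UWB ranges as ground truth when fitting a BLE log-distance path-loss model; the calibrated model then localizes from cheap BLE alone. The model below (RSSI = A - 10·n·log10(d)) and all names are illustrative assumptions, not necessarily the authors' method:

```python
import math

# Fit the BLE path-loss model RSSI = A - 10*n*log10(d) by ordinary least
# squares, using UWB-measured distances d as ground truth.
def calibrate_ble(uwb_distances, ble_rssi):
    xs = [-10 * math.log10(d) for d in uwb_distances]  # regressor per sample
    n, sx, sy = len(xs), sum(xs), sum(ble_rssi)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ble_rssi))
    path_loss_exp = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = n
    ref_rssi = (sy - path_loss_exp * sx) / n                   # intercept = A
    return ref_rssi, path_loss_exp

def ble_distance(rssi, ref_rssi, path_loss_exp):
    """Invert the calibrated model to estimate distance from BLE alone."""
    return 10 ** ((ref_rssi - rssi) / (10 * path_loss_exp))
```

After calibration, the expensive UWB anchors can be sparse or intermittent while BLE carries the continuous localization load.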
- Published
- 2017
7. Position Paper: Towards a Robust Edge-Native Storage System
- Author
-
Abhishek Chandra, Jon Weissman, and Nikhil Sreekumar
- Subjects
050101 languages & linguistics, Edge device, Computer science, business.industry, Distributed computing, Node (networking), 05 social sciences, Elasticity (data store), Cloud computing, 02 engineering and technology, Server, Computer data storage, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, 0501 psychology and cognitive sciences, Enhanced Data Rates for GSM Evolution, business, Edge computing
- Abstract
Edge environments are generating an increasingly large amount of data due to the proliferation of edge devices. Accommodating this large influx of data at edge servers is a challenging issue. While some data can be processed as it is generated, other data must be stored for later access. This paper proposes the features that a new edge-native storage system must possess, including support for user mobility and node fluctuation. To motivate this, we first describe several emerging edge applications and their data needs. We then describe the challenges in meeting these needs. Next, we evaluate an out-of-the-box cloud storage system, Cassandra, to assess its suitability as an edge storage system, given its many edge-friendly features. We find that while a cloud-based storage system can be ported to the edge to meet some of the challenges, other challenges require new solutions. Based on these challenges and the results of the Cassandra case study, we propose a set of design principles for a new edge-native storage system.
- Published
- 2020
8. Kubernetes in Fog Computing: Feasibility Demonstration, Limitations and Improvement Scope : Invited Paper
- Author
-
Paridhika Kayal
- Subjects
Decentralized computing, Computer science, Fog computing, business.industry, Software deployment, Distributed computing, Cloud computing, Orchestration (computing), Microservices, business, Edge computing, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
Fog computing (also known as edge computing) is a decentralized computing architecture that seeks to minimize service latency and average response time in IoT applications by providing compute and network services physically close to end users. A fog environment consists of a network of fog nodes, and IoT applications are composed of containerized microservices communicating with each other. Due to the limited resources of fog nodes, it is often not possible to deploy all the containers of an application on a single fog node; communicating containers therefore need to be distributed across multiple fog nodes. Distribution and management of containerized IoT applications is always critical to system performance in a fog environment. Kubernetes, an open-source system, has grown into a container orchestration standard by simplifying the deployment and management of containerized applications. Despite the progress made by academia and industry with respect to container management, and the wide-scale acceptance of Kubernetes in cloud environments, container management in fog environments is still at an early stage in terms of research and practical deployment. This article aims to fill this gap by analyzing the expediency of the Kubernetes container orchestration tool in the fog computing model. The paper also highlights limitations of the current Kubernetes approach and provides ideas for further research to adapt it to the needs of the fog environment. Lastly, we provide experiments that demonstrate the feasibility and industrial practicality of deploying and managing containerized IoT applications in the fog computing environment.
- Published
- 2020
9. Towards Verification-Aware Knowledge Distillation for Neural-Network Controlled Systems: Invited Paper
- Author
-
Chao Huang, Wenchao Li, Qi Zhu, Jiameng Fan, and Xin Chen
- Subjects
SIMPLE (military communications protocol), Artificial neural network, Computer science, Distributed computing, Control (management), 02 engineering and technology, 010501 environmental sciences, Lipschitz continuity, 01 natural sciences, Nonlinear system, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Formal verification, 0105 earth and related environmental sciences
- Abstract
Neural networks are widely used in many applications ranging from classification to control. While these networks are composed of simple arithmetic operations, they are challenging to formally verify for properties such as reachability due to the presence of nonlinear activation functions. In this paper, we make the observation that Lipschitz continuity of a neural network not only can play a major role in the construction of reachable sets for neural-network controlled systems but also can be systematically controlled during training of the neural network. We build on this observation to develop a novel verification-aware knowledge distillation framework that transfers the knowledge of a trained network to a new and easier-to-verify network. Experimental results show that our method can substantially improve reachability analysis of neural-network controlled systems for several state-of-the-art tools.
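The central quantity the abstract alludes to can be illustrated with the standard coarse bound on a ReLU network's Lipschitz constant: the product of the layers' spectral norms (ReLU itself is 1-Lipschitz). Keeping this product small during student training is one way such a distilled network becomes easier to verify. This is a generic sketch of the bound, not the paper's verification tool:

```python
import numpy as np

# Coarse Lipschitz upper bound for a feed-forward ReLU network:
# L <= product of spectral norms of the weight matrices, since each ReLU
# layer is 1-Lipschitz. Tighter bounds exist; this one is cheap and
# differentiable enough to penalize during (distillation) training.
def lipschitz_upper_bound(weights):
    """weights: list of 2-D numpy arrays, one per linear layer."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))
```

A smaller bound shrinks the growth of reachable sets layer by layer, which is why controlling it during training helps reachability tools.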
- Published
- 2019
10. Linear formulation for the design of elastic optical networks with squeezing protection and shared risk link group: Invited Paper
- Author
-
Karcius D. R. Assis, Brigitte Jaumard, Martin J. Reed, Helio Waldman, Raul C. Almeida, and Dimitra Simeonidou
- Subjects
Computer science, Distributed computing, Bandwidth (signal processing), Survivability, 020206 networking & telecommunications, Topology (electrical circuits), 02 engineering and technology, Shared risk link group, 01 natural sciences, 010309 optics, Resource (project management), Component (UML), 0103 physical sciences, 0202 electrical engineering, electronic engineering, information engineering, Routing (electronic design automation), Integer programming
- Abstract
Survivability is an important requirement in elastic optical networks (EONs). We examine here the significance of network survivability design against multiple-link failures under dedicated protection, shared risk link group (SRLG), and bandwidth squeezing schemes. The proposed mixed integer linear programming (MILP) formulation derives several different types of protection, considering routing, spectrum assignment, grooming, and modulation format, as well as shared risk link group constraints. The proposed MILP provides efficient survivability results and resource savings (in terms of spectrum) for a full design of modern EONs.
- Published
- 2020
11. POSITION PAPER: Countering the Noise-Induced Critical Path Problem
- Author
-
Rogelio Long and Shirley Moore
- Subjects
Noise, Noise induced, Computer science, Distributed computing, Electronic engineering, Position paper, Critical path method, Power (physics)
- Abstract
As the number of cores grows in HPC systems, so does the effect of system noise on applications running on these systems. Knowing that future large-scale parallel computer systems, including exascale systems, will operate under an overall power bound, we claim to have found a solution that can counter the effects of noise. We present two methods that estimate the effects of noise on an application and then optimally redistribute power among nodes, such that the effects of noise are "hidden".
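The redistribution idea can be sketched in a few lines: if node speed scales with its power allocation and noise steals a known fraction of each node's cycles, giving noisier nodes proportionally more of the fixed budget lets all nodes finish together. The linear power/speed model below is an illustrative assumption, not the authors' method:

```python
# Illustrative sketch: estimate each node's noise slowdown, then split a
# fixed power budget so noisier nodes get proportionally more power and
# equal work finishes at the same time on every node.
def redistribute_power(noise_fraction, total_budget):
    """noise_fraction[i]: estimated fraction of cycles lost to system noise on node i."""
    # Effective speed ~ power * (1 - noise); equal finish times for equal
    # work require power_i proportional to 1 / (1 - noise_i).
    weights = [1.0 / (1.0 - f) for f in noise_fraction]
    scale = total_budget / sum(weights)
    return [w * scale for w in weights]
```

In effect the power bound is spent hiding the noise-induced critical path instead of being spread uniformly.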
- Published
- 2016
12. Extending SOSJ Framework for Reliable Dynamic Service-Oriented Systems (Short Paper)
- Author
-
Udayanto Dwi Atmojo, Zoran Salcic, and Kevin I-Kai Wang
- Subjects
Service oriented systems, computer.internet_protocol, Computer science, business.industry, Model of computation, Distributed computing, Real-time computing, Short paper, Service-oriented architecture, Automation, Programming paradigm, Software system, Macro, business, computer
- Abstract
This paper presents enhancements of the Service Oriented SystemJ (SOSJ) framework, which extends SystemJ, a system-level language based on the GALS model of computation, with services, into a new programming paradigm amenable to designing dynamic distributed automation systems such as reconfigurable manufacturing systems. The new paradigm combines the correct-by-construction software systems development available in SystemJ with the dynamic features of service-oriented architecture. The new approach introduces macro states into the fundamental concurrent and distributed entities of SOSJ, called clock domains, which address typical behaviors in dynamic distributed systems. We showcase the use of the new paradigm on an example of reconfigurable manufacturing scenarios in a dynamic manufacturing system.
- Published
- 2015
13. IoT/CPS Ecosystem for Efficient Electricity Consumption : Invited Paper
- Author
-
Princy Johnson, Mohammad Alharthi, Saad Alharthi, and Chacko Jose
- Subjects
Smart system, Industry 4.0, Computer science, business.industry, Smart meter, 020209 energy, Distributed computing, Big data, Cyber-physical system, 02 engineering and technology, Energy consumption, Electric power system, Smart grid, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, business
- Abstract
Modern society relies on smart systems such as the Internet of Things (IoT) and cyber physical systems (CPS) to monitor and control physical processes. The widespread deployment of IoT and CPSs results in fast growth of sensor data, as physical processes are constantly monitored by billions of IP-enabled sensors (44 zettabytes by 2020). Hence, fog nodes are deployed to make the network edge rich in computing resources, enabling real-time analytics using artificial intelligence/machine learning (AI/ML) on the big data generated by IoT and CPSs. This paper proposes an IoT/CPS ecosystem for the smart grid (SG) utilizing the Industry 4.0 concept to manage and control loads using an intelligent predictive controller based on an artificial neural network (ANN). The ANN is trained to predict the loads in certain districts based on previous readings from smart meters installed at consumers and substations. This novel approach integrates the IoT/CPS ecosystem into the electric power system to deliver energy to consumers with high efficiency, reduce cost, optimize energy consumption, improve reliability, and enable real-time monitoring of power consumption.
- Published
- 2019
14. Distributed Downloading Strategy for Multi-Source Data Fusion in Edge-Enabled Vehicular Network : (Invited Paper)
- Author
-
Zhu Han, Jun Wu, Yue Yu, Xiao Tang, BaekGyu Kim, and Tiecheng Song
- Subjects
Service (systems architecture), Optimization problem, Mobile edge computing, Computer science, Distributed computing, 05 social sciences, 050801 communication & media studies, Sensor fusion, 0508 media and communications, 0502 economics and business, Overhead (computing), 050211 marketing, Enhanced Data Rates for GSM Evolution, Intelligent transportation system
- Abstract
Multi-source data fusion to support intelligent transportation systems (ITS) is a promising service offered by mobile edge computing (MEC). With the fusion results delivered in near real-time, drivers or autonomous vehicles can peek around the corner, extend their sensing range, and reinforce and validate local observations to make safer and smarter driving decisions. However, downloading too much data increases the service delay and thus undermines the fusion computing service performance. In this paper, we analyze the optimal downloading strategies of vehicles. By establishing an optimization indicator to monitor and evaluate the fusion computing service, we use a hierarchical game, which is equivalent to a mathematical program with equilibrium constraints (MPEC), to formulate the interaction between the MEC and the vehicles. Through analysis, we transform the MPEC problem into a solvable single-layer optimization problem. We also provide an impractical centralized approach, which has immense signaling overhead and exponentially growing complexity, as a performance upper bound. Numerical results validate the theoretical analysis and demonstrate that the proposed downloading strategy has near-optimal performance in terms of system utility and service delay.
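A hierarchical game of this kind has a leader-follower (Stackelberg) structure: the MEC side commits to a decision, and each vehicle best-responds. The toy below uses an assumed logarithmic vehicle utility and a price-setting leader purely to illustrate the two-level structure; none of the functional forms come from the paper:

```python
# Toy Stackelberg sketch: the MEC picks a unit "price" for data delivery,
# then each vehicle downloads the amount maximizing its own utility.
def follower_demand(price, value=10.0, max_dl=5.0):
    """Vehicle utility value*ln(1+x) - price*x is maximized at x = value/price - 1."""
    x = value / price - 1.0
    return max(0.0, min(max_dl, x))

def leader_best_price(prices, n_vehicles=3):
    """Leader revenue = price * total induced demand; search a price grid."""
    revenue = lambda p: p * n_vehicles * follower_demand(p)
    return max(prices, key=revenue)
```

Solving the follower problem in closed form and substituting it into the leader's objective is exactly the "transform the MPEC into a single-layer problem" step in miniature.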
- Published
- 2019
15. Invited Paper: Distributed Computing in Cyber-Physical Intelligence: Robotic Perception as an Example
- Author
-
Dawei Feng, Huaimin Wang, Bo Ding, Hui Liu, Jie Xu, and Huaxi Zhang
- Subjects
0209 industrial biotechnology, Robot kinematics, Computer science, media_common.quotation_subject, Distributed computing, Cyber-physical system, Collective intelligence, 02 engineering and technology, Simultaneous localization and mapping, 020901 industrial engineering & automation, Perception, 0202 electrical engineering, electronic engineering, information engineering, Robot, 020201 artificial intelligence & image processing, Research questions, Architecture, media_common
- Abstract
Intelligent cyber-physical systems, such as robots, are emerging computing devices that autonomously and directly interact with the physical world. The new characteristics of these devices motivate research questions different from those addressed by traditional computing technology. Based on an in-depth investigation of the relationship between a typical example, robotic perception, and distributed computing, this paper explores the challenges and opportunities that intelligent cyber-physical systems bring to distributed computing. Preliminary answers to three questions are given: "Why should we introduce distributed computing into cyber-physical intelligence?", "What kind of distributed architecture can contribute to cyber-physical intelligence?", and "What challenges do intelligent cyber-physical systems bring to distributed computing infrastructure?" A multi-scale hybrid distributed architecture for cyber-physical intelligence, named Music, as well as our initial practices towards enabling this architecture, is also presented.
- Published
- 2019
16. Online optimization in the Non-Stationary Cloud: Change Point Detection for Resource Provisioning (Invited Paper)
- Author
-
Zhenhua Liu, Joshua Comden, and Jessica Maghakian
- Subjects
Computer science, business.industry, Distributed computing, Big data, Control (management), 020206 networking & telecommunications, Provisioning, Cloud computing, 02 engineering and technology, Resource (project management), 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Spike (software development), Online algorithm, business, Change detection
- Abstract
The rapid mainstream adoption of cloud computing and the corresponding spike in the energy usage of big data systems make the efficient management of cloud computing resources a more pressing issue than ever before. To this end, numerous online algorithms such as Receding Horizon Control and Online Balanced Descent have been designed. However, it is difficult for cloud service providers to dynamically select the best control algorithm for resource provisioning when confronted with consumer resource demands that are notoriously unpredictable and volatile. Furthermore, it is highly unlikely that any one algorithm will consistently perform well over a months-long contract period. In this paper, we first exemplify the need to address non-stationarity in cloud computing by showcasing traces from MS Azure. We then develop a novel meta-algorithm that combines change point detection and online optimization. The new algorithm is shown to outperform existing solutions in real-world trace-driven simulations.
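The meta-algorithm's core loop can be sketched as a change-point detector wrapped around an online estimator: within a regime the estimate tracks demand online, and when a change point fires the estimator restarts on the new regime. The CUSUM detector and all thresholds below are illustrative assumptions, not the paper's algorithm:

```python
# CUSUM-style change point detection driving an online provisioning estimate:
# restart the online learner whenever the demand stream shifts regime.
def detect_and_provision(demands, drift=0.5, threshold=5.0, lr=0.2):
    mean, cusum_hi, cusum_lo, change_points = demands[0], 0.0, 0.0, []
    estimate = demands[0]
    for t, d in enumerate(demands):
        cusum_hi = max(0.0, cusum_hi + d - mean - drift)   # upward shift statistic
        cusum_lo = max(0.0, cusum_lo - d + mean - drift)   # downward shift statistic
        if cusum_hi > threshold or cusum_lo > threshold:
            change_points.append(t)
            mean, cusum_hi, cusum_lo, estimate = d, 0.0, 0.0, d  # restart on new regime
        else:
            estimate += lr * (d - estimate)   # online update within a regime
    return estimate, change_points
```

The restart is what lets an online algorithm with good stationary guarantees cope with non-stationary demand.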
- Published
- 2019
17. Optimal Computation and Spectrum Resource Sharing in Cooperative Mobile Edge Computing Systems : (Invited Paper)
- Author
-
Qiaobin Kuang, Jie Xu, Xiang Chen, and Xiaowen Cao
- Subjects
Mobile edge computing, Computer science, Computation, Distributed computing, 020206 networking & telecommunications, 020302 automobile design & engineering, 02 engineering and technology, Energy consumption, Shared resource, Task (computing), 0203 mechanical engineering, Convex optimization, 0202 electrical engineering, electronic engineering, information engineering, Benchmark (computing), Point (geometry)
- Abstract
Mobile edge computing (MEC) systems face communication and computation traffic that is unevenly distributed over both time and space. To match this traffic, it is beneficial for neighboring MEC systems to cooperate in sharing their distributed communication and computation resources. This paper considers two neighboring MEC systems, each with one access point (AP) serving one user, where each user can offload computation tasks to the respective AP for remote execution. We propose a new joint computation and spectrum cooperation approach, such that the two systems can share their computation and spectrum resources to enhance their respective system performance. In particular, we minimize the weighted sum energy consumption (for both communication and computation) of the two MEC systems by jointly optimizing the task offloading decisions at the users (for computation resource sharing) and the spectrum bands shared between the two systems. We obtain the optimal solution to the formulated problem in semi-closed form by applying standard convex optimization techniques. Numerical results show that the proposed joint cooperation design significantly reduces the energy consumption of the two systems compared to benchmark schemes without such joint cooperation.
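The spectrum-sharing part of such a problem reduces, in its simplest form, to splitting a band W between the two systems so that weighted energy is minimized; with transmission time (hence energy at fixed power) inversely proportional to bandwidth, the objective is convex in the split and a one-dimensional search suffices. The energy model below is a deliberately simplified assumption, not the paper's system model:

```python
# Toy convex spectrum split: minimize weighted "energy" w1*b1/x + w2*b2/(W-x)
# over the bandwidth x given to system 1. Convexity makes ternary search exact.
def total_energy(w1, W, bits1=100.0, bits2=50.0, weight1=1.0, weight2=2.0):
    """Transmission energy modeled as proportional to bits / allocated bandwidth."""
    w2 = W - w1
    return weight1 * bits1 / w1 + weight2 * bits2 / w2

def optimal_split(W, eps=1e-6):
    lo, hi = eps, W - eps
    while hi - lo > eps:                      # ternary search on a convex objective
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if total_energy(m1, W) < total_energy(m2, W):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

With the default parameters the two weighted loads are equal (1·100 = 2·50), so the optimal split is the midpoint; asymmetric loads shift it accordingly.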
- Published
- 2018
18. The Decentralized Voting Model Using the Hyperledger Platform Paper
- Author
-
Alexandr Kuznetsov, Oleksiy Shapoval, Dmytro Prokopovych-Tkachenko, Vadim Yakovenko, Sergii Kavun, and Nikolay Alexandrovich Poluyanenko
- Subjects
Structure (mathematical logic), Computer science, Distributed computing, Voting, media_common.quotation_subject, Schematic, Architecture, Peer-to-peer, computer.software_genre, Decentralization, computer, media_common
- Abstract
This paper presents theoretical background on decentralized systems technology and blockchain networks, including their structure and main operating principles, as well as material on the Hyperledger platform. It examines the Hyperledger services architecture, its spheres of use, and important aspects such as smart contracts and their application, along with the advantages of using this platform. It also models and explains the algorithm and schematic diagram of a voting network built with the Hyperledger platform.
- Published
- 2019
19. Coverage Analysis for Backscatter Communication Empowered Cellular Internet-of-Things : Invited Paper
- Author
-
Moe Z. Win, Maryam Hafeez, Syed Ali Raza Zaidi, and Des McLernon
- Subjects
Software deployment, Computer science, Node (networking), Distributed computing, Monte Carlo method, 0202 electrical engineering, electronic engineering, information engineering, 020206 networking & telecommunications, 020201 artificial intelligence & image processing, Fading, 02 engineering and technology, Wireless sensor network, Dimensioning
- Abstract
In this article, we develop a comprehensive framework to characterize the coverage probability of backscatter communication empowered cellular Internet-of-Things (IoT) sensor networks (SNs). The developed framework considers a hierarchical cellular-type deployment topology that is practically useful for various IoT applications. In contrast to existing studies, the framework is geared towards system-level performance analysis. Our analysis explicitly considers the dyadic fading experienced by the links and the spatial randomness of the network nodes. To ensure tractability of the analysis, we develop novel closed-form bounds for quantifying the coverage probability of SNs. The developed framework is corroborated using Monte Carlo simulations. Lastly, we demonstrate the impact of various underlying parameters and highlight the utility of the derived expressions for network dimensioning.
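The Monte Carlo corroboration step can be sketched directly: dyadic fading on a backscatter link is often modeled as the product of two independent fading gains (forward hop and backscattered hop), and coverage is the fraction of trials where the faded SNR clears a threshold. The exponential-gain model and parameters below are illustrative assumptions, not the paper's exact channel model:

```python
import random

# Monte Carlo estimate of coverage probability for a backscatter link with
# dyadic fading modeled as the product of two independent unit-mean
# exponential power gains (one per hop).
def coverage_probability(snr_mean, threshold, trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        gain = rng.expovariate(1.0) * rng.expovariate(1.0)  # dyadic fading
        if snr_mean * gain > threshold:
            hits += 1
    return hits / trials
```

Such a simulator is what the closed-form bounds in the paper would be checked against.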
- Published
- 2019
20. Case Study: Radio Application in a Smart Grid System for a Brownfield Onshore Dispersed oil Field: Copyright Material IEEE, Paper No. PCIC-2018-40
- Author
-
Shah Niraj Kiritkumar, Nevil Herbert, and Kei Hao
- Subjects
Electric power system, Engineering, Smart grid, Oscillography, Link budget, business.industry, Network security, Reliability (computer networking), Distributed computing, Overhead (engineering), Electronics, business
- Abstract
Smart grid systems for onshore oil fields use real-time, high-speed field data from intelligent electronic devices (IEDs) to make fast and intelligent decisions that increase power system reliability and minimize power outages. For an efficient smart grid system, a reliable communications network to the IEDs and centralized controllers is critical for high-speed control and data collection. Communications networks that use conventional fiber networks are not always cost-effective for large dispersed oil fields, which have hundreds of miles of medium-voltage overhead distribution systems. This paper discusses the implementation of a smart grid system for a large dispersed oil field using radios as its communications network. The solution includes high-speed load shedding, online monitoring, event reporting, oscillography, and engineering access. This paper also discusses the process used to design and test the radio technology, including the evaluation of its success metrics, network security, radio path study, and optimization.
- Published
- 2018
21. A Research Paper of Existing Live VM Migration and a Hybrid VM Migration Approach in Cloud Computing
- Author
-
Poonam Saini, Abhishek ku. Shakva, Deepak Garg, and Prakash Ch. Nayak
- Subjects
Service (systems architecture), business.industry, Computer science, Quality of service, Distributed computing, Workload, Cloud computing, Virtualization, computer.software_genre, Cloud data center, Memory management, Server, business, computer
- Abstract
With rapid increases in cloud users and their workloads, quality of service (QoS) is decreasing proportionally. Providing better computing services, managing a large number of cloud users, and reducing time as well as energy in cloud data centers can only be achieved through efficient live VM migration techniques. Modern cloud computing services depend heavily on live VM migration. In this paper we review the existing live VM migration techniques with their advantages and disadvantages; their workings and applications, along with future scope, are stated briefly. Comparisons are made between pre-copy and post-copy VM migration techniques through simulation, using CPU usage, memory, and network as parameters. We then propose a hybrid VM migration technique and show by comparison that it outperforms both pre-copy and post-copy migration.
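The pre-copy technique compared above works by iteratively copying memory while the VM keeps running and dirtying pages, then stopping briefly to copy the remainder. A minimal simulation of that dynamic (the dirty-rate model and stop threshold are illustrative assumptions):

```python
# Minimal simulation of pre-copy live VM migration: memory pages are copied
# iteratively while the running VM keeps dirtying a fraction of them; once
# the dirty set is small enough, the VM is stopped and the rest is copied.
def precopy_migration(total_pages, dirty_rate, stop_threshold, max_rounds=30):
    """Returns (total pages transferred, pages copied during the stop-and-copy phase)."""
    to_copy, transferred = total_pages, 0
    for _ in range(max_rounds):
        if to_copy <= stop_threshold:
            break
        transferred += to_copy
        to_copy = int(to_copy * dirty_rate)   # pages dirtied during this round
    return transferred + to_copy, to_copy     # final copy happens while stopped
```

The trade-off the paper's hybrid targets is visible here: pre-copy transfers extra pages (lower downtime, more network traffic), while post-copy would transfer each page once at the cost of post-resume page faults.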
- Published
- 2018
22. Experiences and challenges in building next-gen optically disaggregated datacenters : (Invited Paper)
- Author
-
Andrea Reale and Dimitris Syrivelis
- Subjects
Interconnection, Computer science, business.industry, Distributed computing, Architecture, Modular design, Latency (engineering), business
- Abstract
While disaggregation has been successfully used for more than two decades to separate storage from compute (i.e., SAN/NAS systems), disaggregating tightly coupled resources such as CPU, memory, or accelerators has so far been considered infeasible due to the hard dependency between their performance and the latency and bandwidth provided by their physical interconnect. We argue that modern optical networks make full datacenter disaggregation feasible in practice. We present our ongoing work on disaggregation by introducing the dReDBox architecture, which completely separates datacenter resources into modular physical units interconnected via a high-speed reconfigurable optical network. We discuss the current prototype and highlight challenges and the work ahead.
- Published
- 2018
23. A generic framework facilitating early analysis of data propagation delays in multi-rate systems (Invited paper)
- Author
-
Matthias Becker, Thomas Nolte, Moris Behnam, Saad Mubeen, and Dakshina Dasari
- Subjects
Semantics (computer science) ,business.industry ,Computer science ,Distributed computing ,Automotive industry ,02 engineering and technology ,Propagation delay ,020202 computer hardware & architecture ,Task (project management) ,Set (abstract data type) ,Work order ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Predictability ,Partially ordered set ,business - Abstract
A majority of multi-rate real-time systems are constrained by a multitude of timing requirements in addition to the traditional deadlines on well-studied response times. This means that the timing predictability of these systems depends not only on the schedulability of certain task sets but also on the timely propagation of data through the chains of tasks from sensors to actuators. In the automotive industry, four different timing constraints corresponding to various data propagation delays are commonly specified on such systems. This paper identifies and addresses the sources of pessimism as well as optimism in the calculations for one such delay, namely the reaction delay, in the state-of-the-art analysis that is already implemented in several industrial tools. Furthermore, a generic framework is proposed to compute all four end-to-end data propagation delays, complying with the established delay semantics, in a scheduler- and hardware-agnostic manner. This allows analysis of system models already at early development phases, where limited system information is present. The paper further introduces mechanisms to generate job-level dependencies, a partial ordering of jobs, which need to be satisfied by any execution platform in order to meet the data propagation timing requirements. The job-level dependencies are first added to all task chains of the system and then reduced to the minimum required set such that the job order is not affected. Moreover, a necessary schedulability test is provided, allowing the number of CPUs to be varied. The experimental evaluations demonstrate the tightness of the reaction delay computed with the proposed framework as compared to existing state-of-the-art and state-of-practice solutions.
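To make the notion of a reaction delay concrete, here is one commonly used pessimistic upper bound for a cause-effect chain of periodic, register-communicating tasks; this is the style of over-approximation the paper's analysis tightens, not the paper's own formula, and the inputs are illustrative.

```python
def reaction_delay_bound(tasks):
    """Pessimistic upper bound on the worst-case reaction delay of
    a sensor-to-actuator chain of periodic tasks.

    Intuition: at each stage a freshly written sample may just miss
    the stage's activation (wait up to one period), after which the
    stage needs up to its worst-case response time to propagate it.

    tasks: list of (period, response_time) tuples, sensor to actuator.
    """
    return sum(period + resp for period, resp in tasks)
```

A two-stage chain with periods 10 and 5 and response times 2 and 1 thus gets the bound (10 + 2) + (5 + 1) = 18 time units, regardless of scheduler or hardware, which is what makes such bounds usable in early development phases.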
- Published
- 2017
24. A review paper on sensor deployment techniques for target coverage in wireless sensor networks
- Author
-
Manish Kumar and Vrinda Gupta
- Subjects
Engineering ,Artificial immune system ,business.industry ,Distributed computing ,010401 analytical chemistry ,02 engineering and technology ,01 natural sciences ,0104 chemical sciences ,Key distribution in wireless sensor networks ,Software deployment ,Sensor node ,Scalability ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Mobile wireless sensor network ,020201 artificial intelligence & image processing ,business ,Wireless sensor network ,Computer network - Abstract
Wireless sensor networks are widely used in many application areas, especially for surveillance and monitoring tasks, and their practical deployment raises many constraints and challenges. The present paper focuses on one of the main such constraints: the deployment of sensor nodes for better coverage. We review different deployment techniques, such as artificial bee colony, particle swarm optimization, genetic algorithms, and artificial immune systems, and compare them on aspects including complexity, scalability, region of interest, suitability for static or mobile wireless sensor networks, and applicability to multiple objectives. We also present pseudocode for each surveyed algorithm.
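The target-coverage objective that these deployment algorithms optimize can be stated very simply under the binary-disc sensing model usually assumed in this literature (the function and its parameters are an illustrative sketch, not taken from the paper):

```python
import math

def coverage_ratio(sensors, targets, sensing_range):
    """Fraction of targets covered by at least one sensor.

    Binary-disc model: a target is covered when its Euclidean
    distance to some sensor is at most the sensing range.
    sensors, targets: iterables of (x, y) coordinates.
    """
    if not targets:
        return 0.0
    covered = sum(
        1 for tx, ty in targets
        if any(math.hypot(tx - sx, ty - sy) <= sensing_range
               for sx, sy in sensors)
    )
    return covered / len(targets)
```

Metaheuristics such as particle swarm optimization or genetic algorithms then search over candidate sensor positions to maximize this ratio subject to the node budget.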
- Published
- 2016
25. Distributed intelligence: Unleashing flexibilities for congestion management in smart distribution networks (Invited paper)
- Author
-
P.H. Nguyen, A.N.M.M. Haque, and T.H. Vo
- Subjects
Flexibility (engineering) ,Engineering ,Operations research ,business.industry ,020209 energy ,Distributed computing ,02 engineering and technology ,Grid ,Task (project management) ,Demand response ,Load management ,Control system ,Distributed generation ,0202 electrical engineering, electronic engineering, information engineering ,Systems architecture ,business - Abstract
Electrical distribution networks worldwide are facing frequent capacity challenges due to the widespread roll-out of various distributed energy resources (DERs). A number of demand response (DR) mechanisms have been developed to circumvent these problems and enhance the flexibility of the distribution network. While the existing centralized control system retains its crucial role in reliable and secure grid operation, distributed intelligence is a complementary technology focused on dividing the control task into a number of simpler problems and solving them with minimal exchange of information. Based on recent developments in distributed intelligence, this paper discusses a decentralized approach to enable demand response for managing congestion more efficiently. The approach is validated with simulations of representative Dutch low-voltage (LV) networks.
- Published
- 2016
26. Optimal state scheduling for three-hop directed cascaded networks with time division duplexing (invited paper)
- Author
-
Liansun Zeng, Yanli Xu, Feng Liu, and Conggai Li
- Subjects
Linear programming ,Computer science ,Distributed computing ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,05 social sciences ,MIMO ,050801 communication & media studies ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Topology ,Bottleneck ,law.invention ,Scheduling (computing) ,Hop (networking) ,0508 media and communications ,Relay ,law ,Wireless relay network ,Optimal scheduling ,0202 electrical engineering, electronic engineering, information engineering - Abstract
A three-hop cascaded directed wireless relay network is considered, where relay nodes work in time division duplexing (TDD) mode. We investigate optimal scheduling for capacity achieving with the decode-and-forward (DF) strategy. By defining feasible network states (FNSs), we formulate the problem as a linear program that schedules the activeness of all FNSs. A closed-form solution is obtained for the general three-hop configuration, showing that DF is optimal and achieves the network capacity min(C1, C3)·C2 / (min(C1, C3) + C2), where Ck is the link capacity of hop k. We also give a scheduling demonstration. Performance analyses are provided, including the degrees of freedom (DoF) in the multiple-input multiple-output (MIMO) scenario. Moreover, if an extra single-hop message is allowed, the network can support a higher DF rate of max(C1, C3)·C2 / (min(C1, C3) + C2). The ultimate results with infinite relay capability are further deduced, showing that the three-hop network can be superior to the corresponding two-hop scenario and indicating that the massive MIMO technique can be applied at the relay nodes to overcome the bottleneck of the half-duplexing constraint.
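The closed-form rate stated in the abstract is easy to evaluate directly; a small helper (the function name is ours) makes the effect of the extra single-hop message visible:

```python
def three_hop_df_capacity(c1, c2, c3, extra_single_hop=False):
    """Achievable decode-and-forward rate of the three-hop TDD cascade.

    Base case:            min(C1, C3) * C2 / (min(C1, C3) + C2)
    Extra single-hop msg: max(C1, C3) * C2 / (min(C1, C3) + C2)
    where Ck is the link capacity of hop k.
    """
    outer = max(c1, c3) if extra_single_hop else min(c1, c3)
    return outer * c2 / (min(c1, c3) + c2)
```

For example, with C1 = 2, C2 = C3 = 1 the base rate is 0.5, and allowing the extra single-hop message doubles it to 1.0, since only the numerator's min becomes a max.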
- Published
- 2017
27. Student Research Paper: Evaluation of the Dependability of Critical Infrastructures Using Hybrid Petri Nets with Random Variables and Stochastic Simulation
- Author
-
Carina Pilch
- Subjects
Computer science ,Distributed computing ,020207 software engineering ,02 engineering and technology ,Process architecture ,Petri net ,Statistical model checking ,Reliability engineering ,Stochastic simulation ,0202 electrical engineering, electronic engineering, information engineering ,Stochastic Petri net ,Dependability ,020201 artificial intelligence & image processing ,Student research ,Random variable - Abstract
The focus of my PhD project is the evaluation of the dependability of critical infrastructures, e.g. energy distribution and gas networks. The modeling formalism of hybrid Petri nets with random variables forms the basis of our modeling approach. We plan to rigorously evaluate dependability using discrete-event simulation and methods of statistical model checking, and to extend existing approaches with methods of rare-event simulation.
- Published
- 2017
28. Constraint-based configuration table generator for reliable path routing and safe timeslot allocation in SpaceWire network: Session: Networks & protocols, short paper
- Author
-
Satoshi Yamazaki, Yu Otake, Yasuhiro Sota, Hiroki Hihara, Takahiko Tanaka, and Toshio Tonouchi
- Subjects
010504 meteorology & atmospheric sciences ,business.industry ,Computer science ,Payload ,Network packet ,Distributed computing ,010502 geochemistry & geophysics ,Network topology ,01 natural sciences ,SpaceWire ,Bandwidth (computing) ,Table (database) ,Routing (electronic design automation) ,business ,Constraint satisfaction problem ,0105 earth and related environmental sciences ,Computer network - Abstract
SpaceWire is valuable because it facilitates the development of spacecraft subsystems such as payload instruments, mass memory, and onboard computers. On the other hand, it takes much time and effort for developers to configure an initiator of the SpaceWire network because they have to take account of the entire SpaceWire network in a spacecraft. As the target network becomes larger, the path addressing and the packet collision-free timeslot allocation are harder for the developers to configure. Furthermore, the configuration tables of the initiator should satisfy various constraints, such as the bandwidth limitation and priority of specific packets. These constraints are different in each spacecraft. In order that the developers can design the large-scale SpaceWire network efficiently, automatic configuration table generation under the constraints is indispensable. This paper presents a constraint-based configuration table generator (CTG) that automatically provides reliable redundant path routing and collision-free timeslot allocation for required transactions in the target topology. We apply a constraint solver to the CTG to set many kinds of user-defined constraints in the network. For example, the bandwidth limitation, priority of the packets, and other various constraints can be easily inputted into the CTG. The CTG automatically generates configuration tables satisfying these constraints. Additionally, the CTG reports network topology views with bandwidth utilization ratios. This helps developers to verify whether a generated configuration is just as designed. The CTG can also notify developers that their requirements cannot be solved. In this paper, we show the feasibility and effectiveness of this tool through evaluation using a large-scale SpaceWire network case.
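The collision-free timeslot allocation the CTG performs can be sketched as a graph-coloring problem: two transactions conflict when their routed paths share a SpaceWire link, and conflicting transactions must land in different timeslots. The real CTG uses a constraint solver and also honors bandwidth and priority constraints; the greedy version below (names are ours) only illustrates the collision-freedom part.

```python
def allocate_timeslots(transactions, links_of):
    """Greedy collision-free timeslot allocation.

    transactions: ordered list of transaction ids.
    links_of: mapping transaction id -> set of link ids its path uses.
    Each transaction receives the smallest timeslot not used by any
    already-scheduled transaction whose path shares a link with it.
    """
    slots = {}
    for t in transactions:
        used = {slots[u] for u in slots if links_of[t] & links_of[u]}
        slot = 0
        while slot in used:
            slot += 1
        slots[t] = slot
    return slots
```

Transactions on disjoint paths can share a slot, so the schedule length grows only with the amount of actual link contention, not with the number of transactions.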
- Published
- 2016
29. Towards a model-based verification methodology for Complex Swarm Systems (Invited paper)
- Author
-
Nils Przigoda, Robert Wille, Rolf Drechsler, and Jonas Gomes Filho
- Subjects
Scheme (programming language) ,System of systems ,Engineering ,business.industry ,Distributed computing ,Real-time computing ,Swarm behaviour ,Task (project management) ,Constant (computer programming) ,Software deployment ,visual_art ,Electronic component ,visual_art.visual_art_medium ,business ,Model based verification ,computer ,computer.programming_language - Abstract
The recent advances with respect to the costs, size, and power consumption of electronic components paved the way for System of Systems (SoS), Cyber-Physical Systems (CPS), or the Internet of Things (IoT). As a next stage, these developments currently motivate the consideration of Complex Swarm Systems (CSS), i.e., continuously running systems that will dynamically change after deployment and are connected by heterogeneous components which can join and leave the system at any time. Due to this dynamic nature and the constant reconfigurations, it is not possible to completely verify those systems with conventional verification methods anymore. Therefore, we propose a new methodology which follows a different scheme: Instead of trying to verify all possible behavior of a CSS (which, due to the vast number of possible instantiations or connections of the heterogeneous components, becomes an impracticable task anyway), we aim for verifying that, at least, no scenario which violates certain (safety-critical) forbidden actions is possible. To this end, solutions for model-based verification are employed. By means of a case study, the feasibility and promises of the proposed methodology are illustrated.
- Published
- 2016
30. On the Break-Even Point between Cloud-Assisted and Legacy Routing (Short Paper)
- Author
-
Prasun Kanti Dey and Murat Yuksel
- Subjects
Routing protocol ,Dynamic Source Routing ,Static routing ,Equal-cost multi-path routing ,Computer science ,Routing table ,Distributed computing ,05 social sciences ,Policy-based routing ,050801 communication & media studies ,02 engineering and technology ,0508 media and communications ,Link-state routing protocol ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Forwarding plane - Abstract
As more than 40K service providers advertise 600K or more IP prefixes, the scalability of routing has become a matter of great concern. In this paper, to explore a spectrum of designs, we consider a Cloud-Assisted Routing (CAR) framework which follows a hybrid and opportunistic approach: it keeps the high-priority tasks at the router and uses an adaptive router-cloud integration when beneficial. In particular, it maintains most of the control-plane functions in the cloud and few at the local router, and vice versa for the data plane. Comparing the performance and monetary cost benefits of CAR against legacy routing, we discuss: i) What is the break-even point? ii) What are the key components needed for CAR to be monetarily beneficial? iii) What constraints would make traditional routing preferable to CAR?
- Published
- 2016
31. Model-Driven Policy Framework for Data Centers (Short Paper)
- Author
-
Angelos Mimidis, Cosmin Caba, and José Soler
- Subjects
Computer science ,business.industry ,Quality of service ,Distributed computing ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Data modeling ,Configuration Management (ITSM) ,020210 optoelectronics & photonics ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Operating system ,Data center ,Orchestration (computing) ,Routing (electronic design automation) ,business ,computer ,Policy-based management - Abstract
Data Centers (DCs) continue to become increasingly complex, as they comprise multiple functional entities (e.g. routing, orchestration). Managing the multitude of interconnected components in the DC becomes difficult and error-prone, leading to slow service provisioning, lack of QoS support, etc. Moreover, the lack of simple solutions for managing the configuration and behavior of the DC components makes the DC hard to configure and slow to adapt to changes in business needs. In this paper, we propose a model-driven framework for policy-based management of DCs, to simplify not only service provisioning but also the configuration management of the various DC components. The implemented prototype is presented, and a series of tests is performed to assess its performance and to gain key insights about policy-based management.
- Published
- 2016
32. Validation of Non-functional Requirements in Cloud Based Systems (Short Paper)
- Author
-
Rashmi Phalnikar
- Subjects
Graph rewriting ,Non-functional requirement ,Database ,Computer science ,computer.internet_protocol ,business.industry ,Process (engineering) ,Aspect-oriented programming ,Distributed computing ,020207 software engineering ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Unified Modeling Language ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Relevance (information retrieval) ,business ,computer ,XML ,computer.programming_language - Abstract
Cloud computing based on the Infrastructure-as-a-Service model allows system administrators to identify resources or services on which to deploy their applications. Selecting the most appropriate provider for a particular application is a difficult task, as a considerable number of providers offer huge numbers of services that have similar functionality yet are not directly comparable. Additionally, these systems must comply with non-functional requirements as they evolve. Certain traditional techniques are employed to verify the non-functional properties of critical systems at design time. In this paper, a new framework is suggested that supports the process of deploying applications to cloud providers using a comparison of high-level requirements, predominantly non-functional requirements (NFRs). Experiments matching user constraints during the service-selection process have already been conducted, with conclusive results. The experiments use the Aspect-Oriented Paradigm (AOP), XML, and graph transformation. The relevance of the framework is illustrated using a Remote Patient Monitoring (RPM) scenario. The results demonstrate that the model considerably improves the process by identifying conflicting NFRs. Based on this work, the next logical step is to extend it to Cloud Computing.
- Published
- 2016
33. Deterministic services for SpaceWire networks: SpaceWire networks and protocols, long paper
- Author
-
Elena Podgornova, Yuriy Sheynin, Irina Lavrovskaya, and Valentin Olenev
- Subjects
Job shop scheduling ,Spacecraft ,business.industry ,Computer science ,Distributed computing ,020208 electrical & electronic engineering ,Time division multiple access ,02 engineering and technology ,SpaceWire ,Scheduling (computing) ,0202 electrical engineering, electronic engineering, information engineering ,Space industry ,business ,Communications protocol ,Computer network ,Data transmission - Abstract
Deterministic behavior is an important paradigm for verification and validation of real-time systems such as those on crewed space vehicles and robotic spacecraft. Providing deterministic data-transfer characteristics for spacecraft that use the SpaceWire technology is an essential problem, especially for autonomous vehicles like satellites. Deterministic data delivery guarantees that transmission of data from one node of the onboard network to the target node will not take longer than a specified time period. This task is solved by using specific communication protocols that include a scheduling service. The modern space industry demands a protocol running over SpaceWire that can provide deterministic data-transmission characteristics. The scheduling problem becomes more complicated when we consider a number of communication protocols simultaneously operating in every node of the network, e.g. RMAP, STP-ISS, and CCSDS PTP. Traffic from different transport protocols can interfere, especially while getting access to the SpaceWire link in a node. The paper presents the Multiprotocol Scheduling Service, a new scheduling protocol for SpaceWire networks which provides deterministic data delivery in a network and performs arbitration of data coming from several transport protocols. Firstly, we give an overview of TDMA-based network protocols that have been developed for ground-based and onboard networks. Then, we present the Multiprotocol Scheduling Service, which is based on the STP-ISS scheduling mechanism and extended with additional features.
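The TDMA idea underlying such scheduling services is simply that each transport protocol owns the link during its assigned slots, so worst-case delivery time is bounded by the slot layout. A minimal round-robin slot table (an illustration of TDMA arbitration, not the actual Multiprotocol Scheduling Service) looks like this:

```python
def build_tdma_schedule(protocols, n_slots):
    """Assign time slots to transport protocols round-robin.

    Within any slot only one protocol may access the SpaceWire
    link, so protocols never contend and the time until a
    protocol's next slot is bounded by len(protocols) slots,
    which is what makes delivery deterministic.
    """
    return {slot: protocols[slot % len(protocols)]
            for slot in range(n_slots)}
```

A real scheduler would weight slots by each protocol's bandwidth demand and priority rather than splitting them evenly, but the determinism argument is the same: the bound comes from the table, not from traffic.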
- Published
- 2016
34. On Load Management in Service Oriented Networks (Short Paper)
- Author
-
Steven Davy, Micheal Crotty, Ehsan Elahi, Sander Vrijders, Dimitri Staessens, Rashid Mijumbi, Jason Barron, and Miguel Ponce de Leon
- Subjects
Computer science ,computer.internet_protocol ,business.industry ,Quality of service ,Distributed computing ,020206 networking & telecommunications ,Round-robin DNS ,02 engineering and technology ,Service-oriented architecture ,Load balancing (computing) ,01 natural sciences ,010104 statistics & probability ,Load management ,Server ,0202 electrical engineering, electronic engineering, information engineering ,0101 mathematics ,business ,Internetworking ,computer ,Queue ,Computer network - Abstract
In Service-Oriented Architecture (SOA), dedicated intermediate nodes called load balancers are usually deployed in data centers (DCs) in order to balance the load among multiple instances of an application service and to optimize resource utilization. However, the addition of these nodes increases the installation and operational cost of DCs. These load balancers distribute incoming flows to multiple outgoing ports, usually by hashing them. Several techniques are used to select the outgoing ports, e.g. round robin, queue length, or feedback from neighbors. Such load-balancing approaches do not consider getting live feedback from the service end and are therefore not able to dynamically change the amount of allocated resources. In this paper, a distributed load management scheme is proposed for service-oriented networks based on the current Internet architecture. In this scheme, lightweight interconnected management agents are used to decide the availability of a particular service instance and help in optimal distribution of the flows. The proposed scheme can also be applied in other emerging internetworking architectures such as RINA.
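The contrast the abstract draws, between feedback-free port selection and selection driven by live feedback from the service end, can be sketched in a few lines (the dispatcher names and the shape of the feedback callback are ours, purely for illustration):

```python
import itertools

def round_robin_picker(servers):
    """Feedback-free dispatcher: cycle through servers in order,
    ignoring how loaded each one currently is."""
    cycle = itertools.cycle(servers)
    return lambda: next(cycle)

def least_loaded_picker(servers, load_of):
    """Feedback-driven dispatcher: query each service instance's
    currently reported load and pick the least loaded one."""
    return lambda: min(servers, key=load_of)
```

With live feedback the second dispatcher automatically drains traffic away from an overloaded instance, which is the behavior the proposed management agents aim to provide without a dedicated load-balancer box.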
- Published
- 2016
35. Invited paper: Fast multi-channel Gibbs-sampling for clustering in cloud-based radio access networks
- Author
-
Ness B. Shroff, Xiaojun Lin, and Saurabh Misra
- Subjects
010302 applied physics ,Radio access network ,Computer science ,business.industry ,Distributed computing ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Reuse ,01 natural sciences ,symbols.namesake ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Leverage (statistics) ,Cluster analysis ,business ,Random matrix ,Multi channel ,Gibbs sampling - Abstract
In this paper, we study how to cluster Remote Radio Heads (RRHs) into Virtual Base-Stations (VBSs) in a Cloud-based Radio Access Network to optimally manage the tradeoff between improving the performance of cell-edge users and maintaining high spatial reuse for the overall system. We develop Gibbs-sampling based algorithms that can find the desirable global VBS configuration from an arbitrarily given set of allowable VBS configurations. While Gibbs-sampling has been used to solve other wireless control problems, its application to VBS clustering faces new challenges both due to the difficulty in estimating the quality of a VBS configuration under rapid channel variations, and due to a new global coupling effect. We leverage Random Matrix Theory to develop a method that can quickly estimate the quality of a VBS configuration based only on average channel statistics. Further, we use perturbation analysis to develop a distributed approximation of the Gibbs sampler to circumvent the global coupling effect, which then allows different parts of the network to search for better VBS configurations in parallel. Our numerical results demonstrate how the proposed algorithm can be used as a general tool to evaluate the system performance under a variety of clustering constraints.
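At its core, a Gibbs-sampling move over a discrete set of configurations draws the next configuration with probability proportional to exp(utility / T), so that low temperatures concentrate mass on high-utility configurations. The sketch below (function names and the utility interface are ours; the paper's actual sampler is distributed and estimates utilities from channel statistics) shows one such move:

```python
import math
import random

def gibbs_step(configs, utility, temperature, rng):
    """One Gibbs-sampling move over a finite set of allowable
    configurations: sample config c with probability proportional
    to exp(utility(c) / temperature)."""
    weights = [math.exp(utility(c) / temperature) for c in configs]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for c, w in zip(configs, weights):
        acc += w
        if r <= acc:
            return c
    return configs[-1]  # guard against floating-point round-off
```

Repeating such moves yields a Markov chain whose stationary distribution favors good VBS configurations; the paper's contribution is making each move cheap (via Random Matrix Theory utility estimates) and parallel (via a distributed approximation).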
- Published
- 2016
36. A graphical method to configure SpaceWire networks: SpaceWire networks and protocols, long paper
- Author
-
Alin Albu-Schaffer and Thomas Bahls
- Subjects
0209 industrial biotechnology ,business.industry ,Computer science ,Distributed computing ,05 social sciences ,050801 communication & media studies ,Topology (electrical circuits) ,02 engineering and technology ,Modular design ,Complex network ,Network topology ,SpaceWire ,020901 industrial engineering & automation ,0508 media and communications ,Embedded system ,Component (UML) ,Systems design ,business ,Heterogeneous network - Abstract
Complex robotic systems like the DLR Hand Arm System integrate a huge number of sensors and actuators. Hence system design, and especially communication-infrastructure design, has to be flexible within a heterogeneous network of different bus systems. As a basis, a modular electronic concept as well as a well-structured communication concept is necessary [1]-[3]. SpaceWire suits these requirements well: on one hand it supports arbitrary topologies, from point-to-point links up to complex network structures, and on the other hand it is easy to implement and has a small footprint. Additionally, its logical and regional addressing scheme enables changes in the topology during runtime simply by reprogramming the routing switches. However, such changes require expert knowledge. This work presents a graphical method to set up and configure SpaceWire network topologies, enabling non-experts to replace or integrate components in the system or to set up a test bed to investigate a specific aspect. The developer provides a GraphML description [4] specifying the SpaceWire communication capabilities of each component. The user is thus able to adapt the SpaceWire network topology, or to set up a new one, simply by merging the GraphML descriptions of the used components. A post-process then analyzes the GraphML description and generates the necessary configuration messages according to the topology. This enables faster development cycles and rapid prototyping. The approach is demonstrated and explained using the SpaceWire network topology of the DLR Hand Arm System.
- Published
- 2016
37. A review paper of an encryption scheme using network coding for energy optimization in MANET
- Author
-
Fenil Khatiwala and Shruti Patel
- Subjects
020203 distributed computing ,Computer science ,business.industry ,Network packet ,Distributed computing ,05 social sciences ,050301 education ,02 engineering and technology ,Mobile ad hoc network ,Energy consumption ,Encryption ,Permutation ,Linear network coding ,Computer Science::Networking and Internet Architecture ,0202 electrical engineering, electronic engineering, information engineering ,40-bit encryption ,business ,0503 education ,Computer Science::Cryptography and Security ,Computer network - Abstract
A mobile ad hoc network (MANET) is a collection of mobile nodes that forms an instant network without a fixed topology and can be arranged dynamically. Energy saving and security are important issues in MANETs. Network coding is used to reduce energy consumption through fewer transmissions. To achieve security, many encryption schemes are available; among them, P-coding is a lightweight encryption scheme that provides confidentiality. The idea of P-coding is to let the source randomly permute the symbols of each packet, so an eavesdropper cannot obtain meaningful information without knowing the permutation encryption function and the coding vector.
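The permutation idea can be demonstrated in a few lines. This is a simplified sketch of the principle only (real P-coding permutes the symbols of network-coded packets together with their coding vectors, and a seeded PRNG stands in here for a proper key-derived permutation):

```python
import random

def p_encrypt(packet: bytes, key: int) -> bytes:
    """Permute the packet's symbols with a key-derived permutation;
    without the permutation an eavesdropper cannot reassemble the
    payload (confidentiality via symbol shuffling)."""
    perm = list(range(len(packet)))
    random.Random(key).shuffle(perm)
    return bytes(packet[i] for i in perm)

def p_decrypt(cipher: bytes, key: int) -> bytes:
    """Rebuild the same permutation from the key and invert it."""
    perm = list(range(len(cipher)))
    random.Random(key).shuffle(perm)
    out = bytearray(len(cipher))
    for pos, i in enumerate(perm):
        out[i] = cipher[pos]
    return bytes(out)
```

Because encryption is only a permutation, it adds no payload expansion and very little computation, which is why the scheme is attractive for energy-constrained MANET nodes.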
- Published
- 2016
38. Review paper on implementation of multipath reactive routing protocol in manet
- Author
-
Amol K. Sapkal and Manish Y. Barange
- Subjects
Zone Routing Protocol ,Dynamic Source Routing ,business.industry ,Computer science ,Distributed computing ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Wireless Routing Protocol ,020302 automobile design & engineering ,02 engineering and technology ,Ad hoc wireless distribution service ,0203 mechanical engineering ,Link-state routing protocol ,Optimized Link State Routing Protocol ,Multipath routing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Destination-Sequenced Distance Vector routing ,business ,Computer network - Abstract
In a mobile ad hoc network (MANET), a collection of wireless devices move in seemingly random directions and communicate with one another without an established infrastructure. Communicating nodes in a MANET normally seek the help of other intermediate nodes to establish communication channels, so communication from source to destination may pass through many intermediate nodes. Multipath routing is preferable to single-path routing in mobile ad hoc networks because it allows several paths to be established between a single source and a single destination node. However, multipath routing raises problems of overhead management and transport performance. The aim of this work is therefore to design a wireless system using a reactive multipath routing protocol that achieves better data-transport performance than baseline protocols, improving throughput and packet delivery ratio while reducing overhead and end-to-end delay. The proposed approach implements an improved routing protocol that provides proper route updates, sets the required parameters to appropriate values, and generates a wireless network with a low error rate and fast packet generation. We use ns-2 for simulation; the simulation results are expected to show better data-transport performance than baseline protocols, with improvements in throughput and packet delivery ratio and reductions in overhead and end-to-end delay.
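The evaluation metrics named above, packet delivery ratio and end-to-end delay, are computed from per-packet send/receive timestamps in a simulation trace. A minimal sketch (the record format is an assumption, not the ns-2 trace format):

```python
def delivery_metrics(records):
    """Packet delivery ratio and mean end-to-end delay.

    records: list of (send_time, recv_time) pairs, with
    recv_time None for packets that were dropped.
    Returns (pdr, mean_delay); delay is averaged over
    delivered packets only.
    """
    delivered = [(s, r) for s, r in records if r is not None]
    pdr = len(delivered) / len(records) if records else 0.0
    mean_delay = (sum(r - s for s, r in delivered) / len(delivered)
                  if delivered else 0.0)
    return pdr, mean_delay
```

In practice these numbers come from parsing the simulator's trace file; the aggregation step is the same regardless of which routing protocol produced the trace.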
- Published
- 2016
39. Architecture for free space optical networks (invited paper)
- Author
-
Vincent W. S. Chan
- Subjects
Network architecture ,business.industry ,Computer science ,Automatic repeat request ,Distributed computing ,Big data ,Throughput ,Physics::Fluid Dynamics ,Channel capacity ,Architecture ,Layer (object-oriented design) ,business ,Block (data storage) ,Computer network - Abstract
Free-space optical networks have serious block erasures due to atmospheric turbulence and require an entirely new network architecture from Layer 1 to Layer 4 for big data flows.
- Published
- 2015
40. Automated Deployment and Parallel Execution of Legacy Applications in Cloud Environments (Short Paper)
- Author
-
Steffen Herbold, Michael Gottsche, Jens Grabowski, and Fabian Glaser
- Subjects
Speedup ,business.industry ,Computer science ,Distributed computing ,Legacy system ,020207 software engineering ,Provisioning ,Cloud computing ,02 engineering and technology ,Single-chip Cloud Computer ,Software deployment ,Cloud testing ,0202 electrical engineering, electronic engineering, information engineering ,Web application ,business - Abstract
Cloud Computing has long extended beyond its original focus of providing scalable on-demand resources for web applications and is now also ubiquitous in batch-style data-processing applications. Employing Cloud services for data-analysis tasks is a viable alternative for researchers who are limited by their locally available compute power and in need of timely execution. However, the provisioning and deployment of machines and applications at Infrastructure-as-a-Service (IaaS) providers is a non-trivial task for the average scientist. In this paper, we propose a framework for automating the deployment and execution of existing applications in a data-parallel fashion in Cloud environments with only negligible effort by the user. Our evaluation on a real-world scientific use case exhibits a significant speedup compared to local execution.
- Published
- 2015
41. Event-Based Monitoring of Service-Oriented Smart Spaces (Invited Paper)
- Author
-
Sam Guinea and Luciano Baresi
- Subjects
Service (systems architecture) ,Computer science ,Filter (video) ,business.industry ,Event based ,Distributed computing ,Real-time computing ,Smart spaces ,Complex event processing ,Context (language use) ,Space (commercial competition) ,business ,Building automation - Abstract
Cities, buildings, and spaces are smart when their "behavior" properly mimics their contexts, accommodating changes and evolving situations. This is only possible if they are able to collect data from the environment using distributed probes, and to filter, correlate, and analyze them appropriately. In this paper we adopt service-based solutions to abstract the probes that are distributed in the environment, as well as to provide the analysis capabilities required to make timely analyses on how a space is evolving. In such scenarios the amount of sensor data that one might need to deal with can be extremely high. Therefore, we provide an infrastructure that can manipulate high degrees of raw sensor data efficiently. The solution is based on complex event processing and recurring patterns, and is exemplified in the context of smart buildings.
- Published
- 2015
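The recurring-pattern detection this abstract describes can be illustrated with a minimal sketch; the sensor stream, event shape, and threshold rule below are hypothetical assumptions, not taken from the paper's infrastructure:

```python
from collections import deque

def detect_pattern(events, sensor, threshold, count):
    """Flag when `sensor` reports a value above `threshold` in `count`
    consecutive events -- a simple recurring pattern over a raw stream."""
    window = deque(maxlen=count)
    alerts = []
    for ts, sid, value in events:
        if sid != sensor:          # correlate only the probe of interest
            continue
        window.append(value > threshold)
        if len(window) == count and all(window):
            alerts.append(ts)
            window.clear()         # start matching the next occurrence
    return alerts

# Hypothetical sensor stream: (timestamp, sensor_id, reading)
stream = [(1, "temp-1", 19), (2, "temp-1", 27), (3, "temp-1", 28),
          (4, "temp-1", 29), (5, "temp-1", 18)]
print(detect_pattern(stream, "temp-1", 25, 3))  # → [4]
```

A real complex event processing engine would evaluate many such patterns concurrently over event windows; this sketch only shows the filter-then-correlate shape of one rule.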
42. Net2Plan: An open-source multilayer network planning tool and in-operation simulator (Demo paper)
- Author
-
Jose-Luis Izquierdo Zaragoza and Pablo Pavon-Marino
- Subjects
Network planning and design ,Software ,Network resource planning ,Event (computing) ,business.industry ,Computer science ,Distributed computing ,Resource allocation ,Transparency (human–computer interaction) ,business ,Resilience (network) ,Simulation ,Network simulation - Abstract
In this paper we describe the main features of the open-source Net2Plan tool. Built on top of a technology-agnostic vendor-neutral multilayer network representation, Net2Plan is designed to assist users in the evaluation of built-in or original user-developed planning algorithms. In addition, users can analyze their designs using either reports or a simulation tool for in-operation scenarios like network resilience, connection-admission-control, time-varying traffic resource allocation, or even combinations of them, using built-in or custom event generators or reaction algorithms. We motivate how a paradigm shift to an open-source view of network planning emphasizes the power of distributed peer-review, collaboration cycles and transparency to create high-quality software at an accelerated pace and lower cost.
- Published
- 2015
43. Different technique of load balancing in distributed system: A review paper
- Author
-
Riyazuddin Khan, Mohd. Shahid Husain, and Mohd Haroon
- Subjects
Load management ,Network Load Balancing Services ,Computer science ,Distributed computing ,High availability ,Server ,Scalability ,Round-robin DNS ,Central processing unit ,Load balancing (computing) - Abstract
Network load increases rapidly day by day, driven by user input and machine-to-machine communication, and this affects the performance of the computing nodes: if the load of a computing node is not balanced, the node cannot deliver the desired output. The load of a computing node is calculated from its queue length and available memory. The main objective of this work is to improve throughput while minimizing CPU time. Load balancing can be defined as a method of improving the performance of a distributed and parallel system by redistributing the load among the processors. With demands growing so fast, precise load balancing is an important factor in the efficiency of distributed system operation. It is, however, a difficult task in very large-scale distributed systems, whose global state changes dynamically. For high availability, throughput, and scalability, a distributed system needs better and more efficient load balancing. In this paper, we present an overview of the different techniques and methodologies given by researchers throughout the globe and analyse their effectiveness in improving load balancing methods.
- Published
- 2015
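The abstract's load metric (queue length plus available memory) and least-loaded dispatch can be sketched as follows; the weights and cluster figures are illustrative assumptions, not taken from any surveyed algorithm:

```python
def node_load(queue_length, free_memory_mb, w_queue=1.0, w_mem=0.5):
    """Combine queue length and (inverse) free memory into one load
    score; the weights are illustrative, not from the survey."""
    return w_queue * queue_length + w_mem / max(free_memory_mb, 1)

def least_loaded(nodes):
    """nodes: {name: (queue_length, free_memory_mb)} -> name of the
    node a new task should be dispatched to."""
    return min(nodes, key=lambda n: node_load(*nodes[n]))

# Hypothetical cluster state
cluster = {"n1": (5, 2048), "n2": (2, 512), "n3": (2, 4096)}
print(least_loaded(cluster))  # → n3
```

Ties on queue length are broken by free memory here (n2 vs n3); dynamic schemes would refresh these figures continuously as the global state changes.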
44. Call for Papers.
- Subjects
- *
LOCAL area networks , *HIGH performance computing , *DISTRIBUTED computing , *ARTIFICIAL intelligence , *PRIVATE networks - Abstract
The article discusses the paradigm of virtualization evolving from virtualized local area networks (LANs) and private networks to the solidification of network function virtualization (NFV). Topics include the pervasive utilization of artificial intelligence (AI) at all network levels, and the Digital Twin (DT) paradigm, recently deemed a promising tool for network design, optimization, and management.
- Published
- 2022
- Full Text
- View/download PDF
45. DNA: An SDN framework for Distributed Network Analytics (Demo Paper)
- Author
-
Ganesan Rajam, V. Anbalagan, Mouli Chandramouli, Ashwin Pankaj, Joe Zhang, Nitish Gupta, Manjunath Patil, Alexander Clemm, Yifan Zhang, and Robert Lerche
- Subjects
Web analytics ,Software analytics ,business.industry ,Analytics ,Computer science ,Proof of concept ,Distributed computing ,Big data ,Business intelligence ,business ,Bottleneck ,Networking hardware ,Computer network - Abstract
Analytics of network telemetry data helps address many important operational problems. Traditional Big Data approaches run into limitations even as they push scale boundaries for processing data further. One reason for this is the fact that in many cases, the bottleneck for analytics is not analytics processing itself but the generation and export of the data on which analytics depends. The amount of data that can be reasonably collected from the network runs into inherent limitations due to bandwidth and processing constraints in the network itself. In addition, management tasks related to determining and configuring which data to generate lead to significant deployment challenges. In order to address these issues, we propose a novel distributed solution to network analytics that we have implemented as a proof of concept, called DNA (Distributed Network Analytics). In DNA, analytics processing is performed at the source of the data by specialized agents embedded within network devices, which also dynamically set up and reconfigure telemetry data sources as required by an analytics task. An SDN controller application orchestrates network analytics tasks across the network to allow users to interact with the network as a whole instead of individual devices one at a time. Our demonstration of DNA includes a GUI front end used to specify and monitor analytics tasks as well as visualize analytics results.
- Published
- 2015
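The core idea of DNA, reducing telemetry at its source and combining only small summaries centrally, can be sketched as below; the function names and latency samples are hypothetical illustrations, not the paper's API:

```python
def device_agent(samples):
    """Runs on the device: reduce raw telemetry to a small summary
    instead of exporting every sample (count, sum, max suffice to
    recombine means and peaks centrally)."""
    return {"count": len(samples), "sum": sum(samples), "max": max(samples)}

def controller_merge(summaries):
    """Runs at the controller: combine per-device summaries into a
    network-wide view without ever seeing the raw data."""
    count = sum(s["count"] for s in summaries)
    return {"mean": sum(s["sum"] for s in summaries) / count,
            "max": max(s["max"] for s in summaries)}

# Hypothetical per-device latency samples (ms)
dev_a = [10, 12, 30]
dev_b = [8, 9]
merged = controller_merge([device_agent(dev_a), device_agent(dev_b)])
print(merged)  # → {'mean': 13.8, 'max': 30}
```

The bandwidth saving is the point: each device exports three numbers regardless of how many samples it collected, which is why the bottleneck shifts away from data export.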
46. A Survey Paper on Task Scheduling Methods in Cluster Computing Environment for High Performance
- Author
-
Harvinder Singh and Gurdev Singh
- Subjects
Rate-monotonic scheduling ,Job shop scheduling ,Computer science ,Distributed computing ,Symmetric multiprocessor system ,Dynamic priority scheduling ,Round-robin scheduling ,Supercomputer ,Swarm intelligence ,Fair-share scheduling ,Scheduling (computing) ,Fixed-priority pre-emptive scheduling ,Computer cluster ,Two-level scheduling - Abstract
Parallel computing performs concurrent execution of tasks on distributed nodes: a large application is split into tasks that run on a number of nodes for high-performance computing. A cluster environment is composed of heterogeneous devices and software components capable of cost-effective, high-performance execution of parallel applications. In a heterogeneous cluster environment, proper scheduling of tasks and their allocation to nodes is important for high performance, and many task scheduling algorithms have been proposed to achieve it on heterogeneous computing systems. We propose a swarm-intelligence methodology for task scheduling in a heterogeneous cluster environment, which yields better results in minimizing makespan and improving performance and resource utilization compared to other methods.
- Published
- 2015
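As a minimal illustration of the makespan objective such swarm techniques minimize, the sketch below computes makespan on a heterogeneous cluster and applies a simple longest-task-first greedy baseline; this is not the swarm method itself, and all task sizes and node speeds are assumed:

```python
def makespan(assignment, task_times, node_speeds):
    """Makespan = finish time of the busiest node, where a task's run
    time on a node is task_time / node_speed (heterogeneous cluster)."""
    finish = [0.0] * len(node_speeds)
    for task, node in enumerate(assignment):
        finish[node] += task_times[task] / node_speeds[node]
    return max(finish)

def greedy_schedule(task_times, node_speeds):
    """Longest task first, onto the node that would finish it earliest."""
    finish = [0.0] * len(node_speeds)
    assignment = [0] * len(task_times)
    for task in sorted(range(len(task_times)), key=lambda t: -task_times[t]):
        node = min(range(len(node_speeds)),
                   key=lambda n: finish[n] + task_times[task] / node_speeds[n])
        finish[node] += task_times[task] / node_speeds[node]
        assignment[task] = node
    return assignment

times = [4, 3, 2, 2]    # hypothetical task sizes
speeds = [1.0, 2.0]     # node 1 is twice as fast
plan = greedy_schedule(times, speeds)
print(plan, makespan(plan, times, speeds))  # → [1, 0, 1, 1] 4.0
```

A swarm approach would search the space of `assignment` vectors with this same `makespan` function as its fitness, typically escaping local minima the greedy heuristic gets stuck in.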
47. Short Paper: Performance evaluation on distributed data process in M2M systems
- Author
-
Izuru Sato, Ken-ichi Fukuda, and Masafumi Katoh
- Subjects
Distributed design patterns ,Data processing ,SIMPLE (military communications protocol) ,Queue management system ,Distributed database ,Distributed algorithm ,business.industry ,Computer science ,Distributed computing ,Next-generation network ,business ,Computer network ,Data modeling - Abstract
We show that distributed data processing for M2M applications improves the performance of M2M systems. The important roles of M2M systems are to associate an application with its required data and to execute the application on suitable processing entities in the coming ubiquitous environment. First, we present simple rules for the deployment of data processing that reduce load on the underlying M2M network. Next, we clarify the cases in which an application should be divided into subroutines for distributed execution. Finally, we model centralized and distributed execution as multi-stage queuing systems to show the improvement achieved by distributed execution.
- Published
- 2015
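The multi-stage queuing comparison the abstract mentions can be sketched with M/M/1 stages, whose mean response time is T = 1/(mu − lambda); all rates and the data-reduction factor below are assumed values, not the paper's parameters:

```python
def mm1_response_time(lam, mu):
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda).
    Requires lambda < mu for stability."""
    assert lam < mu, "queue is unstable"
    return 1.0 / (mu - lam)

# Two-stage centralized path: network transfer, then central processing.
lam = 8.0                     # raw data units per second (hypothetical)
mu_net, mu_proc = 10.0, 12.0
centralized = mm1_response_time(lam, mu_net) + mm1_response_time(lam, mu_proc)

# Distributed path: an edge subroutine filters/aggregates first, so only
# a fraction of the data crosses the network (reduction factor assumed).
reduction = 0.5
edge = mm1_response_time(lam, 20.0)   # fast local pre-processing stage
distributed = (edge
               + mm1_response_time(lam * reduction, mu_net)
               + mm1_response_time(lam * reduction, mu_proc))

print(round(centralized, 3), round(distributed, 3))  # → 0.75 0.375
```

Even with an extra edge stage, thinning the traffic that reaches the loaded network and processing queues halves the end-to-end response time in this toy setting, which is the qualitative effect the paper's model demonstrates.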
48. Guest Editorial.
- Author
-
Zhai, Jidong, Si, Min, and Pena, Antonio J.
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,PARALLEL programming ,LEARNING ability ,MACHINE learning - Abstract
This special section focuses on state-of-the-art parallel and distributed computing techniques for artificial intelligence (AI), machine learning (ML), and deep learning (DL). AI, ML, and DL give computers the ability to learn from large amounts of data and to use the learned model to optimize a complex problem or discover rules in a complicated system. They can be applied to push forward the boundaries of many domains and significantly influence our daily life. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. A Taxonomy of Job Scheduling on Distributed Computing Systems.
- Author
-
Lopes, Raquel V. and Menasce, Daniel
- Subjects
PRODUCTION scheduling ,DISTRIBUTED computing ,GRID computing ,CLOUD computing ,LABOR costs - Abstract
Hundreds of papers on job scheduling for distributed systems are published every year and it becomes increasingly difficult to classify them. Our analysis revealed that half of these papers are barely cited. This paper presents a general taxonomy for scheduling problems and solutions in distributed systems. This taxonomy was used to classify and make publicly available the classification of 109 scheduling problems and their solutions.
- Published
- 2016
- Full Text
- View/download PDF
50. Editorial: Introduction to the Issue on Distributed Machine Learning for Wireless Communication.
- Author
-
Yang, Ping, Dobre, Octavia A., Xiao, Ming, Renzo, Marco Di, Li, Jun, Quek, Tony Q. S., and Han, Zhu
- Abstract
The papers in this special section focus on the use of distributed machine learning for wireless communications. With the emergence of new application scenarios (e.g., real-time and interactive services and Internet of Things) and the fast development of smart terminals, wireless data traffic has increased drastically, and the existing wireless networks cannot completely meet the technical requirements of the next generation mobile communication networks, e.g., 6G. In recent years, machine learning-based methods have been considered as potential technologies for 6G, because in wireless communication systems, key issues behind synchronization, channel estimation, signal detection, and iterative decoding can be solved by well-designed machine learning algorithms. Currently, most wireless network machine learning solutions require the training data and learning process to be centralized in one or more data centers. However, these centralized machine learning methods expose disadvantages, e.g., privacy security, significant signaling overhead, increased implementation complexity, and high latency, which limit their practicality. The wireless networks of the future must make quicker and more reliable decisions at the network edge. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF