82 results for "Eiji, Oki"
Search Results
2. BPRIA: Crosstalk-Avoided Bi-Partitioning-Based Counter-Propagation Resource Identification and Allocation for Spectrally-Spatially Elastic Optical Networks
- Author
Bijoy Chand Chatterjee, Imran Ahmed, Abdul Wadud, Mukulika Maity, and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2022
3. BPA: Approximation Batch-Processing Algorithm for Static Lightpath Requests in Elastic Optical Networks
- Author
Bijoy Chand Chatterjee, Basavaprabhu S, and Eiji Oki
- Subjects
General Medicine
- Published
- 2022
4. Joint Inter-Core Crosstalk- and Intra-Core Impairment-Aware Lightpath Provisioning Model in Space-Division Multiplexing Elastic Optical Networks
- Author
Kenta Takeda, Takehiro Sato, Bijoy Chand Chatterjee, and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2022
5. Robust Optimization Model for Primary and Backup Resource Allocation in Cloud Providers
- Author
Bijoy Chand Chatterjee, Takehiro Sato, Fujun He, Urushidani Shigeo, Takashi Kurimoto, and Eiji Oki
- Subjects
Optimization problem, Computer Networks and Communications, Computer science, Heuristic (computer science), Distributed computing, Robust optimization, Cloud computing, Computer Science Applications, Hardware and Architecture, Virtual machine, Backup, Resource allocation, Integer programming, Software, Information Systems
- Abstract
This paper proposes a primary and backup resource allocation model that provides a probabilistic protection guarantee for virtual machines against multiple failures of physical machines in a cloud provider to minimize the required total capacity. A physical machine allocates both primary and backup computing resources for virtual machines. When any failure occurs, the surviving physical machines with preplanned backup resources recover the virtual machines on the failed physical machines and take over their workloads. The probability that the protection provided by a physical machine does not succeed is guaranteed to remain within a given value. Providing the probabilistic protection can reduce the required backup capacity by allowing backup resource sharing, but it leads to a nonlinear programming problem in the general-capacity case against multiple failures. We apply robust optimization with extensive mathematical operations to formulate the primary and backup resource allocation problem as a mixed integer linear programming problem in which capacity fragmentation is suppressed. We prove the NP-hardness of the considered problem. A heuristic is introduced to solve the optimization problem. The results reveal that the proposed model saves about one-third of the total capacity in our examined cases; it outperforms the conventional models in terms of both blocking probability and resource utilization.
- Published
- 2022
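The probabilistic protection idea in entry 5 can be illustrated with a small calculation: if each protected virtual machine's primary host fails independently, the reserved shared backup capacity only has to cover the failed demand with probability at least 1 − ε. The sketch below (plain Python; hypothetical demands and failure probabilities, not the paper's MILP or robust-optimization formulation) computes the smallest such capacity by convolving the per-VM failure distributions.

```python
# Sizing shared backup capacity under probabilistic protection.
# Hypothetical demands and failure probabilities; not the paper's MILP model.

def min_backup_capacity(demands, fail_probs, epsilon):
    """Smallest reserved capacity C with P(total failed demand > C) <= epsilon."""
    dist = {0: 1.0}                      # dist[c] = P(total demand of failed VMs == c)
    for d, p in zip(demands, fail_probs):
        new = {}
        for c, prob in dist.items():
            new[c] = new.get(c, 0.0) + prob * (1.0 - p)   # this VM's host survives
            new[c + d] = new.get(c + d, 0.0) + prob * p   # this VM's host fails
        dist = new
    for cap in range(sum(demands) + 1):  # scan capacities upward
        if sum(prob for c, prob in dist.items() if c > cap) <= epsilon:
            return cap
    return sum(demands)

if __name__ == "__main__":
    demands = [4, 2, 3, 2]                 # backup demand of each protected VM
    fail_probs = [0.02, 0.05, 0.01, 0.03]  # independent host failure probabilities
    print(min_backup_capacity(demands, fail_probs, epsilon=1e-3))
```

The distribution keeps at most one entry per achievable total demand, so the computation grows with the total demand rather than exponentially with the number of VMs.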
6. Shared Protection-Based Virtual Network Embedding Over Elastic Optical Networks
- Author
Eiji Oki and Fujun He
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2022
7. Resource Allocation Model Against Multiple Failures With Workload-Dependent Failure Probability
- Author
Mengfei Zhu, Fujun He, and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2022
8. Service Chain Provisioning With Sub-Chain-Enabled Coordinated Protection to Satisfy Availability Requirements
- Author
Eiji Oki, Yuncan Zhang, and Fujun He
- Subjects
Service (systems architecture), Optimization problem, Cost efficiency, Computer Networks and Communications, Computer science, Distributed computing, Provisioning, Software deployment, Component (UML), Scalability, Electrical and Electronic Engineering, Function (engineering)
- Abstract
This paper proposes a sub-chain-enabled coordinated protection model for availability-guaranteed service function chain (SFC) provisioning, which considers the availability of each component that constitutes an SFC, including links and virtual network functions (VNFs). Unlike conventional protection models that provide a fixed level of protection for the whole chain, the proposed model configures sub-chains for each SFC and provides appropriate protection for each sub-chain to achieve the required availability in a cost-efficient way. We formulate the proposed model as an optimization problem to minimize the deployment cost. A game approach is presented to tackle the problem. The numerical results show that the proposed model outperforms the conventional ones in terms of deployment cost, and that the game approach remains scalable as the problem size increases.
- Published
- 2022
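For entry 8, the effect of protecting individual sub-chains can be seen with elementary series/parallel availability arithmetic: a sub-chain is up only if all of its components are up, and replicating a sub-chain k times leaves it down only if every copy is down. The sketch below assumes independent failures and hypothetical availability figures; it is not the paper's optimization or game-based approach.

```python
# Availability of a service chain with per-sub-chain protection (independent failures).
from math import prod

def subchain_availability(component_avails, replicas):
    """Availability of one sub-chain deployed with `replicas` parallel copies."""
    single = prod(component_avails)          # series: every VNF/link of the copy must be up
    return 1.0 - (1.0 - single) ** replicas  # parallel protection across the copies

def chain_availability(subchains):
    """subchains: list of (component availabilities, number of replicas)."""
    return prod(subchain_availability(a, r) for a, r in subchains)

if __name__ == "__main__":
    # two sub-chains: the fragile one gets two copies, the solid one stays unprotected
    sfc = [([0.999, 0.995, 0.999], 2), ([0.9999, 0.9999], 1)]
    print(f"end-to-end availability: {chain_availability(sfc):.6f}")
```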
9. SDFA: A Service-Driven Fragmentation-Aware Resource Allocation in Elastic Optical Networks
- Author
Ao Yu, Jie Zhang, Qiuyan Yao, Bowen Bao, Eiji Oki, Bijoy Chand Chatterjee, and Hui Yang
- Subjects
Computer Networks and Communications, Computer science, Distributed computing, Service (economics), Fragmentation (computing), Resource allocation, Electrical and Electronic Engineering
- Published
- 2022
10. Optimization Model for Primary and Backup Resource Allocation With Workload-Dependent Failure Probability
- Author
Mengfei Zhu, Fujun He, and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2022
11. Shared Backup Path Protection-Based Resource Allocation Considering Inter-Core and Inter-Mode Crosstalk for Spectrally-Spatially Elastic Optical Networks
- Author
Joy Halder, Mukulika Maity, Eiji Oki, and Bijoy Chand Chatterjee
- Subjects
Modeling and Simulation, Electrical and Electronic Engineering, Computer Science Applications
- Published
- 2022
12. Service Deployment Model on Shared Virtual Network Functions With Flow Partition
- Author
Jingxiong Zhang, Eiji Oki, and Fujun He
- Subjects
Network function virtualization, service deployment, Computer Networks and Communications, flow partition, queueing theory
- Abstract
Network operators can operate services in a flexible way with virtual network functions thanks to network function virtualization technology. Flow partition allows aggregated traffic to be split into multiple parts, which increases this flexibility. This paper proposes a service deployment model with flow partition to minimize the service deployment cost while meeting service delay requirements. A virtual network function of a service is allowed to have several instances, each of which hosts a part of the flows and can be shared among different services, to reduce the initial and proportional costs. We provide the mathematical formulation of the proposed model and transform a special case of it into a mixed integer second-order cone programming (MISOCP) problem. A heuristic algorithm, called the flow partition heuristic (FPH), is introduced to solve the original problem in practical time by decomposing it into several steps, each of which handles a convex problem. We compare the performances of the proposed model with flow partition and the conventional model without flow partition. We also consider the formulated MISOCP problem with an even-splitting strategy to divide the flows in the special case, which is called the even splitting heuristic (ESH). The performances of FPH and ESH are compared in a realistic scenario. We further consider the formulated MISOCP problem as the original problem and compare it to an FPH-based heuristic algorithm with the even-splitting strategy (FPH-ES), in both realistic and synthetic scenarios. The numerical results reveal that the proposed model saves the service deployment cost compared to the conventional one. It improves the maximum admissible traffic scale by 23% on average in our examined cases. We observe that FPH outperforms ESH, and ESH outperforms FPH-ES, in terms of the service deployment cost in their respective problem settings.
- Published
- 2022
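Entry 12 trades instance count against queueing delay. Under a simple M/M/1 view, evenly splitting an aggregate flow of rate λ across k identical instances gives each instance a mean sojourn time of 1/(μ − λ/k), so more instances lower the delay at the price of more fixed cost. The sketch below uses this simplification with hypothetical rates and costs; it is not the MISOCP model or the FPH heuristic from the paper.

```python
# Even flow splitting across shared VNF instances, M/M/1 view.
# Hypothetical rates and costs; not the paper's MISOCP model or FPH heuristic.

def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 queue, or None if the queue is unstable."""
    if arrival_rate >= service_rate:
        return None
    return 1.0 / (service_rate - arrival_rate)

def cheapest_even_split(total_rate, service_rate, delay_req, fixed_cost, unit_cost, max_k=64):
    """Smallest number of identical instances whose per-instance delay meets delay_req."""
    for k in range(1, max_k + 1):
        delay = mm1_delay(total_rate / k, service_rate)
        if delay is not None and delay <= delay_req:
            cost = k * fixed_cost + unit_cost * total_rate   # initial + proportional cost
            return k, delay, cost
    return None                                              # requirement not reachable

if __name__ == "__main__":
    print(cheapest_even_split(total_rate=90.0, service_rate=40.0,
                              delay_req=0.05, fixed_cost=10.0, unit_cost=0.2))
```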
13. Spatial-Importance-Based Computation Scheme for Real-Time Object Detection From 3D Sensor Data
- Author
Ryo Otsu, Ryoichi Shinkuma, Takehiro Sato, Eiji Oki, Daiki Hasegawa, and Toshikazu Furuya
- Subjects
Smart monitoring, General Computer Science, Monitoring, Object detection, General Engineering, Image edge detection, Point cloud compression, edge computing, Three-dimensional displays, General Materials Science, LIDAR sensor, Real-time systems, point cloud
- Abstract
Three-dimensional (3D) sensor networks using multiple light-detection-and-ranging (LIDAR) sensors are well suited to smart monitoring of spots with a high potential risk of road-traffic accidents, such as intersections. The image sensors must share the strictly limited computation capacity of an edge computer. To achieve the computation speeds required by real-time applications, the system must keep the computation delay short while maintaining the quality of the output, e.g., the accuracy of object detection. This paper proposes a spatial-importance-based computation scheme that can be implemented on an edge computer of image-sensor networks composed of 3D sensors. The scheme treats regions where objects are more likely to exist as regions of higher spatial importance. It processes the point-cloud data from each region according to the spatial importance of that region. By prioritizing regions with high spatial importance, it shortens the computation delay involved in object detection. A point-cloud dataset obtained by a moving car equipped with a LIDAR unit was used to numerically evaluate the proposed scheme. The results indicate that the scheme shortens the delay in object detection.
- Published
- 2022
14. Virtual Network Function Allocation in Service Function Chains Using Backups With Availability Schedule
- Author
Eiji Oki, Rui Kang, and Fujun He
- Subjects
Service (business), Schedule, Computer Networks and Communications, Computer science, Node (networking), Function (mathematics), Backup, Key (cryptography), Electrical and Electronic Engineering, Virtual network, Integer programming, Computer network
- Abstract
A suitable virtual network function (VNF) placement that considers a node availability schedule extends the continuous serviceable time of a service by suppressing service interruptions caused by function reallocation and node unavailabilities. However, function placement alone cannot avoid service interruptions caused by node unavailabilities. This paper proposes a primary and backup VNF placement model that avoids service interruptions caused by node unavailabilities by using backup functions. The considered backup functions require a startup period for preparation before they can be used, and their number is limited. The proposed model is formulated as an integer linear programming problem to place the primary and backup VNFs over consecutive time slots based on the availability schedule. We aim to maximize the minimum number of continuously available time slots over all service function chains (SFCs) under the deterministic availability schedule. We observe that the proposed model, which considers the limited number of backup functions, outperforms baseline models in terms of the minimum number of longest continuously available time slots over all SFCs. We introduce an algorithm to estimate the number of key unavailabilities at each time slot, which finds the unavailable nodes that are the bottlenecks for increasing the continuous available time of services at each time slot.
- Published
- 2021
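The objective in entry 14 is the length of the longest run of time slots in which a chain stays up, where a slot is lost either because a hosting node is unavailable or because a function was just reallocated. The sketch below only evaluates a given placement against a hypothetical availability schedule; it does not solve the paper's ILP. The second placement in the example moves off a node before its scheduled outage and therefore achieves a longer continuous run.

```python
# Longest continuous service time of a chain under a node availability schedule.
# Evaluates a given placement only; schedules and placements are hypothetical.

def longest_available_run(placement, schedule):
    """placement[t]: nodes hosting the chain's VNFs in slot t.
    schedule[node][t]: truthy if the node is available in slot t."""
    best = run = 0
    for t, nodes in enumerate(placement):
        up = all(schedule[n][t] for n in nodes)
        moved = t > 0 and nodes != placement[t - 1]   # a reallocation interrupts service
        run = run + 1 if up and not moved else 0
        best = max(best, run)
    return best

if __name__ == "__main__":
    schedule = {"n1": [1, 1, 1, 0, 1, 1], "n2": [1] * 6, "n3": [1] * 6}
    stay = [("n1", "n2")] * 6                    # keeps using n1 through its outage
    dodge = [("n1", "n2")] + [("n3", "n2")] * 5  # moves off n1 well before the outage
    print(longest_available_run(stay, schedule), longest_available_run(dodge, schedule))
```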
15. Approximation Algorithms to Distributed Server Allocation With Preventive Start-Time Optimization Against Server Failure
- Author
Souhei Yanase, Eiji Oki, and Fujun He
- Subjects
Polynomial, Mathematical optimization, Server allocation, Triangle inequality, Computer science, Server, Approximation algorithm, Start time, Integer programming
- Abstract
This letter proposes two polynomial-time approximation algorithms for the distributed server allocation problem with preventive start-time optimization against a failure. Server allocation is decided in advance to minimize the largest maximum delay among all failure patterns. We analyze the approximation performances of the proposed algorithms when only the delays between servers, or all the delays including those between servers and users, satisfy the triangle inequality. Numerical results reveal that the proposed algorithms are 3.0×10^3 to 1.9×10^7 times faster than the integer linear programming approach while increasing the largest maximum delay by a factor of 1.033 on average.
- Published
- 2021
16. Scaling and Offloading Optimization in Pre-CORD and Post-CORD Multi-Access Edge Computing
- Author
Eiji Oki, Ying-Dar Lin, Yuan-Cheng Lai, and Widhi Yahya
- Subjects
Base station, Cord, Computer Networks and Communications, Computer science, Server, Electrical and Electronic Engineering, Multi access, Scaling, Edge computing, Computer network
- Published
- 2021
17. Robust Optimization Model for Probabilistic Protection With Multiple Types of Resources
- Author
Mitsuki Ito, Eiji Oki, and Fujun He
- Subjects
Mathematical optimization, Computer Networks and Communications, Computer science, Probabilistic logic, Robust optimization, Electrical and Electronic Engineering
- Published
- 2021
18. Main and Secondary Controller Assignment With Optimal Priority Policy Against Multiple Failures
- Author
Fujun He and Eiji Oki
- Subjects
Set (abstract data type), Computer Networks and Communications, Computer science, Control theory, Survivability, Electrical and Electronic Engineering, Latency (engineering), Software-defined networking, Greedy algorithm, Integer programming
- Abstract
This paper proposes a master and slave controller assignment model against multiple controller failures in software-defined networks, considering the latency between switches and controllers. The survivability guarantee of each switch is satisfied by assigning a set of controllers, one of which works as the master controller that controls the switch. Given the assigned controllers, we introduce a policy-based approach to automatically specify the master controllers in each failure pattern, which leads to a lightweight configuration on a switch. We define the average-case expected latency, the worst-case expected latency, and the expected number of switches within a latency bound as three objectives to be optimized in three different problems. We prove that a low-latency-first policy achieves the optimal objective for each considered problem. We formulate the proposed controller assignment model with the different goals as three mixed integer linear programming problems. We prove the NP-completeness of all three problems. A greedy algorithm with polynomial time complexity is developed; we show that it provides a 1/2-approximation for the case without the survivability guarantee constraint. The numerical results show that the proposed model obtains the optimal objective value with a computation time about 10^2 times shorter than that of a baseline that introduces decision variables to determine the master controllers.
- Published
- 2021
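The policy side of entry 18 is easy to state in code: for each switch, given its assigned controller set, the surviving controller with the lowest latency acts as the master in every failure pattern. The sketch below applies that low-latency-first rule and averages the resulting latency over independently enumerated controller failures; latencies, failure probabilities, and the penalty for losing all controllers are hypothetical, and the paper's MILP formulations and greedy algorithm are not reproduced.

```python
# Low-latency-first master selection over an assigned controller set.
# Enumeration is exponential in the number of controllers: tiny examples only.
from itertools import product

def master(assigned, latency, failed):
    """Lowest-latency surviving controller for one switch, or None if all failed."""
    return min((c for c in assigned if c not in failed),
               key=lambda c: latency[c], default=None)

def expected_latency(assigned, latency, fail_prob, penalty=1e3):
    """Expected latency of one switch over independent controller failures."""
    controllers = list(fail_prob)
    total = 0.0
    for pattern in product((False, True), repeat=len(controllers)):
        failed = {c for c, down in zip(controllers, pattern) if down}
        prob = 1.0
        for c, down in zip(controllers, pattern):
            prob *= fail_prob[c] if down else 1.0 - fail_prob[c]
        m = master(assigned, latency, failed)
        total += prob * (latency[m] if m is not None else penalty)
    return total

if __name__ == "__main__":
    latency = {"c1": 2.0, "c2": 5.0, "c3": 9.0}
    fail_prob = {"c1": 0.05, "c2": 0.05, "c3": 0.05}
    print(expected_latency(assigned=["c1", "c3"], latency=latency, fail_prob=fail_prob))
```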
19. Spatial Feature-Based Prioritization for Transmission of Point Cloud Data in 3D-Image Sensor Networks
- Author
Eiji Takahashi, Takehiro Sato, Dai Kanetomo, Ryoichi Shinkuma, Takanori Iwai, Masamichi Oka, Eiji Oki, Koichi Nihei, and Kozo Satoda
- Subjects
Heuristic (computer science), Computer science, Point cloud, Feature selection, Missing data, Transmission (telecommunications), Feature (computer vision), Bandwidth (computing), Data mining, Electrical and Electronic Engineering, Instrumentation, Wireless sensor network
- Abstract
A sensor network that connects multiple 3D image sensor devices using light detection and ranging (LIDAR) is effective for collecting spatial information to detect potential risks associated with people’s movements. However, due to the large volume and high frame rate of the point cloud data obtained by LIDAR devices, under strictly limited bandwidth some of the data can go missing when detection is performed on the point cloud data, degrading the accuracy. As the communication traffic volume of other applications increases, the bandwidth available for spatial-information applications becomes insufficient. This paper proposes a scheme of spatial feature-based prioritization for the transmission of point cloud data under strictly limited communication bandwidth. It estimates how much each piece of point cloud data contributes to the accuracy improvement of a machine learning task for spatial prediction and assigns a higher score to data with a larger contribution. These scores are used to prioritize the transmission of point cloud data from sensor devices to an edge server. Feature selection with machine learning is helpful for estimating data importance. To improve the estimation accuracy, the proposed scheme complements missing entries when performing feature selection. We developed an indoor experimental environment and found that the prediction accuracy was sufficiently maintained compared to two benchmark schemes, random and heuristic, under limited communication bandwidth. Comparison with the perturbation method, a conventional method of feature selection, demonstrated the effectiveness of complementing missing values in the proposed scheme.
- Published
- 2021
20. Robust Virtual Network Function Allocation in Service Function Chains With Uncertain Availability Schedule
- Author
Fujun He, Eiji Oki, and Rui Kang
- Subjects
Schedule, Mathematical optimization, Computer Networks and Communications, Robustness (computer science), Computer science, Node (networking), Robust optimization, Electrical and Electronic Engineering, Unavailability, Virtual network, Maintenance engineering, Integer programming
- Abstract
The availability schedule provides information on whether each network node is available at each time slot. The service interruptions caused by node unavailability marked in the availability schedule can be suppressed if the functions are allocated according to that schedule. However, the given availability schedule may differ from the actual one, which affects the VNF allocation. This paper proposes a robust optimization model that allocates virtual network functions (VNFs) in service function chains (SFCs) for time slots in sequence, aiming to maximize the continuous available time of SFCs in a network with uncertain availability schedules by suppressing the interruptions caused by node unavailability marked in the availability schedule and by function reallocation. We formulate the problem as a mixed integer linear programming (MILP) problem over the given uncertainty set of the start time slot and period of unavailability on each node in the availability schedule. To solve the model in practical time for relatively large networks, we develop a heuristic algorithm. The numerical results show that the proposed model outperforms the baseline models under different levels of robustness in terms of the worst-case minimum number of the longest continuously available time slots in each SFC. The heuristic algorithm reduces the computation time with limited performance loss compared with the MILP approach. In the discussion, we introduce a constraint condition for the maintenance ability, which reduces the size of the uncertainty set, and an extension that supports more than one unavailability period in the availability schedule on each node.
- Published
- 2021
21. Guest Editors’ Introduction: Special Section on Design and Management of Reliable Communication Networks
- Author
Teresa Gomes, Chadi Assi, Massimo Tornatore, Eiji Oki, Carmen Mas-Machuca, and Sara Ayoubi
- Subjects
wireless networks, Service (systems architecture), Resilience, Computer Networks and Communications, Computer science, Reliability (computer networking), network design, Reconfigurability, optical networks, critical services, network management, Telecommunications network, Replication (computing), NFV, Wireless, Electrical and Electronic Engineering, Software-defined networking, Telecommunications, Edge computing
- Abstract
This special section features the latest research contributions regarding the design and management of reliable networks. Reliability of the communication infrastructure is a top priority for network operators. To ensure reliable network operation, new design and management techniques for reliable communications must be constantly devised to respond to the rapid network and service evolution. As a recent and relevant example, deployments of 5G communication networks will soon enter their second phase, during which the network infrastructure will require upgrades to support new Ultra-Reliable Low-Latency Communication (URLLC) services with availabilities of up to six nines guaranteed jointly with extremely low latencies. Even in the still preliminary vision of 6G communication networks, reliability is posed as one of the most critical requirements, as 6G networks will represent the communication platform of our future hyper-connected society, supporting essential services such as smart mobility, e-health, and immersive environments with applications in remote education and remote working, to name a few. Similarly, disaster resiliency in communication networks is now attracting the attention of media, government, and industry as never before (consider, e.g., the worldwide network traffic deluge to support remote working during the current Coronavirus pandemic). Luckily, several new technical directions can be leveraged to provide new solutions for network reliability, such as: increased network reconfigurability enabled by Software Defined Networking (SDN); integration/convergence of multiple technologies (optical, wireless, satellite, and datacenter networks); enhanced forms of data/service replication, supported by, e.g., edge computing; and network slicing, used to carve highly reliable logical partitions of network, computing, and storage resources. These and many other technological transformations can be leveraged to enable next-generation high-reliability networks.
- Published
- 2021
22. Proactive Fragmentation Management Scheme Based on Crosstalk-Avoided Batch Processing for Spectrally-Spatially Elastic Optical Networks
- Author
Bijoy Chand Chatterjee, Abdul Wadud, and Eiji Oki
- Subjects
Computer Networks and Communications, Computer science, Heuristic (computer science), Distributed computing, Bandwidth (signal processing), Batch processing, Benchmark (computing), Resource management, Electrical and Electronic Engineering, Routing (electronic design automation), Integer programming, Market fragmentation
- Abstract
Fragmentation with crosstalks is the major obstacle in spectrally-spatially elastic optical networks, which suppresses resource utilization while degrading the quality-of-transmission. To overcome this issue, this article proposes, for the first time, a proactive fragmentation management scheme based on batch processing while satisfying both inter-core and inter-mode crosstalks to enhance resource utilization. The proposed scheme adopts a batch processing method to create batches of lightpath requests received within a time threshold to utilize spectrum resources effectively. In batch processing, lightpath requests are prioritized based on the number of links in their routes and required slots. To maintain fairness in batch processing, when any request is rejected, the proposed scheme triggers a procedure that gives an equal opportunity to all arriving requests within the threshold, irrespective of numbers of hops and requested capacities, for allocation. We formulate the static batch processing of lightpath requests (SBPLR) as an integer linear programming (ILP) problem. We prove that SBPLR is an NP-Complete problem. We introduce a heuristic solution when ILP is intractable. To serve lightpath requests in each batch while avoiding inter-core and inter-mode crosstalks, we develop a core-mode-spectrum allocation algorithm. We present a dynamic batch processing based fragmentation management approach. Numerical results indicate that the proposed scheme outperforms the benchmark schemes.
- Published
- 2021
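Entry 22 groups lightpath requests arriving within a time threshold into batches and orders them before allocation. The sketch below shows only that batching step; the ordering rule (more hops first, then more requested slots) is one plausible reading of the prioritization described in the abstract, not the paper's exact rule, and no core/mode/spectrum assignment or crosstalk check is included. Request data are hypothetical.

```python
# Batching and ordering lightpath requests before allocation (schematic only).
from collections import namedtuple

Request = namedtuple("Request", "rid arrival hops slots")

def build_batches(requests, threshold):
    """Group requests whose arrival times fall within `threshold` of the batch start."""
    batches, current, start = [], [], None
    for req in sorted(requests, key=lambda r: r.arrival):
        if current and req.arrival - start > threshold:
            batches.append(current)
            current = []
        if not current:
            start = req.arrival
        current.append(req)
    if current:
        batches.append(current)
    return batches

def prioritize(batch):
    """Serve longer routes and larger demands first within a batch."""
    return sorted(batch, key=lambda r: (r.hops, r.slots), reverse=True)

if __name__ == "__main__":
    reqs = [Request("a", 0.1, 2, 3), Request("b", 0.3, 5, 2),
            Request("c", 0.4, 5, 4), Request("d", 1.7, 1, 1)]
    for batch in build_batches(reqs, threshold=1.0):
        print([r.rid for r in prioritize(batch)])
```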
23. Backup Allocation Model With Probabilistic Protection for Virtual Networks Against Multiple Facility Node Failures
- Author
Eiji Oki and Fujun He
- Subjects
Computer Networks and Communications, Computer science, Node (networking), Probabilistic logic, Virtualization, Shared resource, Capacity planning, Backup, Resource allocation, Resource management, Electrical and Electronic Engineering, Computer network
- Abstract
This paper proposes a backup computing and transmission resource allocation model for virtual networks that provides probabilistic protection against multiple facility node failures. The proposed model aims to find the allocation that minimizes the required backup computing capacity while guaranteeing that the probability that the protection fails due to insufficient reserved backup computing capacity stays within a given value. A previous study considers only the backup computing resource allocation for virtual nodes, regardless of the network aspects. In this paper, backup transmission resource allocation is incorporated, where the required backup transmission capacity can affect the required backup computing capacity. We analyze backup transmission resource sharing in the case of multiple facility node failures to compute the minimum required backup transmission capacity. A heuristic algorithm is introduced to solve the problem; in particular, several techniques based on graph theory are developed to handle the problem with full backup transmission resource sharing. The results show that the proposed model outperforms a baseline with dedicated protection for computing resources. Furthermore, the application scenarios for the proposed model with different degrees of backup transmission resource sharing are analyzed. With our analyses, a network operator can set an appropriate degree of backup transmission resource sharing based on practical requirements.
- Published
- 2021
24. Optimization Model for Multiple Backup Resource Allocation With Workload-Dependent Failure Probability
- Author
Eiji Oki, Mengfei Zhu, and Fujun He
- Subjects
Mathematical optimization, Optimization problem, Computer Networks and Communications, Computer science, Backup, Server, Resource allocation, Workload, Resource management, Electrical and Electronic Engineering, Integer programming, Upper and lower bounds
- Abstract
This paper proposes a multiple backup resource allocation model with a workload-dependent failure probability to minimize the maximum expected unavailable time (MEUT) under a protection priority policy. The workload-dependent failure probability is a non-decreasing function that reveals the relationship between the workload and the failure probability. The proposed model adopts hot-backup and cold-backup strategies to provide protection. For the protection of each function with multiple backup resources, a suitable priority policy is required to determine the expected unavailable time. We analyze the superiority of the protection priority policy for multiple backup resources in the proposed model; we provide theorems that clarify the influence of policies on MEUT. We formulate the optimization problem as a mixed integer linear programming (MILP) problem. We provide a lower bound of the optimal objective value in the proposed model. We prove that the decision version of the multiple resource allocation problem in the proposed model is NP-complete. A heuristic algorithm inspired by the water-filling algorithm is developed, and an upper bound of the expected unavailable time obtained by the algorithm is provided. The numerical results show that the proposed model reduces MEUT compared to the baselines. The priority policy adopted in the proposed model suppresses MEUT compared with other priority policies. The developed heuristic algorithm is approximately 10^6 times faster than the MILP approach with a 10^-4 performance penalty on MEUT.
- Published
- 2021
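In entry 24 the failure probability of a server is a non-decreasing function of its workload. The sketch below uses a toy linear curve and a water-filling-style greedy rule, always placing the next backup on the server whose post-assignment failure probability is lowest, to spread backup workloads. It is only in the spirit of the paper's heuristic: the hot/cold backup distinction, the priority policy, and MEUT itself are not modeled, and all numbers are hypothetical.

```python
# Greedy backup placement under a workload-dependent failure probability.
# Toy failure curve and hypothetical workloads; "in the spirit of" water-filling only.

def failure_prob(load, capacity):
    """Toy non-decreasing curve: 1% when idle, rising linearly to 20% at full load."""
    return 0.01 + 0.19 * min(load / capacity, 1.0)

def assign_backups(workloads, capacities):
    """Place each backup workload, heaviest first, on the server that stays least failure-prone."""
    loads = [0.0] * len(capacities)
    placement = []
    for w in sorted(workloads, reverse=True):
        s = min(range(len(capacities)),
                key=lambda i: failure_prob(loads[i] + w, capacities[i]))
        loads[s] += w
        placement.append((w, s))
    return placement, [failure_prob(l, c) for l, c in zip(loads, capacities)]

if __name__ == "__main__":
    placement, probs = assign_backups(workloads=[5, 3, 3, 2, 1], capacities=[10, 8, 8])
    print(placement)
    print([round(p, 3) for p in probs])
```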
25. Priority-Based Inter-Core and Inter-Mode Crosstalk-Avoided Resource Allocation for Spectrally-Spatially Elastic Optical Networks
- Author
Abdul Wadud, Imran Ahmed, Bijoy Chand Chatterjee, and Eiji Oki
- Subjects
Mathematical optimization, Optimization problem, Computer Networks and Communications, Computer science, Blocking (statistics), Computer Science Applications, Frequency allocation, Core (game theory), Resource allocation, Resource management, Electrical and Electronic Engineering, Routing (electronic design automation), Integer programming, Software
- Abstract
Spectrally-spatially elastic optical networks (SS-EONs) have been considered nowadays to overcome the physical barrier and enhance the transport capacity, where enhancing spectrum utilization while satisfying inter-core and inter-mode crosstalks is always challenging. This paper proposes a priority-based crosstalk-avoided core, mode and spectrum allocation scheme in SS-EONs, which enhances resource utilization while satisfying both constraints of inter-core crosstalk and inter-mode crosstalk. The proposed scheme creates different groups of cores and modes and assigns a priority to each of them. Core and mode are selected for serving lightpath requests based on their priority order. We define an optimization problem for routing, modulation assignment, spectrum, core, and mode allocation (RA-SCMA) in SS-EONs considering both constraints of inter-core crosstalk and inter-mode crosstalk simultaneously. The optimization problem is formulated as an integer linear programming problem. We prove that the decision version of RA-SCMA is NP-complete. We present crosstalk-avoided core-mode-spectrum allocation considering a dynamic scenario. Numerical results indicate that the proposed scheme reduces the blocking probability in SS-EONs and allows up to 40% increased traffic loads by utilizing the crosstalk-avoided unutilized slots compared to the conventional scheme that adopts a core-mode-spectrum first fit policy.
- Published
- 2021
26. Preventive Start-Time Optimization to Determine Link Weights Against Probabilistic Link Failures
- Author
Yuki Hirano, Eiji Oki, Takehiro Sato, and Fujun He
- Subjects
Mathematical optimization, Computer Networks and Communications, Computer science, Quality of service, Node (networking), Open Shortest Path First, Probabilistic logic, Network topology, Network congestion, Network planning and design, Electrical and Electronic Engineering
- Abstract
This article proposes a network design model to minimize the worst-case network congestion against multiple link failures, where open shortest path first link weights are determined at the beginning of network operation. In the proposed model, which is called the preventive start-time optimization model against multiple link failures (PSO-M), the number of multiple link failure patterns to support is restricted by introducing a probabilistic constraint called probabilistic guarantee . If the total probability of non-connected failure patterns does not exceed a specified probability, PSO-M provides a feasible solution of link weights. Otherwise, no feasible solution can be obtained. We introduce an extended model of PSO-M, called PSO-M with link reinforcement (PSO-MLR), where links are reinforced under a budget constraint. Link reinforcement in PSO-MLR has two purposes: maintaining network connectivity and reducing the worst-case congestion ratio. Numerical results show that PSO-M offers lower worst-case congestion ratios than the start-time optimization model, where link weights are obtained against the non-failure pattern assuming that multiple link failures are possible. The superiority of PSO-M strengthens as the average node degree of the network increases. Given a fixed budget, PSO-MLR allows the worst-case congestion ratio to be varied within a specific range. PSO-MLR can support a part of non-connected failure patterns to determine link weights, and so is a valuable enhancement of PSO-M.
- Published
- 2021
27. Virtual Network Function Allocation to Maximize Continuous Available Time of Service Function Chains With Availability Schedule
- Author
Takehiro Sato, Fujun He, Rui Kang, and Eiji Oki
- Subjects
Schedule, Mathematical optimization, Computer Networks and Communications, Computer science, Function (mathematics), Maintenance engineering, Virtual machine, Electrical and Electronic Engineering, Routing (electronic design automation), Unavailability, Virtual network, Integer programming
- Abstract
This paper proposes an optimization model to derive the virtual network function (VNF) allocation of time slots in sequence aiming to maximize the continuous available time of service function chains (SFCs) in a network. The proposed model suppresses service interruptions otherwise created by the unavailability of virtual machines (VMs) and the reallocation of VNFs. The proposed model computes VNF allocation in a series of time slots based on a VM availability schedule, which provides information on the availability of each VM in each time slot. We formulate the proposed model as an integer linear programming (ILP) problem with the goal of maximizing the minimum number of longest continuous available time slots in each SFC. We prove that the decision version of the VNF allocation problem (VNFA) is NP-complete. As the size of ILP problem increases, the problem is difficult to solve in a practical time. We develop a heuristic algorithm to solve the VNFA problem. Numerical results show that the proposed model improves the continuous available time of SFCs compared with existing models, which partially consider VM unavailability or VNF reallocation. We observe that the proposed model together with a consideration of routing reduces the path length of requests. The developed heuristic algorithm is faster than the ILP approach with a limited performance penalty.
- Published
- 2021
28. Unavailability-Aware Shared Virtual Backup Allocation for Middleboxes: A Queueing Approach
- Author
Fujun He and Eiji Oki
- Subjects
Queueing theory, Computer Networks and Communications, Computer science, Middlebox, Network management, Backup, Server, Resource management, Electrical and Electronic Engineering, Unavailability, Heuristics, Computer network
- Abstract
Network function virtualization provides an efficient and flexible way to implement network functions deployed in middleboxes as software running on commodity servers. However, it brings challenges for network management, one of which is how to manage the unavailability of middleboxes. This article proposes an unavailability-aware backup allocation model with shared protection to minimize the maximum unavailability among functions. The shared protection allows multiple functions to share the backup resources, which leads to a complicated recovery mechanism and makes unavailability estimation difficult. We develop an analytical approach based on queueing theory to compute the middlebox unavailability for a given backup allocation. The heterogeneous failure, repair, recovery, and waiting procedures of functions and backup servers, which lead to several different states for each function and for the whole system, are considered in the queueing approach. We analyze the performance bounds for a given solution and for the optimal objective value. Based on the developed analytical approach and the performance bounds, we introduce two heuristics to solve the backup allocation problem. The results reveal that, compared to a baseline model, the proposed unavailability-aware model reduces the maximum unavailability by 16% on average in our examined scenarios.
- Published
- 2021
29. Data-Importance-Aware Bandwidth-Allocation Scheme for Point-Cloud Transmission in Multiple LIDAR Sensors
- Author
Takehiro Sato, Ryoichi Shinkuma, Eiji Oki, and Ryo Otsu
- Subjects
Sensor systems, Smart monitoring, point cloud compression, General Computer Science, bandwidth allocation, Computer science, Real-time computing, Point cloud, Servers, Intelligent sensors, General Materials Science, LIDAR sensor, General Engineering, Ranging, Laser radar, Transmission (telecommunications), Three-dimensional displays
- Abstract
This paper addresses bandwidth allocation to multiple light detection and ranging (LIDAR) sensors for smart monitoring, in which only a limited communication capacity is available to transmit a large volume of point-cloud data from the sensors to an edge server in real time. To deal with the limited capacity of the communication channel, we propose a bandwidth-allocation scheme that assigns multiple point-cloud compression formats to each LIDAR sensor in accordance with the spatial importance of the point-cloud data transmitted by the sensor. Spatial importance is determined by estimating how likely objects, such as cars, trucks, bikes, and pedestrians, are to exist, since regions where objects are more likely to exist are more useful for smart monitoring. A numerical study using a real point-cloud dataset obtained at an intersection indicates that the proposed scheme is superior to the benchmarks in terms of the distribution of data volumes among LIDAR sensors and the quality of the point-cloud data received by the edge server.
- Published
- 2021
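The allocation idea in entry 29 can be sketched as a greedy upgrade loop: every sensor starts with the cheapest compression format, and sensors are then upgraded to higher-rate formats in order of spatial importance while the shared uplink budget lasts. The format names, rates, importance scores, and budget below are hypothetical, and this is not the paper's allocation algorithm.

```python
# Assigning point-cloud compression formats by spatial importance (greedy sketch).
# Hypothetical formats/rates (Mb/s), scores, and budget.

FORMATS = [("raw", 40.0), ("lossless", 20.0), ("lossy-fine", 8.0), ("lossy-coarse", 3.0)]

def allocate_formats(importance, budget):
    """importance: {sensor: score}. Returns ({sensor: format}, used bandwidth)."""
    # every sensor starts at the cheapest format so that all of them transmit something
    choice = {s: len(FORMATS) - 1 for s in importance}
    used = sum(FORMATS[i][1] for i in choice.values())
    for sensor in sorted(importance, key=importance.get, reverse=True):
        # upgrade this sensor as far as the remaining budget allows
        while choice[sensor] > 0:
            upgrade = FORMATS[choice[sensor] - 1][1] - FORMATS[choice[sensor]][1]
            if used + upgrade > budget:
                break
            choice[sensor] -= 1
            used += upgrade
    return {s: FORMATS[i][0] for s, i in choice.items()}, used

if __name__ == "__main__":
    scores = {"lidar-A": 0.9, "lidar-B": 0.4, "lidar-C": 0.7}
    print(allocate_formats(scores, budget=60.0))
```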
30. Backup Network Design Against Multiple Link Failures to Avoid Link Capacity Overestimation
- Author
Fujun He, Yuki Hirano, Takehiro Sato, and Eiji Oki
- Subjects
Mathematical optimization, Computer Networks and Communications, Heuristic (computer science), Computer science, Probabilistic logic, Robust optimization, Network planning and design, Capacity planning, Backup, Simulated annealing, Electrical and Electronic Engineering, Integer programming
- Abstract
This paper proposes a backup network design scheme that can determine backup link capacity in practical time. The proposed scheme suppresses the required backup link capacity while providing a guaranteed level of recovery against multiple independent link failures. The conventional scheme is based on robust optimization and suffers from the problem of overestimating the backup link capacity. The proposed scheme addresses the overestimation problem by computing the probabilistic distribution function of required backup link capacity in polynomial time. We formulate the backup network design problem with the proposed scheme as a mixed integer linear programming problem to minimize the total required backup link capacity. We prove that the decision version of backup network design problem is NP-complete. Given that network size will continue to increase, we introduce a heuristic approach of simulated annealing to solve the same problem. Numerical results show that the proposed scheme requires less total backup link capacity than the conventional scheme based on robust optimization.
- Published
- 2020
31. Experiment and Availability Analytical Model of Cloud Computing System Based on Backup Resource Sharing and Probabilistic Protection Guarantee
- Author
Takashi Kurimoto, Fujun He, Takehiro Sato, Shigeo Urushidani, and Eiji Oki
- Subjects
Computer science, Distributed computing, cloud computing, Probabilistic logic, Availability, probabilistic protection guarantee, shared protection, Shared resource, failure recovery, Backup
- Abstract
A probabilistic protection guarantee enables a cloud provider to improve the availability of their cloud computing system in a cost-efficient manner. A backup resource allocation strategy based on the probabilistic protection guarantee reduces the total amount of required backup computation resources by allowing multiple virtual machines to share the same backup computation resources of a physical machine. There have been no experimental studies that investigate the impact of applying the probabilistic protection guarantee to a cloud computing framework in real use. This paper presents an experiment of failure recovery on the cloud computing system in which the backup computation resources are shared by multiple virtual machines. We implement a prototype cloud system by using the OpenStack framework to demonstrate the failure recovery scenario according to the backup resource allocation strategy. We develop an availability analytical model for the backup resource allocation strategy. Based on the analytical model, we present case studies which derive the availability of cloud system by using the measurement results of the experiment.
- Published
- 2020
32. Backup Resource Allocation of Virtual Machines with Two-Stage Probabilistic Protection
- Author
Kento Yokouchi, Fujun He, and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2023
33. Service Deployment with Priority Queueing for Traffic Processing and Transmission in Network Function Virtualization
- Author
Fujun He and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2023
34. Multi-Agent Deep Reinforcement Learning for Cooperative Computing Offloading and Route Optimization in Multi Cloud-Edge Networks
- Author
Akito Suzuki, Masahiro Kobayashi, and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2023
35. Service Chain Provisioning Model Considering Traffic Changes Due to Virtualized Network Functions
- Author
Shintaro Ozaki, Takehiro Sato, and Eiji Oki
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering
- Published
- 2023
36. Design of Twisted and Folded Clos Network with Guaranteeing Admissible Blocking Probability
- Author
Haruto Taka, Takeru Inoue, and Eiji Oki
- Subjects
General Medicine
- Published
- 2023
37. Optimization Model for Backup Resource Allocation in Middleboxes With Importance
- Author
Takehiro Sato, Fujun He, and Eiji Oki
- Subjects
Mathematical optimization, Computer Networks and Communications, Computer science, Heuristic (computer science), Middlebox, Approximation algorithm, Computer Science Applications, Backup, Server, Resource allocation, Resource management, Electrical and Electronic Engineering, Unavailability, Integer programming, Software
- Abstract
The network function virtualization paradigm enables us to implement network functions provided in middleboxes as software that runs on commodity servers. This paper proposes a backup resource allocation model for middleboxes that considers the failure probabilities of both network functions and backup servers. A backup server can protect several functions; a function can have multiple backup servers. We take the importance of functions into account by defining a weighted unavailability for each function. We aim to find an assignment of backup servers to functions in which the worst weighted unavailability is minimized. We formulate the proposed backup resource allocation model as a mixed integer linear programming problem. We prove that the backup resource allocation problem for middleboxes with importance is NP-complete. We develop three heuristic algorithms with polynomial time complexity to solve the problem. We analyze the approximation performances of the different heuristic algorithms and provide several lower and upper bounds. We present a comparative evaluation, in terms of deviation and computation time, of the results obtained by running the heuristic algorithms and by solving the mixed integer linear programming problem. The results show the pros and cons of the different approaches. With our analyses, a network operator can choose an appropriate approach according to the requirements of specific application scenarios.
- Published
- 2019
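A simplified reading of the weighted unavailability in entry 37: if function i fails with probability p_i, its assigned backup servers fail independently with probabilities q_s, and the function counts as unavailable only when it and all of its backups are down, then its weighted unavailability is w_i · p_i · Π q_s. The sketch below uses that reading and a simple greedy rule (give the next backup server to the currently worst function); it is an illustration, not one of the paper's three heuristics, and it assigns each server to a single function even though the model allows sharing. Probabilities and weights are hypothetical.

```python
# Greedy backup-server assignment for the worst weighted unavailability.
# Simplified reading of the model; hypothetical weights and probabilities.
from math import prod

def weighted_unavailability(weight, p_func, backup_fail_probs):
    """w * p * prod(q): the function is down and every assigned backup is down."""
    return weight * p_func * prod(backup_fail_probs)

def greedy_assign(functions, servers):
    """functions: {name: (weight, failure_prob)}; servers: {name: failure_prob}."""
    assignment = {f: [] for f in functions}

    def current(f):
        w, p = functions[f]
        return weighted_unavailability(w, p, [servers[s] for s in assignment[f]])

    for server, _ in sorted(servers.items(), key=lambda kv: kv[1]):  # best servers first
        worst = max(functions, key=current)        # help the currently worst function
        assignment[worst].append(server)
    return assignment

if __name__ == "__main__":
    funcs = {"fw": (3.0, 0.02), "nat": (1.0, 0.05), "ids": (2.0, 0.01)}
    backups = {"s1": 0.05, "s2": 0.08, "s3": 0.10}
    plan = greedy_assign(funcs, backups)
    for f, srv in plan.items():
        print(f, srv, round(weighted_unavailability(funcs[f][0], funcs[f][1],
                                                    [backups[s] for s in srv]), 6))
```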
38. A Span Power Management Scheme for Rapid Lightpath Provisioning and Releasing in Multi-Core Fiber Networks
- Author
Naoaki Yamanaka, Bijoy Chand Chatterjee, Eiji Oki, Andrea Fumagalli, and Fujun He
- Subjects
Power management, Optical amplifier, Computer Networks and Communications, Computer science, Amplifier, Provisioning, Span (engineering), Blocking (statistics), Computer Science Applications, Power (physics), Electronic engineering, Range (statistics), Electrical and Electronic Engineering, Software
- Abstract
The lightpath provisioning time or releasing time is adversely affected by the time that optical amplifiers require to adjust to a newly added or terminated signal power. This shortcoming is particularly true with multi-core erbium-doped fiber amplifiers (EDFAs), as multi-core transient-suppressed EDFAs are currently unavailable. This paper proposes a fiber span power management scheme based on dummy wavelength signals that are used to shorten the lightpath provisioning and releasing times in multi-core fiber networks. With the shorter lightpath provisioning and releasing procedures, the total time required to reserve wavelengths in the system is decreased, which means that network resources are used more efficiently. As a result, the blocking performance and average waiting time in the system are improved. To evaluate the performance of the proposed scheme, this paper introduces both an analytical model and a simulation study. In the introduced model, the ratio of the number of activating and activated dummy wavelengths to the number of dummy wavelengths in each span is considered in the range between 0 and 1. The analysis reveals that the performance of the proposed scheme depends on α, the ratio of the number of dummy wavelengths to the number of dummy and lightpath wavelengths in each span, and that there exists a value of α at which the blocking probability becomes minimum. We further observe that the proposed scheme outperforms the conventional approaches in terms of blocking probability and average waiting time as traffic loads increase. Finally, we provide directions on how the introduced model can be applied to a network with multi-span routes.
- Published
- 2019
39. Carrier-Scale Packet Processing Architecture Using Interleaved 3D-Stacked DRAM and Its Analysis
- Author
Akio Kawabata, Eiji Oki, Fujun He, and Tomohiro Korikawa
- Subjects
General Computer Science, Interleaving, Computer science, Packet processing, Communication systems, Server, Internet Protocol, Next-generation network, General Materials Science, performance analysis, Edge computing, Dynamic random-access memory, queueing analysis, General Engineering, memory architecture, Scalability, network function virtualization, Computer network
- Abstract
New network services such as the Internet of Things and edge computing are accelerating the increase in traffic volume, the number of connected devices, and the diversity of communication. Next-generation carrier network infrastructure should be much more scalable and adaptive to the rapid increase and diversification of network demand, at much lower cost. A more virtualization-aware, flexible, and inexpensive system based on general-purpose hardware is necessary to transform the traditional carrier network into a more adaptive, next-generation network. In this paper, we propose an architecture for carrier-scale packet processing that is based on interleaved three-dimensional (3D)-stacked dynamic random access memory (DRAM) devices. The proposed architecture enhances memory access concurrency by leveraging the vault-level parallelism and bank interleaving of 3D-stacked DRAM. The proposed architecture uses hash-function-based distribution of memory requests to each set of vault and bank, which stores a significant portion of the full carrier-scale tables. We introduce an analytical model of the proposed architecture for two traffic patterns: one with random memory request arrivals and one with bursty arrivals. By using the model, we calculate the performance of a typical Internet protocol routing application as a benchmark of carrier-scale packet processing in which main memory accesses are inevitable. The evaluation shows that the proposed architecture achieves around 80 Gbps for carrier-scale packet processing under both random and bursty request arrivals.
- Published
- 2019
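The interleaving idea in entry 39 boils down to spreading memory requests over many vault/bank pairs so that lookups can proceed concurrently. The toy sketch below hashes lookup keys to a (vault, bank) pair and reports how evenly synthetic route-lookup keys spread over vaults; the geometry (32 vaults, 16 banks) is hypothetical and no timing or queueing model from the paper is included.

```python
# Hash-based spreading of lookups over DRAM vaults and banks (toy illustration).
import hashlib
from collections import Counter

VAULTS, BANKS = 32, 16   # hypothetical 3D-stacked DRAM geometry

def place(key: str):
    """Map a lookup key to a (vault, bank) pair via a uniform hash."""
    digest = hashlib.blake2b(key.encode(), digest_size=8).digest()
    value = int.from_bytes(digest, "big")
    return value % VAULTS, (value // VAULTS) % BANKS

if __name__ == "__main__":
    # spread of 10,000 synthetic route-lookup keys over vaults
    load = Counter(place(f"10.0.{i % 256}.{i // 256}")[0] for i in range(10000))
    print("max/min vault load:", max(load.values()), min(load.values()))
```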
40. Participating-Domain Segmentation Based Delay-Sensitive Distributed Server Selection Scheme
- Author
Eiji Oki, Akio Kawabata, and Bijoy Chand Chatterjee
- Subjects
General Computer Science, Computer science, Real-time application, General Engineering, dividing users’ participation domain, Domain (software engineering), edge computing, distributed processing, General Materials Science, Segmentation, Delay sensitive, Algorithm, Selection (genetic algorithm)
- Abstract
This paper proposes a participating-domain-segmentation-based server selection scheme for delay-sensitive distributed communication that reduces the computational time for solving the server selection problem. The proposed scheme divides the users’ participation domain into a number of regions. The delay between a region and a server is a function of the locations of the region and the server. The distance between a region and a server is estimated by a conservative approximation. The location of a region is determined regardless of the number of users and their participation locations. The proposed scheme includes two phases. The first phase uses the server-finding process and determines the number of users accommodated from each region by each server, instead of performing the actual server selection, to reduce the computational complexity. The second phase uses the delay-improvement process and determines the overall delay and the selected server for each user. We formulate an integer linear programming problem for the server selection in the proposed scheme and evaluate the performance in terms of computation time and delay. The numerical results indicate that the computation time of the proposed scheme is smaller than that of the conventional scheme, and that the advantage of the proposed scheme grows as the number of users increases.
- Published
- 2019
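Entry 40 replaces per-user delays with per-region delays that are conservatively approximated. A much simplified sketch: cut the square participation domain into a grid, take the delay from a region to a server as the distance from the region's farthest corner (distance standing in for delay), and assign each region to the server with the smallest conservative delay. The paper instead formulates the two phases as an integer linear programming problem; the coordinates and grid size below are hypothetical.

```python
# Region-based conservative server selection (simplified sketch).
from math import hypot

def region_corners(i, j, cell):
    return [((i + di) * cell, (j + dj) * cell) for di in (0, 1) for dj in (0, 1)]

def conservative_delay(i, j, cell, server):
    """Worst distance from any corner of region (i, j) to the server."""
    return max(hypot(cx - server[0], cy - server[1]) for cx, cy in region_corners(i, j, cell))

def assign_regions(grid_n, cell, servers):
    """Pick, for every region, the server with the smallest conservative delay."""
    plan = {}
    for i in range(grid_n):
        for j in range(grid_n):
            plan[(i, j)] = min(servers, key=lambda s: conservative_delay(i, j, cell, servers[s]))
    return plan

if __name__ == "__main__":
    servers = {"tokyo": (2.0, 2.0), "osaka": (8.0, 7.0)}   # hypothetical coordinates
    plan = assign_regions(grid_n=5, cell=2.0, servers=servers)
    print(plan[(0, 0)], plan[(4, 4)])
```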
41. Virtual Network Function Placement for Service Chaining by Relaxing Visit Order and Non-Loop Constraints
- Author
Ryoichi Shinkuma, Eiji Oki, Naoki Hyodo, and Takehiro Sato
- Subjects
Optimization, General Computer Science, Computer science, Distributed computing, Network topology, integer linear programming, Software, Server, Heuristic algorithms, virtual network function, General Materials Science, service chaining, Integer programming, Virtual network, General Engineering, Chaining, Path (graph theory), network function virtualization, Relaxation (approximation)
- Abstract
Network Function Virtualization (NFV) is a paradigm that virtualizes traditional network functions and instantiates Virtual Network Functions (VNFs) as software instances separate from hardware appliances. Service Chaining (SC), seen as one of the major NFV use cases, provides customized services to users by concatenating VNFs. A VNF placement model for SC that relaxes the visit order constraints of requested VNFs has been considered. Relaxing the VNF visit order constraints reduces the number of VNFs which need to be placed in the network. However, since the model does not permit any loop within an SC path, the efficiency of utilization of computation resources deteriorates in some topologies. This paper proposes a VNF placement model for SC which minimizes the cost for placing VNFs and utilizing link capacity while allowing both relaxation of VNF visit order constraints and configuration of SC paths including loops. The proposed model determines routes of requested SC paths, which can have loops, by introducing a logical layered network generated from an original physical network. This model is formulated as an Integer Linear Programming (ILP) problem. A heuristic algorithm is introduced for the case that the ILP problem is not tractable. Simulation results show that the proposed model provides SC paths with smaller cost compared to the conventional model.
- Published
- 2019
42. Optimization Model for Designing Multiple Virtualized Campus Area Networks Coordinating With Wide Area Networks
- Author
Shigeo Urushidani, Takashi Kurimoto, and Eiji Oki
- Subjects
Computer Networks and Communications, Network security, Computer science, Distributed computing, Reliability (computer networking), Cloud computing, Wide area network, Server, Data synchronization, Electrical and Electronic Engineering, Software-defined networking, Integer programming
- Abstract
We propose an optimization model for designing multiple network functions virtualization (NFV)-based campus area networks (CANs). Organizations, such as universities and research institutions have their own campus information and communication technology equipment, but many would like to move this equipment to NFV and cloud data centers for improving reliability and resiliency. However, NFV-based CAN is not affordable for them, because costs are higher with a cloud. One solution is for multiple organizations to procure NFV and cloud data center resources together. By doing so, their individual costs of using these resources will be reduced. To make progress on this approach, there are planning issues to resolve when choosing optimal NFV and cloud data center locations. The proposed model minimizes the total network costs incurred by the organizations, including the wide area network cost and data synchronization costs for recovery from faults at data centers and the various subcampus network configurations of legacy CANs. The model is formulated and analyzed by using mixed integer linear programming. The effect of cost minimization is evaluated in a ladder network and an actual network, SINET5, and it is found that the costs can be reduced by up to 63%. The calculation times of this model under practical conditions are short and the model will be useful in practice. It is also shown that the cost of fault recovery can be suppressed. These results will encourage organizations to deploy NFV-based CANs.
- Published
- 2018
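As a rough illustration of the location-planning side of such a model, the following PuLP sketch solves a tiny facility-location MILP: it picks which data center sites to use and assigns each organization to one site so that site costs plus wide-area-network connection costs are minimized. The organizations, sites, and cost numbers are assumptions for illustration and do not come from the paper.

# Hypothetical data-center location sketch; all costs and names are illustrative.
import pulp

orgs = ["org1", "org2", "org3"]
sites = ["dc1", "dc2"]
open_cost = {"dc1": 10, "dc2": 8}                      # cost of using a data center site
wan_cost = {("org1", "dc1"): 2, ("org1", "dc2"): 5,    # WAN cost of connecting org -> site
            ("org2", "dc1"): 4, ("org2", "dc2"): 1,
            ("org3", "dc1"): 3, ("org3", "dc2"): 3}

prob = pulp.LpProblem("dc_location", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")             # site is used
x = pulp.LpVariable.dicts("assign", (orgs, sites), cat="Binary")   # org served by site

prob += (pulp.lpSum(open_cost[s] * y[s] for s in sites)
         + pulp.lpSum(wan_cost[o, s] * x[o][s] for o in orgs for s in sites))
for o in orgs:
    prob += pulp.lpSum(x[o][s] for s in sites) == 1    # each org served by exactly one site
    for s in sites:
        prob += x[o][s] <= y[s]                        # only opened sites can serve orgs
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if y[s].value() == 1])         # chosen data center locations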
43. Virtualized Network Graph Design and Embedding Model to Minimize Provisioning Cost
- Author
-
Takehiro Sato, Takashi Kurimoto, Shigeo Urushidani, and Eiji Oki
- Subjects
Computer Networks and Communications ,Electrical and Electronic Engineering - Published
- 2022
44. Heuristic Approach to Determining Cache Node Locations in Content-Centric Networks
- Author
-
Nattapong Kitsuwan, Hiroki Tahara, and Eiji Oki
- Subjects
Optimization problem ,General Computer Science ,Heuristic (computer science) ,Computer science ,business.industry ,Node (networking) ,05 social sciences ,General Engineering ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,0508 media and communications ,Shortest path problem ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Cache ,content distribution networks ,Routing (electronic design automation) ,business ,lcsh:TK1-9971 ,Integer programming ,Computer networks ,Computer network - Abstract
Internet users care about the content itself, but the current Internet treats location information as primary and therefore ties content to location. A Content-Centric Network (CCN) allows the user to obtain content without regard to its location by caching content at intermediate nodes. Content is searched along the shortest path between the user and the node that holds the original content. If a cache node lies on this shortest path, the content can be obtained from the nearest cache node, so far fewer hops are needed than in a network without cache nodes. However, this efficiency is lost if no cache node is located on the shortest path. One proposal designates cache nodes that broadcast their contents to surrounding nodes; the user can then obtain the content from a cache node, rather than from the node holding the original content, if the cache node is closer. The locations of the cache nodes therefore affect the number of hops. We formulate an optimization problem that determines the locations of cache nodes to minimize the hop count as an integer linear programming (ILP) problem. Since large ILPs cannot be solved in practical time, we introduce a betweenness centrality (BC) approach that computes the BC value of each node and ranks the nodes in descending BC order to determine cache node locations. The BC value of a node is the ratio of the number of shortest paths between source-receiver pairs passing through that node to the total number of shortest paths between source-receiver pairs. Simulations show that the BC approach drastically reduces computation time, while the average number of hops is just 5.8% higher than that obtained with the ILP approach. A short sketch of the BC ranking step follows this entry.
- Published
- 2018
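The ranking step of the BC heuristic can be sketched in a few lines with networkx; the topology and the number of cache nodes k below are illustrative assumptions, and standard betweenness centrality over all node pairs is used, whereas the paper restricts the pairs to sources and receivers.

# Hypothetical BC-ranking sketch for cache node placement; data is illustrative.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (2, 5), (5, 6), (6, 4), (3, 6)])

k = 2                                             # number of cache nodes to place
bc = nx.betweenness_centrality(G)                 # fraction of shortest paths through each node
cache_nodes = sorted(bc, key=bc.get, reverse=True)[:k]
print(cache_nodes)                                # the k most central nodes become cache nodes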
45. Cyber-Physical-Social Aware Privacy Preserving in Location-Based Service
- Author
-
Konglin Zhu, Lin Zhang, Wenke Yan, Eiji Oki, Liyang Chen, and Wenqi Zhao
- Subjects
Information privacy ,General Computer Science ,Computer science ,02 engineering and technology ,Computer security ,computer.software_genre ,symbols.namesake ,020204 information systems ,Server ,0202 electrical engineering, electronic engineering, information engineering ,CPS-aware privacy utility maximization ,privacy leakage ,General Materials Science ,Authentication ,General Engineering ,Cyber-physical system ,020206 networking & telecommunications ,location-based service ,Interpersonal ties ,Nash equilibrium ,Location-based service ,symbols ,Pairwise comparison ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,lcsh:TK1-9971 ,computer - Abstract
The privacy leakage resulting from location-based service (LBS) has become a critical issue. To preserve user privacy, many previous studies have investigated how to prevent LBS servers from stealing user privacy. However, they only consider whether peers are innocent or malicious and ignore the relationships between peers, although the relationship between each pair of users affects privacy leakage tremendously. For instance, a user is less concerned about privacy leakage to a social friend than to a stranger. In this paper, we study a cyber-physical-social (CPS) aware method for privacy preservation in the case that not only LBS servers but also every other participant in the network may be malicious. Furthermore, by exploring the physical coupling and social ties among users, we construct a CPS-aware privacy utility maximization (CPUM) game. We then study the potential Nash equilibria of the game and show that a Nash equilibrium of the CPUM game exists. Finally, we design a CPS-aware algorithm that finds the Nash equilibrium maximizing privacy utility. Extensive evaluation results show that the proposed approach reduces privacy leakage by 50% when malicious servers and users exist in the network. A generic equilibrium-search sketch follows this entry.
- Published
- 2018
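The paper's CPUM game is not reproduced here; as a generic illustration of searching for a pure Nash equilibrium, the following sketch runs best-response dynamics on a small two-player game with assumed payoff matrices.

# Generic best-response dynamics sketch; payoffs are illustrative, not the CPUM game.
import numpy as np

# payoff[i][a, b] = payoff of player i when player 0 plays a and player 1 plays b
payoff = [np.array([[3, 1], [2, 2]]),    # player 0
          np.array([[2, 1], [1, 3]])]    # player 1

a = [0, 0]                               # initial strategy profile
for _ in range(20):                      # iterate best responses until no one deviates
    changed = False
    for i in (0, 1):
        opp = a[1 - i]
        own_payoffs = payoff[i][:, opp] if i == 0 else payoff[i][opp, :]
        best = int(np.argmax(own_payoffs))
        if best != a[i]:
            a[i], changed = best, True
    if not changed:
        break                            # profile is a pure Nash equilibrium
print(a)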
46. Fragmentation Problems and Management Approaches in Elastic Optical Networks: A Survey
- Author
-
Eiji Oki, Bijoy Chand Chatterjee, and Seydou Ba
- Subjects
Computer science ,Node (networking) ,Context (language use) ,02 engineering and technology ,Blocking (statistics) ,Market fragmentation ,Frequency allocation ,020210 optoelectronics & photonics ,Risk analysis (engineering) ,0202 electrical engineering, electronic engineering, information engineering ,Bandwidth (computing) ,Resource management ,Approaches of management ,Electrical and Electronic Engineering - Abstract
Bandwidth fragmentation, a serious issue in elastic optical networks (EONs), can be suppressed by proper management in order to increase the traffic demands that can be accommodated. In this context, an in-depth survey is needed that covers bandwidth fragmentation problems and how to suppress them. This survey paper starts with the basic concept of EONs and their unique properties, and then moves to the fragmentation problem in EONs. We discuss and analyze the major conventional spectrum allocation policies in terms of their fragmentation effect in a network. Taxonomies of fragmentation management approaches are presented along with different node architectures. Subsequently, this paper reviews state-of-the-art fragmentation management approaches. Next, we evaluate and analyze the major fragmentation management approaches in terms of blocking probability. Finally, we address the research challenges and open issues on the fragmentation problem in EONs that should be tackled in future research. A small sketch of a fragmentation metric and first-fit allocation follows this entry.
- Published
- 2018
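As a concrete illustration of the quantities such a survey discusses, the sketch below applies first-fit spectrum assignment to a slot bitmap and reports a simple external-fragmentation metric (one minus the largest free block over the total free slots); the metric choice and link state are illustrative assumptions, not definitions taken from the survey.

# Hypothetical first-fit and fragmentation-metric sketch; link state is illustrative.
def free_blocks(slots):
    """Return lengths of maximal runs of free (False) slots."""
    blocks, run = [], 0
    for used in slots + [True]:          # sentinel closes the final run
        if used:
            if run:
                blocks.append(run)
            run = 0
        else:
            run += 1
    return blocks

def external_fragmentation(slots):
    blocks = free_blocks(slots)
    total_free = sum(blocks)
    return 0.0 if total_free == 0 else 1.0 - max(blocks) / total_free

def first_fit(slots, demand):
    """Allocate `demand` contiguous slots at the lowest-indexed fit, if any."""
    run_start, run = 0, 0
    for i, used in enumerate(slots):
        if used:
            run_start, run = i + 1, 0
        else:
            run += 1
            if run == demand:
                for j in range(run_start, run_start + demand):
                    slots[j] = True
                return run_start
    return None                          # blocked: no contiguous block large enough

link = [True, False, False, True, False, False, False, True, False, False]
print(external_fragmentation(link))      # fragmentation before allocation
print(first_fit(link, 3))                # allocates slots 4-6
print(external_fragmentation(link))      # fragmentation after allocation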
47. Defragmentation Scheme Based on Exchanging Primary and Backup Paths in 1+1 Path Protected Elastic Optical Networks
- Author
-
Seydou Ba, Eiji Oki, and Bijoy Chand Chatterjee
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Path protection ,Distributed computing ,Fragmentation (computing) ,02 engineering and technology ,Computer Science Applications ,020210 optoelectronics & photonics ,Backup ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Defragmentation ,business ,Software ,Computer network - Abstract
In elastic optical networks (EONs), a major obstacle to efficient use of spectrum resources is spectrum fragmentation. Several defragmentation approaches have been presented in the literature. For 1+1 path protection, conventional defragmentation approaches work with designated primary and backup paths; the spectrum therefore remains exposed to fragmentation induced by the primary lightpaths, which must not be disturbed if defragmentation is to be hitless. This paper proposes a defragmentation scheme that uses path exchanging in 1+1 path protected EONs. We exchange the roles of the two paths of a 1+1 protected connection: the primary is toggled to the backup state while the backup becomes the primary. Each lightpath can then be reallocated while it serves as the backup, offering hitless defragmentation. Considering path exchanging, we define a static spectrum reallocation optimization problem that minimizes spectrum fragmentation while limiting the number of path exchanging and reallocation operations. We formulate the problem as an integer linear programming (ILP) problem and prove that a decision version of the defined static reallocation problem is NP-complete. We also present a spectrum defragmentation process for dynamic traffic and introduce a heuristic algorithm for cases where the ILP problem is not tractable. Simulation results show that the proposed scheme outperforms the conventional one and improves the total admissible traffic by up to 10%. A minimal role-exchange sketch follows this entry.
- Published
- 2017
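A minimal sketch of the role-exchange mechanism, assuming a simple per-connection record of primary and backup slot allocations; the data layout and the retune target are illustrative, not the paper's ILP.

# Hypothetical 1+1 role-exchange sketch; slot indices are illustrative.
from dataclasses import dataclass, field

@dataclass
class OnePlusOneConnection:
    primary_slots: list = field(default_factory=list)   # slot indices on path A
    backup_slots: list = field(default_factory=list)    # slot indices on path B

    def exchange_roles(self):
        """Toggle the primary to the backup state and vice versa."""
        self.primary_slots, self.backup_slots = self.backup_slots, self.primary_slots

    def retune_backup(self, new_slots):
        """Reallocate the backup path; hitless because traffic rides the primary."""
        self.backup_slots = list(new_slots)

conn = OnePlusOneConnection(primary_slots=[10, 11, 12], backup_slots=[40, 41, 42])
conn.exchange_roles()            # traffic now rides the former backup (slots 40-42)
conn.retune_backup([0, 1, 2])    # former primary moved toward the lower spectrum edge
conn.exchange_roles()            # swap back; the connection has been defragmented hitlessly
print(conn.primary_slots, conn.backup_slots)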
48. Service Mapping and Scheduling with Uncertain Processing Time in Network Function Virtualization
- Author
-
Yuncan Zhang, Fujun He, and Eiji Oki
- Subjects
Computer Networks and Communications ,Hardware and Architecture ,Software ,Computer Science Applications ,Information Systems - Published
- 2021
49. Multiuser MIMO Transmission Aided by Massive One-Bit Magnitude Measurements
- Author
-
Shengchu Wang, Eiji Oki, Lin Zhang, Jing Wang, and Yunzhou Li
- Subjects
3G MIMO ,Coherence time ,Computer science ,Applied Mathematics ,05 social sciences ,MIMO ,Real-time computing ,Detector ,050801 communication & media studies ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Multi-user MIMO ,Multiplexing ,Computer Science Applications ,Spatial multiplexing ,0508 media and communications ,Transmission (telecommunications) ,0202 electrical engineering, electronic engineering, information engineering ,Electronic engineering ,Baseband ,Electrical and Electronic Engineering ,Computer Science::Information Theory ,Communication channel - Abstract
This paper proposes a multiuser MIMO system with both full measurements and one-bit magnitude observations, acquired respectively by several linear inphase-and-quadrature (IQ) structured radio frequency (RF) chains and massive one-bit envelope chains. The total circuit power and cost are not increased significantly, since the added one-bit envelope chains have low power and low cost. Channel side information for the one-bit envelope chains is acquired by sharing the IQ-structured RF chains and executing a channel calibration operation. Two multiuser detectors are constructed, based on semidefinite relaxation (SDR) and approximate message passing (AMP). The one-bit magnitudes are interpreted as inequality constraints in the SDR detector and exploited in a Bayesian manner by the AMP detector. Simulation results show that the one-bit magnitude measurements bring high MIMO multiplexing and diversity gains and decrease the transmission power. As the channel coherence time increases, it becomes preferable to equip more one-bit envelope chains, and one-bit magnitude-aided MIMO becomes increasingly spectral- and energy-efficient compared with conventional MIMO. A small sketch of the assumed measurement model follows this entry.
- Published
- 2016
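The sketch below sets up, under assumed dimensions and an assumed comparator threshold, the kind of observation data such a receiver would collect: a few full I/Q measurements plus many one-bit magnitude bits. The SDR and AMP detectors themselves are not reproduced here.

# Hypothetical measurement-model sketch; dimensions, noise, and threshold are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_full, n_env = 4, 2, 64        # users, full IQ chains, one-bit envelope chains

x = (rng.standard_normal(n_users) + 1j * rng.standard_normal(n_users)) / np.sqrt(2)
H_full = (rng.standard_normal((n_full, n_users)) +
          1j * rng.standard_normal((n_full, n_users))) / np.sqrt(2)
H_env = (rng.standard_normal((n_env, n_users)) +
         1j * rng.standard_normal((n_env, n_users))) / np.sqrt(2)
noise = lambda m: 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

y_full = H_full @ x + noise(n_full)                        # full complex-valued measurements
threshold = 1.0                                            # envelope comparator level (assumed)
b = np.sign(np.abs(H_env @ x + noise(n_env)) - threshold)  # +/-1 magnitude bits

# A detector such as SDR or AMP would combine y_full with the inequality
# information |h_i x| >< threshold encoded in b; here we just inspect the data.
print(y_full.shape, b[:8])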
50. A Green and Robust Optimization Strategy for Energy Saving Against Traffic Uncertainty
- Author
-
Eiji Oki and Ihsen Aziz Ouédraogo
- Subjects
Mathematical optimization ,Computer Networks and Communications ,Computer science ,Quality of service ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,05 social sciences ,Robust optimization ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,Energy consumption ,0508 media and communications ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Integer programming ,Traffic generation model - Abstract
This paper introduces a green and robust optimization scheme based on the hose model with bound of link traffic (HLT) to achieve power savings in networks with traffic uncertainty. Most current studies on green communications are based on estimates of the real traffic matrix; however, predicting the traffic matrix is a difficult task for network operators, and such models may not be fully applicable when traffic fluctuates frequently. With HLT, exact traffic information is not required: traffic is specified by the total outgoing and incoming amount at each node and by the total traffic going through each link. We formulate the problem as a mixed integer linear programming (MILP) problem whose objective is to reduce the flow through each link so that links can be put into sleep mode. We develop a heuristic to mitigate the limitations of the MILP formulation. Simulation results show that green HLT, while being robust to traffic uncertainty, achieves power efficiency comparable to models that require exact traffic information. A toy sleep-link MILP sketch follows this entry.
- Published
- 2016
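As a toy illustration of the energy-saving objective (not the HLT formulation itself), the following PuLP sketch routes a single hose-bounded aggregate demand while minimizing the number of links that must stay awake; the topology, capacities, and demand value are assumptions.

# Hypothetical sleep-link MILP sketch; all network data below is illustrative.
import pulp

edges = {("s", "a"): 10, ("a", "t"): 10, ("s", "b"): 10, ("b", "t"): 10, ("a", "b"): 10}
nodes = {"s", "a", "b", "t"}
demand, src, dst = 8, "s", "t"           # worst-case (hose-bounded) traffic between s and t

prob = pulp.LpProblem("green_routing", pulp.LpMinimize)
f = pulp.LpVariable.dicts("flow", edges, lowBound=0)       # flow on each directed link
y = pulp.LpVariable.dicts("awake", edges, cat="Binary")    # link kept awake or put to sleep

prob += pulp.lpSum(y[e] for e in edges)                    # minimize the number of awake links
for e, cap in edges.items():
    prob += f[e] <= cap * y[e]                             # flow only on awake links, within capacity
for n in nodes:
    inflow = pulp.lpSum(f[e] for e in edges if e[1] == n)
    outflow = pulp.lpSum(f[e] for e in edges if e[0] == n)
    rhs = demand if n == src else (-demand if n == dst else 0)
    prob += outflow - inflow == rhs                        # flow conservation

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([e for e in edges if y[e].value() == 1])             # links that must stay awake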