387 results for "Sheng, Zhou"
Search Results
2. Efficient Medical Image Segmentation Based on Knowledge Distillation
- Author
-
Sheng Zhou, Xin Shen, Zhe Liu, Jiajun Bu, Dian Qin, Hui-Fen Dai, Jing-Jun Gu, Zhi-Hua Wang, and Lei Wu
- Subjects
Computational complexity theory, Computer science, Computer Vision and Pattern Recognition (cs.CV), Inference, Convolutional neural network, Software portability, Image Processing, Computer-Assisted, Medical imaging, Humans, Segmentation, Electrical and Electronic Engineering, Radiological and Ultrasound Technology, Image and Video Processing (eess.IV), Image segmentation, Semantics, Computer Science Applications, Neural Networks, Computer, Data mining, Software
- Abstract
Recent advances have been made in applying convolutional neural networks to achieve more precise prediction results for medical image segmentation problems. However, the success of existing methods has relied heavily on huge computational complexity and massive storage, which is impractical in real-world scenarios. To deal with this problem, we propose an efficient architecture that distills knowledge from well-trained medical image segmentation networks to train a lightweight network. This architecture empowers the lightweight network to achieve a significant improvement in segmentation capability while retaining its runtime efficiency. We further devise a novel distillation module tailored for medical image segmentation to transfer semantic region information from the teacher to the student network. It forces the student network to mimic the extent of difference between representations calculated from different tissue regions. This module avoids the ambiguous boundary problem encountered in medical imaging and instead encodes the internal information of each semantic region for transfer. Benefiting from our module, the lightweight network achieves an improvement of up to 32.6% in our experiments while maintaining its portability in the inference phase. The entire structure has been verified on two widely accepted public CT datasets, LiTS17 and KiTS19. We demonstrate that a lightweight network distilled by our method has non-negligible value in scenarios that require relatively high operating speed and low storage usage. (Accepted by IEEE TMI; code available.)
- Published
- 2021
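The abstract above describes a distillation module that makes the student mimic the differences between representations of different tissue regions. A minimal, hypothetical sketch of that idea (not the paper's code): average per-pixel features within each region, then penalize mismatches in the inter-region differences between teacher and student.

```python
# Hypothetical sketch of a region-difference distillation loss: average the
# per-pixel features within each semantic region, then match the pairwise
# differences between region means of teacher and student. Illustrative only;
# feature values, labels, and function names are invented for this example.

def region_means(features, labels, num_regions):
    """Mean feature value per region (features, labels: flat lists)."""
    sums = [0.0] * num_regions
    counts = [0] * num_regions
    for f, r in zip(features, labels):
        sums[r] += f
        counts[r] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def region_difference_loss(teacher_feat, student_feat, labels, num_regions):
    """Squared mismatch of inter-region differences between teacher and student."""
    t = region_means(teacher_feat, labels, num_regions)
    s = region_means(student_feat, labels, num_regions)
    loss = 0.0
    for i in range(num_regions):
        for j in range(i + 1, num_regions):
            loss += ((t[i] - t[j]) - (s[i] - s[j])) ** 2
    return loss

labels = [0, 0, 1, 1, 2, 2]
teacher = [1.0, 1.0, 3.0, 3.0, 6.0, 6.0]
student = [0.5, 0.5, 2.5, 2.5, 5.5, 5.5]   # shifted, but same inter-region gaps
print(region_difference_loss(teacher, student, labels, 3))  # → 0.0
```

Note how a uniformly shifted student incurs zero loss: only the relative structure between regions is transferred, which sidesteps absolute boundary values.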
3. Greening 6G
- Author
-
Sheng Zhou, Zhisheng Niu, and Noel Crespi
- Subjects
New horizons, Greening, Computer science, Earth science
- Published
- 2021
4. Guest Editorial Special Issue on Age of Information and Data Semantics for Sensing, Communication, and Control Co-Design in IoT
- Author
-
Nikolaos Pappas, Luiz A. DaSilva, Zhiyuan Jiang, Anthony Ephremides, and Sheng Zhou
- Subjects
Information Age, Computer Networks and Communications, Computer science, Wireless network, Control (management), Automation, Computer Science Applications, Hardware and Architecture, Signal Processing, Wireless, Quality (business), Internet of Things, 5G, Information Systems, Computer network
- Abstract
A typical Internet-of-Things (IoT) system consists of three major layers: 1) sensing; 2) communication; and 3) application (i.e., actuation and control). The co-design of these layers has been studied for over two decades, dating back to the concept of communication, computing, and control (3C) convergence in the 1990s. Nowadays, with the emergence of wireless-networked machine-type applications, such as connected autonomous driving and factory automation, this co-design is more urgently desired than ever to meet their stringent quality-of-service requirements. To realize this goal, the 5G wireless network of today has mainly focused on the communication part and strived to reliably achieve low air-interface communication delay, i.e., ultra-reliable and low-latency communications (uRLLC). However, more and more wireless communications in IoT are based on status updates instead of general content delivery. The current uRLLC design is insufficient to characterize status update quality and is thus unable to optimize for timely status updates with constrained wireless resources. Therefore, the performance of computing and control in IoT networks that rely heavily on wireless communications is suboptimal.
- Published
- 2021
5. Coded Computation Over Heterogeneous Workers With Random Task Arrivals
- Author
-
Fan Zhang, Yuxuan Sun, and Sheng Zhou
- Subjects
Mathematical optimization, Computer science, Modeling and Simulation, Server, Convex optimization, Task analysis, Approximation algorithm, Electrical and Electronic Engineering, Online algorithm, Assignment problem, Computer Science Applications, Task (project management), Scheduling (computing)
- Abstract
Considering the scheduling and allocation of tasks among multiple servers, distributed machine learning faces the straggler effect as well as system heterogeneity: for example, the computation time of the slowest worker can be much longer than that of the normal workers. This letter studies the distributed online task assignment problem under heterogeneous conditions, where different workers have different computing capacities, in order to minimize the task completion time. We consider task scheduling with random task arrivals and introduce a task-cancellation-after-completion scheme that clears the unfinished parts of a task once it completes, further reducing redundant computation. To address the challenge of finding the optimal solution, we propose an approximate online algorithm based on convex optimization and time recursion. Simulation results show that the proposed algorithm can reduce the completion delay by over 30% compared with the one-shot counterpart, and maintains a relatively stable delay in the case of fluctuating arrival rates.
- Published
- 2021
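The entry above concerns assigning a task across workers with unequal computing capacities. A toy sketch of the underlying load-balancing intuition, with made-up rates (the paper's online algorithm, random arrivals, and cancellation scheme are not modeled here): splitting a divisible task in proportion to each worker's rate makes all workers finish together, which minimizes completion time.

```python
# Hypothetical sketch: proportional splitting of a divisible task over
# heterogeneous workers so that all workers finish at the same time.
# Rates and task size are illustrative.

def proportional_split(task_size, rates):
    """Assign each worker a share proportional to its computing rate."""
    total_rate = sum(rates)
    return [task_size * r / total_rate for r in rates]

def completion_time(shares, rates):
    """The slowest worker determines when the task is done."""
    return max(s / r for s, r in zip(shares, rates))

rates = [1.0, 2.0, 4.0]                # heterogeneous computing capacities
shares = proportional_split(7.0, rates)
print(shares)                           # → [1.0, 2.0, 4.0]
print(completion_time(shares, rates))   # → 1.0
```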
6. Profit maximization for competitive social advertising
- Author
-
Jiawei Chen, Chun Chen, Yan Feng, Sheng Zhou, Yanhao Huang, Can Wang, Deshi Ye, and Qihao Shi
- Subjects
General Computer Science, Computer science, Profit maximization, Social platform, Advertising, Theoretical Computer Science, Effective algorithm, Set (abstract data type), Host (network)
- Abstract
In social advertising, the social platform host may run marketing campaigns for multiple competing clients simultaneously. In this case, each client comes with a budget and an influence spread requirement. The host runs campaigns by allocating a set of seed nodes to each client. If the influence spread triggered by a seed set meets the requirement, the host earns the budget from the corresponding client. In this paper, we study the Profit Maximization problem, considering that different seeds incur different costs. Given that all clients' requirements are met, we aim to find the optimal seed allocation with minimum cost. Under the competitive K-LT propagation model, we show that the Profit Maximization problem is NP-hard and NP-hard to approximate within any factor. To find a feasible solution, we propose an effective algorithm that iteratively selects a candidate set and obtains an approximate allocation. Experimental results on a real-world dataset validate the effectiveness of the proposed methods.
- Published
- 2021
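The abstract above iteratively selects seeds to meet an influence requirement at minimum cost. A hedged, simplified sketch of one common greedy heuristic for this kind of problem (the paper's algorithm under the competitive K-LT model is more involved; here influence gains are just plain numbers): repeatedly pick the seed with the best influence-per-cost ratio.

```python
# Hypothetical greedy sketch: pick seeds by (influence gain / cost) until the
# influence requirement is met. Candidate names, gains, and costs are invented.

def greedy_allocate(candidates, requirement):
    """candidates: {seed: (influence_gain, cost)}; returns (chosen, total_cost)."""
    remaining = dict(candidates)
    chosen, spread, cost = [], 0.0, 0.0
    while spread < requirement and remaining:
        # best influence-per-cost ratio among unused candidates
        seed = max(remaining, key=lambda s: remaining[s][0] / remaining[s][1])
        gain, c = remaining.pop(seed)
        chosen.append(seed)
        spread += gain
        cost += c
    return chosen, cost

cands = {"a": (10.0, 2.0), "b": (6.0, 3.0), "c": (8.0, 1.0)}
print(greedy_allocate(cands, 15.0))  # → (['c', 'a'], 3.0)
```

Ratio-greedy selection is a standard baseline for coverage-style problems; the abstract's hardness result (NP-hard to approximate within any factor) is exactly why such heuristics, rather than approximation guarantees, are the practical route.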
7. Age-Optimal Scheduling for Heterogeneous Traffic With Timely Throughput Constraints
- Author
-
Sheng Zhou, Zhiyuan Jiang, Lehan Wang, Jingzhou Sun, and Zhisheng Niu
- Subjects
Mathematical optimization, Job shop scheduling, Computer Networks and Communications, Computer science, Upper and lower bounds, Scheduling (computing), Constraint (information theory), Base station, Metric (mathematics), Electrical and Electronic Engineering, Throughput (business), Weighted arithmetic mean
- Abstract
We consider a base station supporting two types of traffic: status update traffic and timely throughput traffic. The goal is to improve the information freshness of the status update traffic while satisfying timely throughput constraints. Age of Information (AoI) is adopted as the metric for information freshness. We first propose an age-aware policy that makes scheduling decisions based directly on the current value of AoI. Given the timely throughput constraint, an upper bound on the weighted average AoI under this policy is provided. To evaluate policy performance, it is important to obtain the minimum weighted average AoI achievable under the timely throughput constraint. A low-complexity method is proposed to estimate a lower bound on this value. Furthermore, inspired by the estimation procedure, we design an age-oblivious policy that does not rely on the current AoI to make scheduling decisions. Surprisingly, simulation results show that the weighted average AoI of the age-oblivious policy is comparable to that of the age-aware policy, and both are close to the lower bound.
- Published
- 2021
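Several entries in this list use the Age of Information metric. A minimal sketch of how AoI is computed (update records and times below are illustrative): the age at time t is t minus the generation time of the freshest update delivered so far.

```python
# Minimal AoI sketch: age at time t is t minus the generation time of the
# freshest delivered update. The age here is measured from t = 0 when nothing
# has been delivered yet; the update records are invented for illustration.

def age_at(t, updates):
    """updates: list of (generation_time, delivery_time) pairs."""
    freshest = 0.0
    for gen, dlv in updates:
        if dlv <= t:
            freshest = max(freshest, gen)
    return t - freshest

updates = [(0.0, 1.0), (2.0, 3.0)]
print(age_at(0.5, updates))  # → 0.5 (nothing delivered yet)
print(age_at(2.5, updates))  # → 2.5 (freshest delivered update was generated at 0)
print(age_at(4.0, updates))  # → 2.0 (the update generated at 2 arrived at 3)
```

This is the sawtooth age process underlying the policies in the abstract: age grows linearly and drops to the delivery delay of each newly received update.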
8. Error Analysis for Status Update From Sensors With Temporally and Spatially Correlated Observations
- Author
-
Shugong Xu, Sheng Zhou, Zhiyuan Jiang, and Heng Zhang
- Subjects
Queueing theory, Random field, Exponential distribution, Wireless network, Computer science, Applied Mathematics, Real-time computing, Computer Science Applications, Wireless, Electrical and Electronic Engineering, Random variable, Wireless sensor network
- Abstract
This paper studies status update performance in wireless sensor networks when the status, describing the physical reality being sensed, is temporally and spatially correlated. The status is modeled as a time-varying Gauss-Markov Random Field (GMRF), whereby the estimation error of status updates at the fusion center is analyzed. The transmission latency introduced by the wireless network is modeled as an exponentially distributed random variable. We extend existing queueing analysis results for Age of Information (AoI) with uncorrelated sources to GMRFs in the considered scenario. Closed-form expressions for the average remote estimation error are obtained for both one- and two-dimensional GMRFs, assuming an exponential time-correlation function, both First-Come First-Served (FCFS) and Last-Come First-Served (LCFS) service disciplines, and a single wireless link. The analytical results are then extended to scenarios wherein multi-packet reception, i.e., multiple concurrent wireless links, is enabled; the difficulty of analyzing obsolete updates in this case is addressed by leveraging a reasonable approximation, validated by theoretical analysis in the regime where the number of sensors far exceeds that of wireless links. Monte Carlo simulation results are also presented and agree with our theoretical analysis. Based on the results, optimal temporal and spatial sampling rates (e.g., sensor density) can be obtained, providing helpful guidance for wireless sensor deployment.
- Published
- 2021
9. Distributed Task Replication for Vehicular Edge Computing: Performance Analysis and Learning-Based Algorithm
- Author
-
Sheng Zhou, Yuxuan Sun, and Zhisheng Niu
- Subjects
Networking and Internet Architecture (cs.NI), Information Theory (cs.IT), Computer science, Applied Mathematics, Cloud computing, Replication (computing), Computer Science Applications, Task (project management), Task analysis, Benchmark (computing), Wireless, Electrical and Electronic Engineering, Algorithm, Edge computing
- Abstract
In a vehicular edge computing (VEC) system, vehicles can share their surplus computation resources to provide cloud computing services. The highly dynamic environment of the vehicular network makes it challenging to guarantee the task offloading delay. To this end, we introduce task replication to the VEC system, where the replicas of a task are offloaded to multiple vehicles at the same time, and the task is completed upon the first response among the replicas. First, the impact of the number of task replicas on the offloading delay is characterized, and the optimal number of task replicas is approximated in closed form. Based on the analytical result, we design a learning-based task replication algorithm (LTRA) with combinatorial multi-armed bandit theory, which works in a distributed manner and can automatically adapt itself to the dynamics of the VEC system. A realistic traffic scenario is used to evaluate the delay performance of the proposed algorithm. Results show that, under our simulation settings, LTRA with an optimized number of task replicas can reduce the average offloading delay by over 30% compared to the benchmark without task replication, and at the same time can improve the task completion ratio from 97% to 99.6%. (Submitted to IEEE for possible publication.)
- Published
- 2021
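The abstract above builds on combinatorial multi-armed bandit theory. A hedged single-vehicle sketch of the bandit idea (the paper's LTRA selects multiple replicas per round, a combinatorial extension not modeled here; delays and counts are invented): a UCB-style rule favors vehicles with small observed delay or little data.

```python
# Hypothetical UCB-style selection sketch: pick the vehicle whose observed
# mean delay minus an exploration bonus is smallest. Unexplored vehicles are
# tried first. Illustrative of the bandit idea only, not the paper's LTRA.
import math

def ucb_select(mean_delay, counts, t):
    """Lower-confidence-bound choice over vehicles; t is the round index."""
    def score(i):
        if counts[i] == 0:
            return float("-inf")   # always try an unexplored vehicle first
        return mean_delay[i] - math.sqrt(2 * math.log(t) / counts[i])
    return min(range(len(counts)), key=score)

# After equal exploration, vehicle 1 has the smallest observed delay.
mean_delay = [0.9, 0.3, 0.7]
counts = [5, 5, 5]
print(ucb_select(mean_delay, counts, 16))  # → 1
```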
10. Interpretation of the Protocol for Prevention and Control of COVID-19 in China (Edition 8)
- Author
-
Zhaorui Chang, Zijian Feng, Sheng Zhou, Yanping Zhang, Fengfeng Liu, Zhongjie Li, Liping Wang, Hui Chen, George F. Gao, Mengjie Geng, Lu Ran, and Canjun Zheng
- Subjects
Protocol (science), Coronavirus disease 2019 (COVID-19), Computer science, Prevention, Interpretation (philosophy), COVID-19, Policy Notes, Policy, Control, General Agricultural and Biological Sciences, Software engineering, China
- Published
- 2021
11. Energy-Efficient Massive MIMO With Decentralized Precoder Design
- Author
-
Sheng Zhou, Hangguan Shan, Yu Cheng, Lin Cai, Bo Yin, Shuai Zhang, and Zhisheng Niu
- Subjects
Mathematical optimization, Optimization problem, Computer Networks and Communications, Computer science, MIMO, Aerospace Engineering, Throughput, Precoding, Rate of convergence, Automotive Engineering, Telecommunications link, Overhead (computing), Quadratic programming, Electrical and Electronic Engineering, Power control, Efficient energy use
- Abstract
This paper presents an energy-efficient downlink precoding scheme for a multi-cell massive MIMO system. We formulate the precoder design problem to maximize the system energy efficiency by jointly considering power control, interference management, antenna switching, and user throughput in a cluster of base stations. This is computationally difficult, as it requires solving a sparsity-inducing non-convex optimization problem, which is NP-hard. To alleviate the solution complexity, a stochastic smooth approximation of the zero-norm is first applied in the antenna power management to enable fast, gradient-based algorithms. For efficient convergence, we develop a novel optimization algorithm combining an augmented multiplier (AM) method and quadratic programming (QP), and show how this scheme permits decentralized implementation by offloading parts of the computation to the individual base stations to reduce communication overhead. We provide theoretical proof that the proposed algorithm converges both locally and globally under realistic assumptions. Numerical results confirm that our method achieves higher energy efficiency with a superior convergence rate compared with existing methods, and illustrate the relationship between energy efficiency and system design parameters.
- Published
- 2020
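The abstract above replaces the non-differentiable zero-norm with a smooth approximation so gradient methods apply. One common smooth surrogate (the paper uses a stochastic smooth approximation; this particular function is illustrative, not the paper's) is the Gaussian form ||x||_0 ≈ Σ (1 − exp(−x_i²/σ²)), which approaches the true nonzero count as σ → 0.

```python
# Hedged sketch of a smooth zero-norm surrogate for sparsity-inducing
# objectives: sum_i (1 - exp(-x_i^2 / sigma^2)). Differentiable everywhere,
# and tight for small sigma. Illustrative; not the paper's exact function.
import math

def smooth_l0(x, sigma):
    return sum(1.0 - math.exp(-(xi * xi) / (sigma * sigma)) for xi in x)

x = [0.0, 0.0, 1.0, 2.0]            # true zero-norm (nonzero count) is 2
print(round(smooth_l0(x, 0.1), 6))   # → 2.0 (tight for small sigma)
print(smooth_l0(x, 10.0))            # much smaller for large sigma
```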
12. Edge Learning with Timeliness Constraints: Challenges and Solutions
- Author
-
Zhisheng Niu, Sheng Zhou, Xiufeng Huang, Yuxuan Sun, and Wenqi Shi
- Subjects
Networking and Internet Architecture (cs.NI), Computer Networks and Communications, Computer science, Distributed computing, Inference, Computer Science Applications, Data modeling, Scheduling (computing), Task analysis, Resource management, Augmented reality, Enhanced Data Rates for GSM Evolution, Pruning (decision trees), Electrical and Electronic Engineering
- Abstract
Future machine learning (ML) powered applications, such as autonomous driving and augmented reality, involve training and inference tasks with timeliness requirements and are communication and computation intensive, which calls for the edge learning framework. The real-time requirements drive us to go beyond accuracy for ML. In this article, we introduce the concept of timely edge learning, aiming to achieve accurate training and inference while minimizing communication and computation delay. We discuss key challenges and propose corresponding solutions from the data, model, and resource management perspectives to meet the timeliness requirements. In particular, for edge training, we argue that the total training delay rather than the number of rounds should be considered, and we propose data and model compression, together with joint device scheduling and resource management schemes, for both centralized training and federated learning systems. For edge inference, we explore the dependency between accuracy and delay for communication and computation, and propose dynamic data compression and flexible pruning schemes. Two case studies show that the timeliness performance, including the training accuracy under a given delay budget and the completion ratio of inference tasks within deadline, is highly improved with the proposed solutions. (7 pages, 5 figures; accepted by IEEE Communications Magazine.)
- Published
- 2020
13. Analytical solution for the stress field of hierarchical defects: multiscale framework and applications
- Author
-
Baijian Wu, Zhaoxia Li, and Sheng Zhou
- Subjects
Coalescence (physics), Stress field, Partial differential equation, Interactive effects, Mechanics of Materials, Computer science, Applied Mathematics, Mechanical Engineering, Conformal map, Statistical physics, Stress concentration
- Abstract
Hierarchical defects are defined as adjacent defects at different length scales; the stress field distributions at the two scales are interrelated. Based on the complex variable method and conformal mapping, a multiscale framework for solving problems of hierarchical defects is formulated. The separated representations of the mapping function, the governing equations of the potentials, and the stress field are subsequently obtained. The proposed multiscale framework can be used to solve a variety of simplified engineering problems. A case in point is the analytical solution of a macroscopic elliptic hole with a microscopic circular edge defect. The results indicate that the microscopic defect aggravates the stress concentration at the macroscopic defect and likely leads to global propagation and rupture. Multiple micro-defects have interactive effects on the distribution of the stress field. The level of stress concentration may be reduced by the coalescence of micro-defects. This work provides a unified method to analytically investigate the influence of edge micro-defects within the scope of a multiscale hierarchy. The formulated multiscale approach can also potentially be applied to materials with hierarchical defects, such as additively manufactured and bio-inspired materials.
- Published
- 2020
14. Guest editorial: Time-critical communication and computation for intelligent vehicular networks
- Author
-
Shanzhi Chen, Shan Zhang, Tommy Svensson, and Sheng Zhou
- Subjects
Vehicular ad hoc network, Computer Networks and Communications, Computer science, Computation, Reliability (computer networking), Latency (audio), Time critical, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Intelligent transportation system, 5G, Computer network
- Abstract
Vehicular networks are expected to empower automated driving and intelligent transportation via vehicle-to-everything (V2X) communications and edge/cloud-assisted computation; meanwhile, Cellular V2X (C-V2X) is gaining wide support from the global industrial ecosystem. The 5G NR-V2X technology is the evolution of LTE-V2X and is expected to provide ultra-reliable and low-latency communications (uRLLC) with 1 ms latency and 99.999% reliability. Nevertheless, vehicular networks still face great challenges in supporting many emerging time-critical applications, which comprise sensing, communication, and computation as closed loops.
- Published
- 2021
15. Dynamic Compression Ratio Selection for Edge Inference Systems With Hard Deadlines
- Author
-
Sheng Zhou and Xiufeng Huang
- Subjects
Machine Learning (cs.LG), Information Theory (cs.IT), Correctness, Computer Networks and Communications, Network packet, Computer science, Retransmission, Real-time computing, Inference, Computer Science Applications, Hardware and Architecture, Server, Signal Processing, Task analysis, Information Systems
- Abstract
Implementing machine learning algorithms on Internet-of-Things (IoT) devices has become essential for emerging applications such as autonomous driving and environment monitoring. However, the limitations on computation capability and energy consumption make it difficult to run complex machine learning algorithms on IoT devices, especially when latency deadlines exist. One solution is to offload the computation-intensive tasks to the edge server. However, wireless uploading of the raw data is time consuming and may lead to deadline violations. To reduce the communication cost, lossy data compression can be exploited for inference tasks, but it may yield more erroneous inference results. In this paper, we propose a dynamic compression ratio selection scheme for edge inference systems with hard deadlines. The key idea is to balance the tradeoff between communication cost and inference accuracy. By dynamically selecting the optimal compression ratio given the remaining deadline budgets of queued tasks, more tasks can be completed in time with correct inference under limited communication resources. Furthermore, information augmentation, which retransmits less-compressed data of tasks with erroneous inference results, is proposed to enhance accuracy. Since it is often hard to know the correctness of an inference result, we use uncertainty to estimate its confidence and, based on that, jointly optimize information augmentation and compression ratio selection. Lastly, considering wireless transmission errors, we further design a retransmission scheme to reduce performance degradation due to packet losses. Simulation results show the performance of the proposed schemes under different deadlines and task arrival rates. (11 pages, 14 figures.)
- Published
- 2020
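The abstract above selects a compression ratio from the remaining deadline budget. A hedged toy version of that rule (the paper's scheme jointly optimizes over queued tasks and accuracy; ratios, sizes, and rates below are made up): among ratios whose upload time fits the deadline, choose the least aggressive one, since lighter compression tends to preserve accuracy.

```python
# Hypothetical deadline-driven compression selection: pick the smallest
# feasible compression factor; if nothing fits, compress as hard as possible.
# All numbers are illustrative.

def pick_ratio(data_bits, rate_bps, deadline_s, ratios):
    """ratios: candidate compression factors, e.g. 2 means half the bits."""
    feasible = [r for r in ratios if (data_bits / r) / rate_bps <= deadline_s]
    return min(feasible) if feasible else max(ratios)

print(pick_ratio(1e6, 1e6, 0.6, [1, 2, 4, 8]))  # → 2 (1.0 s raw upload misses deadline)
print(pick_ratio(1e6, 1e6, 2.0, [1, 2, 4, 8]))  # → 1 (raw upload fits)
```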
16. SENATE: A Permissionless Byzantine Consensus Protocol in Wireless Networks for Real-Time Internet-of-Things Applications
- Author
-
Bhaskar Krishnamachari, Sheng Zhou, Zhiyuan Jiang, Zhisheng Niu, and Zixu Cao
- Subjects
Consensus algorithm, Computer Networks and Communications, Computer science, Population, Throughput, Node (computer science), Wireless, Vulnerability (computing), Wireless network, Computer Science Applications, Hardware and Architecture, Proof-of-work system, Signal Processing, State (computer science), Information Systems, Computer network
- Abstract
The blockchain technology has achieved tremendous success in open (permissionless) decentralized consensus by employing Proof of Work (PoW) or its variants, whereby unauthorized nodes cannot gain a disproportionate impact on consensus beyond their computational power. However, PoW-based systems incur high delay and low throughput, making them ineffective for real-time Internet-of-Things (IoT) applications. On the other hand, the Byzantine fault-tolerant (BFT) consensus algorithms with better delay and throughput performance cannot be employed in permissionless settings due to their vulnerability to Sybil attacks. In this article, we present a Sybil-proof wireless network coordinate-based Byzantine consensus (SENATE), which has the merits of both real-time consensus reaching and Sybil-proofness, i.e., it is based on the conventional BFT consensus framework yet works in open systems of wireless devices where faulty nodes may launch Sybil attacks. As in a senate in a legislature, where the quota of senators per state (district) is constant irrespective of the population of the state, "senators" in SENATE are selected from participating distributed nodes based on their wireless network coordinates (WNCs), with a fixed number of nodes per district in the WNC space. Elected senators then participate in the subsequent consensus-reaching process and broadcast the result. Thereby, SENATE is proof against Sybil attacks, since the pseudonyms of a faulty node are likely to be adjacent in the WNC space and hence fail to be elected. Simulation results reveal that SENATE can achieve real-time consensus (consensus delay under one second) in a network of hundreds of nodes.
- Published
- 2020
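The fixed-quota election described above is the crux of SENATE's Sybil resistance. A heavily simplified, hypothetical 1-D sketch (the paper works in a multi-dimensional WNC space with its own election rule; coordinates, districting, and the tie-break are invented): electing a constant number of senators per district means pseudonyms crowding one district cannot win extra seats.

```python
# Hypothetical sketch of fixed-quota, district-based election in a 1-D
# coordinate space. A Sybil node's pseudonyms cluster near its true position,
# so they land in the same district and win at most `quota` seats in total.

def elect(coords, num_districts, quota, span=1.0):
    """coords: {node: position in [0, span)}; elect `quota` nodes per district."""
    width = span / num_districts
    senators = []
    for d in range(num_districts):
        members = sorted(n for n, x in coords.items()
                         if d * width <= x < (d + 1) * width)
        senators.extend(members[:quota])   # deterministic pick per district
    return senators

coords = {"a": 0.05, "s1": 0.10, "s2": 0.11, "s3": 0.12, "b": 0.6, "c": 0.9}
# Pseudonyms s1..s3 crowd into district 0 but cannot win more than one seat.
print(elect(coords, 2, 1))  # → ['a', 'b']
```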
17. SFC-Based Service Provisioning for Reconfigurable Space-Air-Ground Integrated Networks
- Author
-
Guangchao Wang, Sheng Zhou, Shan Zhang, Zhisheng Niu, and Xuemin Shen
- Subjects
Dynamic network analysis, Computer Networks and Communications, Wireless network, Computer science, Quality of service, Distributed computing, Scheduling (computing), Resource management, Electrical and Electronic Engineering, Greedy algorithm, Virtual network, Heterogeneous network
- Abstract
Space-air-ground integrated networks (SAGIN) extend the capability of wireless networks and will be an essential building block for many advanced applications, such as autonomous driving and earth monitoring. However, coordinating heterogeneous physical resources is very challenging in such a large-scale dynamic network. In this paper, we propose a reconfigurable service provisioning framework based on service function chaining (SFC) for SAGIN. In SFC, network functions are virtualized and the service data needs to flow through specific network functions in a predefined sequence. The inherent issue is how to plan the service function chains over large-scale heterogeneous networks, subject to the resource limitations of both communication and computation. Specifically, we must jointly consider virtual network function (VNF) embedding and service data routing. We formulate the SFC planning problem as an integer non-linear programming problem, which is NP-hard. Then, a heuristic greedy algorithm is proposed, which concentrates on leveraging the different features of aerial and ground nodes and balancing resource consumption. Furthermore, a new metric, the aggregation ratio (AR), is proposed to elaborate the communication-computation tradeoff. Extensive simulations show that our proposed algorithm achieves near-optimal performance. We also find that the SAGIN significantly reduces the service blockage probability and improves the efficiency of resource utilization. Finally, a case study on multi-intersection traffic scheduling is provided to demonstrate the effectiveness of our proposed SFC-based service provisioning framework.
- Published
- 2020
18. Energy-optimal and delay-bounded computation offloading in mobile edge computing with heterogeneous clouds
- Author
-
Xueying Guo, Sheng Zhou, Tianchu Zhao, Zhisheng Niu, Zhiyuan Jiang, and Linqi Song
- Subjects
Mobile edge computing, Computer Networks and Communications, Computer science, Distributed computing, Approximation algorithm, Energy consumption, Task (computing), Computation offloading, Wireless, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Mobile device
- Abstract
With Mobile Edge Computing (MEC), computation-intensive tasks are offloaded from mobile devices to cloud servers, and thus the energy consumption of mobile devices can be notably reduced. In this paper, we study task offloading in multi-user MEC systems with heterogeneous clouds, including edge clouds and remote clouds. Tasks are forwarded from mobile devices to edge clouds via wireless channels, and they can be further forwarded to remote clouds via the Internet. Our objective is to minimize the total energy consumption of multiple mobile devices, subject to bounded-delay requirements of tasks. Based on dynamic programming, we propose an algorithm that minimizes the energy consumption by jointly allocating bandwidth and computational resources to mobile devices. The algorithm is of pseudo-polynomial complexity. To further reduce the complexity, we propose an approximation algorithm with energy discretization, and its total energy consumption is proved to be within a bounded gap from the optimum. Simulation results show that nearly 82.7% of the energy of mobile devices can be saved by task offloading compared with execution on the mobile devices.
- Published
- 2020
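The abstract above minimizes energy subject to a delay bound. A toy sketch of the per-task decision at the heart of such problems (the paper's joint bandwidth/computation allocation via dynamic programming is far richer; all numbers and option names here are invented): among execution options with (energy, delay) pairs, take the least-energy option that meets the bound.

```python
# Toy sketch of energy-minimal, delay-bounded option selection for one task.
# Options ("local", "edge", "remote") and their (energy_J, delay_s) pairs are
# illustrative, not from the paper.

def cheapest_feasible(options, delay_bound):
    """options: {name: (energy_J, delay_s)}; least energy within the bound."""
    feasible = {k: v for k, v in options.items() if v[1] <= delay_bound}
    return min(feasible, key=lambda k: feasible[k][0]) if feasible else None

options = {
    "local":  (5.0, 0.2),   # fast but energy-hungry
    "edge":   (1.0, 0.5),   # cheap, moderate delay
    "remote": (0.5, 1.5),   # cheapest, slow round trip
}
print(cheapest_feasible(options, 1.0))  # → 'edge'
print(cheapest_feasible(options, 2.0))  # → 'remote'
print(cheapest_feasible(options, 0.1))  # → None (no option meets the bound)
```

Tightening the delay bound pushes the choice toward faster, more energy-hungry options, which is exactly the energy-delay tradeoff the abstract quantifies.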
19. RETRACTED ARTICLE: Design and implementation of bank CRM system based on decision tree algorithm
- Author
-
Caixia Chen, Liwei Geng, and Sheng Zhou
- Subjects
Operations research, Computer science, Decision tree learning, Decision tree, Customer relationship management, Artificial Intelligence, Scalability, Information system, Office automation, Customer satisfaction, Data preprocessing, Basic needs, Software
- Abstract
With the rapid development of the national economy, the level of informatization and office automation in banks has gradually increased. At the same time, with the development of bank management concepts, traditional bank customer relationship management (CRM) methods have been unable to meet the basic needs of bank development. The purpose of this paper is to design and implement a bank CRM system based on a decision tree algorithm. This paper uses decision tree techniques, data preprocessing, and other technologies for data mining. Based on the mining results, a customer relationship management system is designed with scalability and maintainability in mind. Finally, the system is implemented and functionally tested. The experimental results show that the system can fully exploit the consumption demands and habits of existing customers and improve customer satisfaction. In addition, it can adapt to the complex banking information system environment and, with sufficient computing power and high accuracy, can provide valuable information for bank decision makers. The data mining performance of the system was tested, and processing 1 million records took 650 s, indicating excellent runtime performance.
- Published
- 2020
20. DGE: Deep Generative Network Embedding Based on Commonality and Individuality
- Author
-
Jiajun Bu, Xin Wang, Martin Ester, Pinggang Yu, Jiawei Chen, Qihao Shi, Sheng Zhou, and Can Wang
- Subjects
Theoretical computer science ,Computer science ,Node (networking) ,Bayesian probability ,02 engineering and technology ,General Medicine ,Space (commercial competition) ,Network topology ,Variety (cybernetics) ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Embedding ,020201 artificial intelligence & image processing ,Generative grammar ,Network analysis - Abstract
Network embedding plays a crucial role in network analysis by providing effective representations for a variety of learning tasks. Existing attributed network embedding methods mainly focus on preserving the observed node attributes and network topology in the latent embedding space, under the assumption that nodes connected through edges will share similar attributes. However, our empirical analysis of real-world datasets shows that there exist both commonality and individuality between node attributes and network topology. On the one hand, similar nodes are expected to share similar attributes and have edges connecting them (commonality). On the other hand, each information source may maintain individual differences as well (individuality). Simultaneously capturing commonality and individuality is very challenging due to their exclusive nature, and existing methods fail to do so. In this paper, we propose a deep generative embedding (DGE) framework which simultaneously captures commonality and individuality between network topology and node attributes in a generative process. Stochastic gradient variational Bayes (SGVB) optimization is employed to infer the model parameters as well as the node embeddings. Extensive experiments on four real-world datasets show the superiority of our proposed DGE framework in various tasks including node classification and link prediction.
- Published
- 2020
21. Fractional Dynamic Caching: A Collaborative Design of Storage and Backhaul
- Author
-
Liumeng Wang and Sheng Zhou
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Aerospace Engineering ,Backhaul (telecommunications) ,Base station ,Automotive Engineering ,Data_FILES ,Cache ,Collaborative design ,Electrical and Electronic Engineering ,Dynamic storage ,business ,Computer network - Abstract
To reduce the average file delivery time under limited backhaul bandwidth and cache storage, we propose a fractional dynamic caching scheme that coordinates the utilization of the backhaul and the cache storage. In the proposed scheme, only part of each file is pre-fetched and stored in the static storage segment of small base stations (SBSs), while the remaining part of the file is fetched into the dynamic storage segment when requested by users. We aim to minimize the average file delivery time by investigating the optimal pre-fetching policy, i.e., the size of the pre-fetched part of each file and the size of the dynamic storage. We formulate the file delivery time minimization problem as a convex optimization problem. We derive the optimal pre-fetching policy in closed form when the wireless rate is constant. Then, a heuristic pre-fetching policy is derived using the conditional average wireless rate. To overcome the issue that file popularity may be unknown or time-varying, we further provide an approximation of the heuristic pre-fetching policy that does not depend on the file popularity. We also discuss the potential application of the proposed caching scheme in scenarios with heterogeneous file sizes. Numerical results show that the proposed scheme can substantially improve the backhaul utilization efficiency and reduce the average file delivery time. The heuristic pre-fetching policy performs similarly to the optimal pre-fetching policy yet with much lower complexity.
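The convex trade-off between backhaul load and cache storage can be sketched with a toy delivery-time model. The cost functions below are invented for illustration (they are not the paper's formulation); ternary search then finds the optimal pre-fetched fraction of the unimodal objective:

```python
def ternary_search_min(f, lo, hi, iters=200):
    """Minimize a unimodal (e.g. convex) function f on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Toy model (made up): fetching the non-prefetched part (1 - p) loads the
# backhaul, while prefetching p consumes scarce static storage; both costs
# are convex in the pre-fetched fraction p.
backhaul_cost = lambda p: (1 - p) ** 2 / 0.5   # limited backhaul bandwidth
storage_cost = lambda p: 2.0 * p ** 2          # limited cache storage
delivery_time = lambda p: backhaul_cost(p) + storage_cost(p)

p_star = ternary_search_min(delivery_time, 0.0, 1.0)
print(round(p_star, 3))  # 0.5: balance the two convex costs
```

With these symmetric quadratic costs the optimum is pre-fetching half of each file; the paper instead derives the closed-form optimum of its own convex formulation.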
- Published
- 2020
22. Retracted Article: The role of computer security management in preventing financial technology risks
- Author
-
Sheng Zhou, Caixia Chen, and Qingqing Chang
- Subjects
business.industry ,Computer science ,020206 networking & telecommunications ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,FinTech ,Risk analysis (engineering) ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Risk prevention ,business ,Software ,Financial services - Abstract
With the continuous development and progress of society, computers have begun to be applied in various fields, especially in the financial industry. However, while developing, it will also bring ce...
- Published
- 2020
23. Closed-Form Whittle’s Index-Enabled Random Access for Timely Status Update
- Author
-
Bhaskar Krishnamachari, Zhiyuan Jiang, Jingzhou Sun, Zhisheng Niu, and Sheng Zhou
- Subjects
060102 archaeology ,Job shop scheduling ,Computer science ,Wireless network ,business.industry ,Network packet ,020206 networking & telecommunications ,06 humanities and the arts ,02 engineering and technology ,Scheduling (computing) ,Bernoulli's principle ,0202 electrical engineering, electronic engineering, information engineering ,Wireless ,0601 history and archaeology ,Electrical and Electronic Engineering ,business ,Random access ,Computer network ,Communication channel - Abstract
We consider a star-topology wireless network for status updates, where a central node collects status data from a large number of distributed machine-type terminals that share a wireless medium. The Age of Information (AoI) minimization scheduling problem is formulated as a restless multi-armed bandit. A widely proven near-optimal solution, the Whittle index, is derived in closed form and the corresponding indexability is established. The index is then generalized to incorporate stochastic, periodic packet arrivals and unreliable channels. Inspired by the index scheduling policies, which achieve near-optimal AoI but require heavy signaling overhead, a contention-based random access scheme, namely Index-Prioritized Random Access (IPRA), is further proposed. Under IPRA, terminals that are not urgent to update, as indicated by their indices, are barred access to the wireless medium, thus improving access timeliness. Computer-based simulations show that IPRA's performance is close to the optimal AoI in this setting and outperforms standard random access schemes. Also, for applications with hard AoI deadlines, we provide an analysis of reliable deadline guarantees. Closed-form achievable AoI stationary distributions under Bernoulli packet arrivals are derived, so that AoI deadlines can be met with high reliability by calculating the maximum number of supportable terminals and allocating system resources proportionally.
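The intuition behind index-prioritized access, letting the terminal with the most outdated status transmit, can be illustrated with a toy AoI simulation. This is a generic max-age sketch under idealized assumptions (reliable channel, one update per slot), not the paper's Whittle index derivation:

```python
import random

def simulate_aoi(num_terminals, slots, policy, seed=0):
    """Average Age of Information when one terminal updates per slot.

    policy(ages, rng) returns the terminal allowed to transmit; a
    reliable channel and generate-at-will sources are assumed.
    """
    rng = random.Random(seed)
    ages = [1] * num_terminals
    total = 0
    for _ in range(slots):
        chosen = policy(ages, rng)
        # The chosen terminal's age resets; every other age grows by one.
        ages = [1 if i == chosen else a + 1 for i, a in enumerate(ages)]
        total += sum(ages)
    return total / (slots * num_terminals)

max_age = lambda ages, rng: ages.index(max(ages))     # index-style priority
uniform = lambda ages, rng: rng.randrange(len(ages))  # plain random access

aoi_index = simulate_aoi(8, 10000, max_age)
aoi_random = simulate_aoi(8, 10000, uniform)
print(aoi_index, aoi_random)  # prioritizing stale terminals lowers AoI
```

In this symmetric setting the max-age rule degenerates to round-robin with average AoI (N + 1)/2, while unprioritized random access roughly doubles it.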
- Published
- 2020
24. Flexible Functional Split and Power Control for Energy Harvesting Cloud Radio Access Networks
- Author
-
Sheng Zhou and Liumeng Wang
- Subjects
Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,business.industry ,Computer science ,Heuristic (computer science) ,Computer Science - Information Theory ,Information Theory (cs.IT) ,Applied Mathematics ,Distributed computing ,020206 networking & telecommunications ,Cloud computing ,Throughput ,02 engineering and technology ,Grid ,Computer Science Applications ,Renewable energy ,Computer Science - Networking and Internet Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Wireless ,Markov decision process ,Electrical and Electronic Engineering ,business ,Throughput (business) ,Energy (signal processing) ,Power control - Abstract
Functional split is a promising technique to flexibly balance the processing cost at remote ends and the fronthaul rate in cloud radio access networks (C-RAN). By harvesting renewable energy, remote radio units (RRUs) can save grid power and be flexibly deployed. However, the randomness of energy arrivals poses a major design challenge. To maximize the throughput under the average fronthaul rate constraint in C-RAN with renewable-powered RRUs, we first study the offline problem of selecting the optimal functional split modes and the corresponding durations, jointly with the transmission power. We find that between successive energy arrivals, at most two functional split modes should be selected. Then the optimal online problem is formulated as a Markov decision process (MDP). To deal with the curse of dimensionality in solving the MDP, we further analyze the special case with one instance of energy arrival and two candidate functional split modes, as inspired by the offline solution, and then a heuristic online policy is proposed. Numerical results show that with flexible functional split, the throughput can be significantly improved compared with fixed functional split. Also, the proposed heuristic online policy performs similarly to the optimal online one, as validated by simulations., Comment: Accepted by IEEE Trans. Wireless Commun.
- Published
- 2020
25. Near-Optimal MIMO-SCMA Uplink Detection With Low-Complexity Expectation Propagation
- Author
-
Shouyi Yin, Pan Wang, Guiqiang Peng, Sheng Zhou, Shaojun Wei, and Leibo Liu
- Subjects
Computer science ,Applied Mathematics ,MIMO ,Initialization ,020206 networking & telecommunications ,02 engineering and technology ,Spectral efficiency ,Computer Science Applications ,QR decomposition ,symbols.namesake ,Robustness (computer science) ,Expectation propagation ,Telecommunications link ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Electrical and Electronic Engineering ,Rayleigh scattering ,Algorithm ,Factor graph ,Computer Science::Information Theory ,Communication channel - Abstract
Multiple-input multiple-output (MIMO) and sparse code multiple access (SCMA) can be combined to achieve higher spectral efficiency and more user access, which also introduces more difficulties in signal detection. This paper explores low-complexity and low-latency iterative algorithms for soft symbol detection in an uplink MIMO-SCMA system over Rayleigh flat-fading channels. An expectation propagation framework (EPA) based on the extended factor graph is developed for MIMO-SCMA with multi-antenna users. A new initialization method is proposed to accelerate convergence. Moreover, an SC-EPA with lower complexity is proposed by introducing QR decomposition and RE-cluster-based decentralized factor node (FN) processing. Furthermore, new approaches for message passing between variable nodes (VNs) and FNs are proposed to improve the parallelism and reduce the complexity of the algorithm. The complexity of the SC-EPA scales linearly with the constellation size and is independent of the number of receive antennas, without any performance penalty. The robustness of the proposed algorithm in imperfect channels is evaluated, and the state evolution (SE) of the SC-EPA is derived. The link-level simulation results demonstrate that the EPA and SC-EPA receivers can achieve nearly the same performance as state-of-the-art methods but with much lower complexity.
- Published
- 2020
26. A 2.92-Gb/s/W and 0.43-Gb/s/MG Flexible and Scalable CGRA-Based Baseband Processor for Massive MIMO Detection
- Author
-
Sheng Zhou, Shouyi Yin, Guiqiang Peng, Shaojun Wei, and Leibo Liu
- Subjects
Computer science ,020208 electrical & electronic engineering ,Fast Fourier transform ,MIMO ,Context (language use) ,Systolic array ,02 engineering and technology ,Chip ,Computational science ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,Baseband ,Electrical and Electronic Engineering ,Baseband processor - Abstract
Communication systems’ development requires service customization in aspects such as standards, multiple-input multiple-output (MIMO) scales, and algorithms. The existing hardware designs for massive MIMO detection have difficulty in achieving both high flexibility and scalability with high hardware efficiency. This article proposes a baseband processor based on a dynamic coarse-grained reconfigurable array (CGRA) for massive MIMO detection. To efficiently support various algorithm features and requirements, three optimization techniques are proposed to achieve high flexibility and scalability. First, an on-demand matrix–vector systolic array is proposed to enable flexible and scalable matrix and vector operations, reducing memory accesses by 82%. Second, distributed multi-interaction data storage is designed for flexible data access and reusability. Finally, a continuable adaptive context information format is proposed to support different bit widths, operations, and extensions of MIMO systems, reducing context information by 67%. These techniques achieve improvements of 1.33×, 1.34×, and 1.29× in energy efficiency and 1.21×, 1.18×, and 1.18× in area efficiency, evaluated by removing one technique at a time from the proposed architecture. Fabricated in a 28-nm CMOS technology, the chip achieves high flexibility and scalability in supporting various detection algorithms; various MIMO scales, such as 4×4, 32×32, and 128×8; and baseband processing tasks, such as filtering and fast Fourier transformation. When benchmarked on various detection algorithms, the processor achieves 1.64–2.92-Gb/s/W energy efficiency and 0.25–0.43-Gb/s/MG area efficiency, which are 2.78–28.54× and 2.05–14.43× those of state-of-the-art programmable designs, respectively. To our knowledge, this is the first flexible and scalable CGRA-based baseband processor for massive MIMO detection.
- Published
- 2020
27. Image Target Detection Algorithm of Smart City Management Cases
- Author
-
Kedun Mao, Ping Tan, and Sheng Zhou
- Subjects
Smart city ,Offset (computer science) ,General Computer Science ,Computer science ,target detection ,algorithm research ,General Engineering ,Mode (statistics) ,Image processing ,Filter (signal processing) ,Image (mathematics) ,accuracy comparison ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Noise (video) ,Graphics ,lcsh:TK1-9971 ,Algorithm - Abstract
With the rapid development and wide application of Internet technology, the concept of the “smart city” has attracted attention and promotion. It is an inevitable way to strengthen the breadth and depth of urban services and to move from digital toward intelligent applications. The purpose of this study is to further help small and medium-sized cities in China improve the “smart city” management mode and promote the harmonious development of society, which has important reference and practical significance. Against the background of the smart city, this paper studies different image target detection algorithms in depth. The infrared target detection algorithm suppresses the background by means of a high-pass filter and uses the correlation coefficient between features as the fusion weight while performing weighted grey synthesis, area, and centroid offset. The hyperspectral target detection algorithm extracts content indicators from the initial data and finally optimizes the algorithm. The mean filtering algorithm can reduce the effect of noise by preprocessing the image. The HOG-based target detection algorithm describes the features of object surface edges in areas such as graphics and image processing, and calculates the distribution of features along the gradient direction of a particular part of the image. These algorithms have their own advantages and characteristics. The experimental results show that the precision and recall of the infrared target detection algorithm after feature aggregation are higher than those of the other algorithms: the precision is 6.3% higher and the recall 5.4% higher than those of the original infrared image algorithm. The value m of the principal-vector dimension affects the accuracy of target detection.
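The mean-filtering preprocessing step mentioned above is standard; a minimal 3×3 mean filter in pure Python (an illustrative sketch, not the paper's implementation) might look like:

```python
def mean_filter(img):
    """3x3 mean filter; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(window) / 9.0
    return out

# A flat image with one noisy pixel: the spike is averaged down.
noisy = [[10] * 5 for _ in range(5)]
noisy[2][2] = 100
smooth = mean_filter(noisy)
print(smooth[2][2])  # 20.0: the spike is spread over the 3x3 window
```

Averaging suppresses isolated impulse noise at the cost of blurring edges, which is why it is used here only as a preprocessing step before detection.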
- Published
- 2020
28. Fast Recognition Method of Football Robot’s Graphics From the VR Perspective
- Author
-
Liang Wang, Ying Liu, Jie Zhang, Sheng Zhou, Zhen Bai, and Yuan Cao
- Subjects
General Computer Science ,Computer science ,business.industry ,Machine vision ,Perspective (graphical) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Football ,Shadow ,Robot ,Data verification ,General Materials Science ,Color filter array ,Computer vision ,Artificial intelligence ,Graphics ,business - Abstract
The purpose of this article is to identify the football and related environmental variables through its VR images, given the current situation where the vision system has become the only way for football robots to perceive the external environment, so as to improve the chances of winning the game. First, this article uses color filters to enhance the VR football data to distinguish between shadow games and aliens. The best environment for enhancing the image is automatically determined by the Otsu method, so that the image is affected by shadows as little as possible and the outline of the image can be sealed. At the same time, using the humanoid medium-sized football game machine system as the platform, the relevant processing algorithms of the humanoid football robot front-view system are studied to realize color image segmentation, edge extraction, straight-line extraction, cross-line recognition and goal post recognition. The PA-SIFT algorithm is used to quickly recognize the graphics. Data verification results show that the recognition rate of the PA-SIFT algorithm can reach 96%, ensuring the real-time performance and feasibility of the algorithm. In addition, the divide-and-conquer algorithm is combined with the related processing algorithms of the vision system to determine the central area of the image, so that the algorithm is not affected by the external environment, is robust, and can improve performance in actual competition.
- Published
- 2020
29. A ROBUST APPROACH FOR THREE-DIMENSIONAL REAL-TIME TARGET LOCALIZATION UNDER AMBIGUOUS WALL PARAMETERS
- Author
-
Cheng Xu, Sheng Zhou, Hua-Mei Zhang, and Jiao Jie Zhang
- Subjects
Computer science ,Condensed Matter Physics ,Electronic, Optical and Magnetic Materials - Published
- 2020
30. A REAL-TIME AUTOMATIC METHOD FOR TARGET LOCATING UNDER UNKNOWN WALL CHARACTERISTICS IN THROUGH-WALL IMAGING
- Author
-
Cheng Xu, Sheng Zhou, Hua-Mei Zhang, and Ye-Rong Zhang
- Subjects
Computer science ,business.industry ,Computer vision ,Artificial intelligence ,Through wall imaging ,Condensed Matter Physics ,business ,Electronic, Optical and Magnetic Materials - Published
- 2020
31. Energy- and Area-Efficient Recursive-Conjugate-Gradient-Based MMSE Detector for Massive MIMO Systems
- Author
-
Guiqiang Peng, Qiushi Wei, Leibo Liu, Shouyi Yin, Pan Wang, Sheng Zhou, and Shaojun Wei
- Subjects
Computational complexity theory ,Computer science ,MIMO ,Detector ,020206 networking & telecommunications ,02 engineering and technology ,Chip ,Parallel processing (DSP implementation) ,Conjugate gradient method ,Likelihood-ratio test ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Algorithm ,Throughput (business) ,Energy (signal processing) - Abstract
Minimum-mean-square-error (MMSE) detection is increasingly relevant for massive multiple-input multiple-output (MIMO) systems. MMSE suffers from high computational complexity and low parallelism because of the increasing number of users and antennas in massive MIMO systems. This paper proposes a recursive conjugate gradient (RCG) method to iteratively estimate signals. First, a recursive conjugate gradient detection algorithm is proposed that achieves high parallelism and low complexity through iteration. Second, a quadrant-certain-based initialization method that improves detection accuracy without added complexity is proposed. Third, an approximated log-likelihood ratio (LLR) computation method is proposed to simplify the calculation. The analyses show that, compared with related methods, the proposed RCG algorithm reduces computational complexity and exploits the potential parallelism. RCG is mathematically demonstrated to achieve a low approximation error. Based on the RCG method, an architecture is proposed for a 128 × 8 64-QAM massive MIMO system. First, a parallel processing element array with single-sided input is adopted; this array eliminates the throughput limitation. Second, a deeply pipelined user-level method based on the recursive conjugate gradient method is proposed. Third, an approximated architecture is proposed to compute the soft output. The architecture is verified on an FPGA and fabricated on 1.87 × 1.87 mm² silicon with TSMC 65-nm CMOS technology. The chip achieves 2.69 Mbps/mW energy efficiency (throughput/power) and 1.09 Mbps/kG area efficiency (throughput/area), which are 2.39 to 10.60× and 1.15 to 8.81× those of the normalized state-of-the-art designs.
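The plain conjugate-gradient iteration that the RCG method builds on solves a symmetric positive-definite system such as the MMSE normal equations (H^T H + σ²I)x = H^T y. A real-valued toy sketch (not the paper's recursive variant; the matrices are invented stand-ins):

```python
def conjugate_gradient(A, b, iters=20, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = b[:]          # residual b - A x, with x = 0 initially
    p = r[:]          # first search direction
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy MMSE-style system: Gram matrix plus noise regularization.
A = [[4.0, 1.0], [1.0, 3.0]]   # stands in for H^T H + sigma^2 I
b = [1.0, 2.0]                 # stands in for H^T y
x = conjugate_gradient(A, b)
print(x)  # ~[0.0909, 0.6364], the solution of A x = b
```

For an n-dimensional SPD system, CG converges in at most n iterations in exact arithmetic, which is what makes it attractive for hardware MMSE detectors with many users.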
- Published
- 2020
32. Deep Learning–Based Coverage and Capacity Optimization
- Author
-
Sheng Zhou, Zhisheng Niu, Zhiyuan Jiang, Andrei Marinescu, and Luiz A. DaSilva
- Subjects
Set (abstract data type) ,Capacity optimization ,Artificial neural network ,Wireless network ,business.industry ,Computer science ,Distributed computing ,Deep learning ,Inference ,Reinforcement learning ,Artificial intelligence ,business ,Domain (software engineering) - Abstract
This chapter presents two state‐of‐the‐art machine learning (ML)‐based techniques that tackle the coverage and capacity optimization (CCO) problem from each of its main aspects: configuring base‐station parameters to address current demand through a deep neural network architecture, where suitable configuration actions are taken on the basis of inference from current network user geometry information; and enabling base‐station sleeping via a data‐driven approach using deep reinforcement learning (RL), which leverages network traffic models to address the non‐stationarity in real‐world traffic. The chapter introduces a set of widely used ML techniques and provides an overview of their application to CCO problems in the wireless network domain. It then describes the use and the results achieved by the deep RL approach in solving the problem of base‐station sleeping. The chapter also presents the application and evaluation of the multi‐agent deep neural network framework on the dynamic frequency reuse problem in mobile networks.
- Published
- 2019
33. IEEE Transactions on Cognitive Communications and Networking
- Author
-
Sheng Zhou, Yan Liu, Luiz A. DaSilva, Zhisheng Niu, Zhiyuan Jiang, and Jernej Hribar
- Subjects
Technology ,Computer Networks and Communications ,business.industry ,Network packet ,Computer science ,multi-agent reinforcement learning ,Quality of service ,Distributed computing ,Access control ,medium access control ,contention-based random access ,Internet-of-Things ,Artificial Intelligence ,Hardware and Architecture ,Robustness (computer science) ,Scalability ,Convergence (routing) ,Telecommunications ,State space ,Reinforcement learning ,MULTIPLE-ACCESS ,business ,WIRELESS ,Markov decision process - Abstract
In future wireless systems, latency of information needs to be minimized to satisfy the requirements of many mission-critical applications. Meanwhile, not all terminals carry equally-urgent packets given their distinct situations, e.g., status freshness. Leveraging this feature, we propose an on-demand Medium Access Control (MAC) scheme, whereby each terminal transmits with dynamically adjusted aggressiveness based on its situations which are modeled as Markov states. A Multi-Agent Reinforcement Learning (MARL) framework is utilized and each agent is trained with a Deep Deterministic Policy Gradient (DDPG) network. A notorious issue for MARL is slow and non-scalable convergence – to address this, a new Situationally-aware MARL-based Transmissions (SMART) scheme is proposed. It is shown that SMART can significantly shorten the convergence time and the converged performance is also dramatically improved compared with state-of-the-art DDPG-based MARL schemes, at the expense of an additional offline training stage. SMART also outperforms conventional MAC schemes significantly, e.g., Carrier Sensing and Multiple Access (CSMA), in terms of average and peak Age of Information (AoI). In addition, SMART also has the advantage of versatility – different Quality-of-Service (QoS) metrics and hence various state space definitions are tested in extensive simulations, where SMART shows robustness and scalability in all considered scenarios.
- Published
- 2021
34. Device Scheduling and Resource Allocation for Federated Learning under Delay and Energy Constraints
- Author
-
Yuxuan Sun, Zhisheng Niu, Wenqi Shi, and Sheng Zhou
- Subjects
Bandwidth allocation ,Computer science ,Wireless network ,Distributed computing ,Resource allocation ,Enhanced Data Rates for GSM Evolution ,Maximization ,Energy consumption ,Mobile device ,Scheduling (computing) - Abstract
Federated Learning (FL) is an emerging technique to enhance edge intelligence, where mobile devices collaboratively train machine learning models with their local data. Limited energy on devices and scarce wireless bandwidth can notably impact the convergence of FL over wireless networks, and thus device scheduling and resource allocation are critical. In this paper, we propose a joint device scheduling and resource allocation scheme to maximize the model accuracy under total training delay and device energy budgets. Since FL consists of multiple training rounds, there is an inherent trade-off between per-round delay, per-round energy consumption, and the total number of rounds. To find a solution, we decouple the accuracy maximization problem into two sub-problems. First, given a scheduling policy, the bandwidth allocation and local computing frequency are jointly optimized to maximize the number of rounds that can be conducted. Then, a device scheduling policy is proposed to balance the trade-off between the per-round energy and delay cost and the number of rounds, with the ultimate goal of accuracy optimization. Experiments on various learning tasks and datasets show that the proposed scheme can greatly improve the convergence rate of resource-constrained FL.
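The decoupling idea, first maximizing the number of rounds achievable under the budgets for a fixed schedule, can be sketched with a toy model. The per-device delays, energies, and the budget-sharing rule below are invented for illustration, not the paper's formulation:

```python
def max_rounds(schedule, delays, energies, delay_budget, energy_budget):
    """Rounds supportable when the same device subset participates each round.

    A round's delay is set by the slowest scheduled device (synchronous FL),
    and scheduled devices draw their per-round energy from a shared budget
    (a simplification of per-device budgets).
    """
    round_delay = max(delays[i] for i in schedule)
    round_energy = sum(energies[i] for i in schedule)
    return int(min(delay_budget / round_delay, energy_budget / round_energy))

delays = [1.0, 2.0, 4.0]     # hypothetical per-round delays (s)
energies = [0.5, 0.8, 1.2]   # hypothetical per-round energies (J)
# Scheduling every device gives fewer rounds than dropping the straggler:
print(max_rounds([0, 1, 2], delays, energies, 100, 100))  # 25
print(max_rounds([0, 1], delays, energies, 100, 100))     # 50
```

The scheduling sub-problem then weighs this gain in rounds against the accuracy lost by training on fewer devices' data, which is the trade-off the abstract describes.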
- Published
- 2021
35. A UoI-Optimal Policy for Timely Status Updates with Resource Constraint
- Author
-
Lehan Wang, Jingzhou Sun, Yuxuan Sun, Zhisheng Niu, and Sheng Zhou
- Subjects
Mathematical optimization ,reinforcement learning ,Linear programming ,Computer science ,Process (engineering) ,Science ,Physics ,QC1-999 ,constrained Markov decision process ,context-awareness ,General Physics and Astronomy ,Context (language use) ,Astrophysics ,Article ,System model ,Scheduling (computing) ,QB460-466 ,age of information ,Reinforcement learning ,Context awareness ,Markov decision process ,timely status updates - Abstract
Timely status updates are critical in remote control systems such as autonomous driving and the industrial Internet of Things, where timeliness requirements are usually context dependent. Accordingly, the Urgency of Information (UoI) has been proposed beyond the well-known Age of Information (AoI) by further including context-aware weights which indicate whether the monitored process is in an emergency. However, the optimal updating and scheduling strategies in terms of UoI remain open. In this paper, we propose a UoI-optimal updating policy for timely status information under a resource constraint. We first formulate the problem as a constrained Markov decision process and prove that the UoI-optimal policy has a threshold structure. When the context-aware weights are known, we propose a numerical method based on linear programming. When the weights are unknown, we further design a reinforcement learning (RL)-based scheduling policy. Simulations reveal that the threshold of the UoI-optimal policy increases as the resource constraint tightens. In addition, the UoI-optimal policy outperforms the AoI-optimal policy in terms of average squared estimation error, and the proposed RL-based updating policy achieves near-optimal performance without advance knowledge of the system model.
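The threshold structure can be illustrated with a toy simulation: update whenever the context weight times the age crosses a threshold, so a tighter resource budget corresponds to a higher threshold. The weights and emergency probability below are made up; this is not the paper's constrained-MDP solution:

```python
import random

def run_threshold_policy(threshold, slots=10000, p_emergency=0.2, seed=1):
    """Update when (context weight x age) crosses the threshold.

    The weight is 5 in a randomly occurring emergency context and 1
    otherwise; returns (average weighted age, fraction of slots updating).
    """
    rng = random.Random(seed)
    age, uoi_sum, updates = 0, 0.0, 0
    for _ in range(slots):
        weight = 5 if rng.random() < p_emergency else 1
        age += 1
        if weight * age >= threshold:
            age = 0          # transmit a fresh status update
            updates += 1
        uoi_sum += weight * age
    return uoi_sum / slots, updates / slots

loose = run_threshold_policy(threshold=2)
tight = run_threshold_policy(threshold=20)
print(loose, tight)  # higher threshold: fewer updates, higher urgency
```

Raising the threshold spends fewer transmissions but tolerates more accumulated urgency, which is exactly the trade-off the resource constraint controls.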
- Published
- 2021
- Full Text
- View/download PDF
36. Freeform surface adaptive interferometry assisted with simulated annealing-hill climbing algorithm
- Author
-
Sheng Zhou, Renhu Liu, Benli Yu, Zhongtao Cheng, Lei Zhang, Jingsong Li, and Jinling Wu
- Subjects
Surface (mathematics) ,Computer science ,Applied Mathematics ,020208 electrical & electronic engineering ,010401 analytical chemistry ,02 engineering and technology ,Condensed Matter Physics ,01 natural sciences ,0104 chemical sciences ,Interferometry ,Surface metrology ,Adaptive system ,Convergence (routing) ,Simulated annealing ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Gradient descent ,Instrumentation ,Hill climbing ,Algorithm - Abstract
The freeform surface adaptive interferometer (FSAI) has recently been employed to measure unknown freeform surfaces. A near-null interferogram must be obtained from an initial interferogram whose fringes are indistinguishable or even contain dark areas. The direct optimization objective in FSAI is the interferogram itself rather than the focused-intensity metric used in traditional wavefront-sensorless (WFS) adaptive systems. The simulated annealing-hill climbing (SA-HC) hybrid algorithm is employed in the FSAI; it converges much better than the stochastic parallel gradient descent (SPGD) algorithm, almost as well as the genetic algorithm (GA). At the same time, it is much faster than the GA and is thus applicable to testing volume-produced optics in the optical shop. Simulations and experiments validating the feasibility of the algorithm are presented.
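A generic simulated annealing-hill climbing hybrid of the kind named above can be sketched on a toy one-dimensional multimodal objective. The objective and schedule parameters are invented for illustration; the paper optimizes interferogram quality, not this function:

```python
import math
import random

def sa_hill_climb(f, x0, seed=0):
    """Simulated annealing for global search, then hill climbing to polish."""
    rng = random.Random(seed)
    x, temp = x0, 2.0
    # Annealing stage: occasionally accept worse moves to escape local minima.
    for _ in range(3000):
        cand = x + rng.uniform(-1.0, 1.0)
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        temp = max(1e-3, temp * 0.995)
    # Hill-climbing stage: greedy refinement with a shrinking step.
    step = 0.1
    while step > 1e-7:
        moved = False
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x, moved = cand, True
        if not moved:
            step /= 2
    return x

# Multimodal toy objective whose global minimum lies near x = 2.2.
f = lambda x: (x - 2) ** 2 + 2 * math.sin(5 * x) + 2
x_star = sa_hill_climb(f, x0=-3.0)
print(round(x_star, 3))
```

The annealing stage supplies the global exploration that plain hill climbing lacks, while the hill-climbing stage supplies the fast local convergence that annealing alone lacks, which is the complementarity the abstract appeals to.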
- Published
- 2021
37. On the Capacity of Privacy-Preserving and Straggler-Robust Distributed Coded Computing
- Author
-
Qicheng Zeng and Sheng Zhou
- Subjects
Polynomial ,Information privacy ,Theoretical computer science ,Degree (graph theory) ,Distributed database ,Computer science ,business.industry ,Lagrange polynomial ,Cloud computing ,Function (mathematics) ,symbols.namesake ,Encoding (memory) ,symbols ,business - Abstract
Distributed computing can well exploit the computation resources in the edge and cloud for many large-scale machine learning applications, but it also raises concerns about data privacy and the straggling effect. A promising method to address these issues is coding. In our work, we design a general computation framework that incorporates multi-stage computing tasks with multiple inputs and can be expressed as a multi-variable arbitrary-degree polynomial function $f$, with $N$ distributed servers as workers, over a batch of data $D$ that consists of data from different sources. We propose a privacy-preserving and straggler-robust coding scheme based on Lagrange polynomials, which can tolerate up to $S$ straggling workers and up to $L$ colluding workers. We prove the optimality of the proposed scheme in terms of downlink communication efficiency, defined as the number of bits of desired results versus that of the downloaded results, and obtain an explicit expression for the capacity: $C = \frac{N-S-d(L-1)-1}{d(N-S)}$, which is the supremum of downlink communication efficiency over all feasible encoding schemes, where $d$ is the degree of the function $f$.
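The quoted capacity expression can be evaluated directly; the parameter values below are arbitrary examples, not figures from the paper:

```python
from fractions import Fraction

def lcc_capacity(N, S, L, d):
    """Downlink capacity C = (N - S - d(L - 1) - 1) / (d (N - S))."""
    return Fraction(N - S - d * (L - 1) - 1, d * (N - S))

# Example: 20 workers, tolerating 2 stragglers and 2 colluders, degree-2 f.
print(lcc_capacity(N=20, S=2, L=2, d=2))  # 5/12
# Tolerating more colluding workers lowers the achievable capacity:
print(lcc_capacity(N=20, S=2, L=3, d=2))  # 13/36
```

As the formula shows, stragglers (S), colluders (L), and higher polynomial degree (d) all shrink the fraction of downloaded bits that carry desired results.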
- Published
- 2021
38. The Impact of Interference Reflection on Reconfigurable Intelligent Surface-Aided Directional Transmissions
- Author
-
Sheng Zhou, Zhisheng Niu, and Yining Xu
- Subjects
Beamwidth ,Base station ,Single antenna interference cancellation ,Computer science ,Electronic engineering ,Network performance ,Spectral efficiency ,Energy consumption ,Interference (wave propagation) ,Efficient energy use - Abstract
As a promising solution to the blockage sensitivity of the millimeter wave band and to the energy consumption caused by network densification, the reconfigurable intelligent surface (RIS) shows good potential for improving network performance. However, deploying large-scale RISs leads to non-negligible reflection of interference, which may reduce the performance gain brought by RISs. In this paper, we investigate the coverage, spectral efficiency, and energy efficiency of a downlink directional transmission network aided by RISs, using stochastic geometry. The numerical results show that deploying a suitable density of RISs enhances network performance, while overly dense deployment degrades it. We also find that the optimal RIS deployment fraction is a monotonically non-decreasing function of the blockage density and a monotonically non-increasing function of the base station density and the beamwidth, which indicates that the deployment of RISs should be properly designed according to the network status instead of following 'the more, the better'.
- Published
- 2021
39. Closed-Form Analysis of Non-Linear Age of Information in Status Updates With an Energy Harvesting Transmitter
- Author
-
Sheng Zhou, Zhiyuan Jiang, Zhisheng Niu, and Xi Zheng
- Subjects
Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Mathematical optimization ,Exponential distribution ,Computer science ,Network packet ,Computer Science - Information Theory ,Information Theory (cs.IT) ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Poisson distribution ,Computer Science Applications ,Computer Science - Networking and Internet Architecture ,symbols.namesake ,Transmission (telecommunications) ,0202 electrical engineering, electronic engineering, information engineering ,Matrix geometric method ,symbols ,Penalty method ,Electrical and Electronic Engineering ,Queue ,Energy (signal processing) - Abstract
Timely status updates are crucial to enabling applications in the massive Internet of Things (IoT). This paper measures the data-freshness performance of a status update system with an energy harvesting transmitter, considering the randomness in information generation, transmission, and energy harvesting. The performance is evaluated by a non-linear function of the age of information (AoI), which is defined as the time elapsed since the generation of the most up-to-date status information at the receiver. The system is formulated as two queues, with status packet generation and energy arrivals both assumed to be Poisson processes. With negligible service time, both First-Come-First-Served (FCFS) and Last-Come-First-Served (LCFS) disciplines are considered for arbitrary buffer and battery capacities, and a method for calculating the average penalty under non-linear penalty functions is proposed. The average AoI, the average penalty under an exponential penalty function, and the AoI threshold violation probability are obtained in closed form. When the service time is assumed to follow an exponential distribution, the matrix geometric method is used to obtain the average peak AoI. The results illustrate that under the FCFS discipline, the status update frequency needs to be carefully chosen according to the service rate and energy arrival rate in order to minimize the average penalty., Comment: Accepted by IEEE Transactions on Wireless Communications
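As a hedged illustration (not the paper's closed-form analysis, which additionally models energy arrivals), the sawtooth structure of AoI can be checked by Monte Carlo: when updates are generated as a Poisson process of rate $\lambda$ and delivered instantly, the renewal-reward argument gives an average AoI of $E[I^2]/(2E[I]) = 1/\lambda$ for exponential inter-update times $I$. The rate and sample count below are arbitrary choices:

```python
import random

def average_aoi_zero_service(rate: float, n_updates: int = 200_000,
                             seed: int = 0) -> float:
    """Time-average AoI of a sawtooth age process: updates arrive as a
    Poisson process of the given rate and are delivered with zero
    service time, so the age resets to 0 at each update instant."""
    rng = random.Random(seed)
    area = 0.0   # area under the age sawtooth
    total = 0.0  # total elapsed time
    for _ in range(n_updates):
        gap = rng.expovariate(rate)  # inter-update interval
        area += gap * gap / 2        # triangle under one sawtooth tooth
        total += gap
    return area / total

# Renewal-reward: average AoI = E[I^2] / (2 E[I]) = 1/rate
print(average_aoi_zero_service(2.0))  # ≈ 0.5
```

With nonzero service times or energy constraints the age no longer resets to zero, which is exactly the regime the paper handles analytically.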
- Published
- 2019
40. Intermittent CSI Update for Massive MIMO Systems With Heterogeneous User Mobility
- Author
-
Zhiyuan Jiang, Sheng Zhou, Zhisheng Niu, and Ruichen Deng
- Subjects
Mathematical optimization ,Computer science ,MIMO ,020206 networking & telecommunications ,020302 automobile design & engineering ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Multi-user MIMO ,0203 mechanical engineering ,Channel state information ,Convex optimization ,0202 electrical engineering, electronic engineering, information engineering ,Resource allocation ,Markov decision process ,Electrical and Electronic Engineering ,Computer Science::Information Theory ,Communication channel - Abstract
The high density and heterogeneous mobility of users in many applications pose challenges for the channel acquisition in massive multiple-input-multiple-output (MIMO) systems. For such scenarios, we propose an intermittent channel estimation (ICE) scheme to save pilot resources, which utilizes the aged channel state information (CSI) based on the temporal correlations of user channels. The optimal CSI update pattern to maximize the achievable sum rate is obtained by solving a formulated multichain Markov decision process (MDP), which is denoted by ICE-MDP. Furthermore, to reduce the computational complexity of the MDP, we relax the constraint of the CSI update pattern design problem and convert it into a convex optimization problem, whose solution is denoted by ICE-CVX. The simulations validate the close-to-optimal performance and the computational efficiency of ICE-CVX and show that the ICE scheme can significantly outperform a conventional scheme which persistently updates the CSI of all users.
- Published
- 2019
41. Learning-Based Remote Channel Inference: Feasibility Analysis and Case Study
- Author
-
Zhisheng Niu, Sheng Chen, Sheng Zhou, Luiz A. DaSilva, Ziyan He, Andrei Marinescu, and Zhiyuan Jiang
- Subjects
business.industry ,Computer science ,Applied Mathematics ,MIMO ,Inference ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Mutual information ,Upper and lower bounds ,Computer Science Applications ,Base station ,Computer engineering ,Channel state information ,0202 electrical engineering, electronic engineering, information engineering ,Wireless ,Electrical and Electronic Engineering ,business ,Heterogeneous network ,Computer Science::Information Theory ,Communication channel - Abstract
Channel state information (CSI) plays a vital role in wireless communication systems. However, the CSI acquisition overhead is an enormous obstacle to realizing the system performance improvements promised by massive connectivity and massive multiple-input-multiple-output (MIMO). To alleviate this overhead, this paper proposes a remote channel inference framework that probes the channels occupied by a source base station (BS) and infers the channels of target BSs at geographically separated sites. This work generalizes the existing literature, which mainly focuses on utilizing the linear CSI correlations of adjacent antennas, by adopting a model-free deep learning framework to investigate non-linear dependence among remote CSI. The existence of such cross-BS CSI dependence is first shown by calculating the mutual information between remote channels and the Cramér-Rao lower bound of remote CSI inference performance based on a one-ring channel model. Inspired by this finding, modern deep learning approaches are leveraged to perform remote channel inference in heterogeneous networks for both single-user and multi-user scenarios. The simulation results based on ray-tracing data show evident performance advantages over conventional methods, under both homogeneous and heterogeneous frequency coverage. The proposed framework achieves beamformer inference accuracy within 4.6% of the genie-aided optimum at the cost of sweeping only two beams.
- Published
- 2019
42. An improved interwell connectivity model to obtain interwell connectivity information by using complex well data
- Author
-
Shun Liu, Lin Cao, De-sheng Zhou, Heng He, and Tao Yang
- Subjects
Well test (oil and gas) ,Computer simulation ,Computer science ,Modeling and Simulation ,Interference (wave propagation) ,Computer Graphics and Computer-Aided Design ,Algorithm ,History matching ,Software ,Field (geography) - Abstract
Information on interwell connectivity is important in reservoir field dynamic analysis. However, all conventional methods, such as the tracer test, interference well test, and numerical simulation, have disadvantages, including the length of time required, high costs, and the effect on oilfield production. Thus, research has focused on approaches that use production and injection data to obtain interwell connectivity information. Prevailing interwell connectivity models are sensitive to shut-ins, and their corresponding inversion methods are unreliable. The improved interwell connectivity model presented in this study exhibits enhanced robustness to shut-ins. The application of seepage theories and numerical simulation methods enables the main model parameters to absorb prior geological knowledge to characterize the reservoir and improve the initial estimate of connectivity. Corresponding inverse methods for the model parameters are implemented based on Bayesian inverse theory and the projection gradient method, and achieve greater robustness for the model parameters compared with previous studies. Testing on a heterogeneous synthetic reservoir and the Z16 reservoir block demonstrates that the methodology can precisely determine interwell connectivity and can be used in real oilfields.
- Published
- 2019
43. Exploiting Moving Intelligence: Delay-Optimized Computation Offloading in Vehicular Fog Networks
- Author
-
Yuxuan Sun, Zhiyuan Jiang, Sheng Zhou, and Zhisheng Niu
- Subjects
Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,020206 networking & telecommunications ,02 engineering and technology ,Replication (computing) ,Computer Science Applications ,Task (project management) ,Shared resource ,Computer Science - Networking and Internet Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Task analysis ,Wireless ,Computation offloading ,Resource management ,Electrical and Electronic Engineering ,business ,Resource management (computing) - Abstract
Future vehicles will have rich computing resources to support autonomous driving and will be connected by wireless technologies. Vehicular fog networks (VeFN) have thus emerged to enable computing resource sharing via computation task offloading, providing a wide range of fog applications. However, the high mobility of vehicles makes it hard to guarantee the delay that accounts for both communication and computation throughout the whole task offloading procedure. In this article, we first review the state of the art of task offloading in VeFN, and argue that mobility is not only an obstacle to timely computing in VeFN, but can also benefit the delay performance. We then identify machine learning and coded computing as key enabling technologies to address and exploit mobility in VeFN. Case studies are provided to illustrate how to adapt learning algorithms to fit the dynamic environment in VeFN, and how to exploit mobility with opportunistic computation offloading and task replication., Comment: 7 pages, 4 figures, accepted by IEEE Communications Magazine
- Published
- 2019
44. Bidirectional Mission Offloading for Agile Space-Air-Ground Integrated Networks
- Author
-
Guangchao Wang, Sheng Zhou, Shan Zhang, Zhisheng Niu, and Xuemin Sherman Shen
- Subjects
Service (systems architecture) ,business.industry ,Computer science ,Wireless network ,Distributed computing ,Reliability (computer networking) ,media_common.quotation_subject ,020206 networking & telecommunications ,02 engineering and technology ,Space exploration ,Computer Science Applications ,Chaining ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Electrical and Electronic Engineering ,business ,Function (engineering) ,Agile software development ,media_common - Abstract
The space-air-ground integrated network (SAGIN) provides great strength in extending the capability of ground wireless networks. On the other hand, with rich spectrum and computing resources, ground networks can also assist space-air networks in accomplishing resource-intensive or power-hungry missions, enhancing the capability and sustainability of the space-air networks. Therefore, bidirectional mission offloading can make full use of the advantages of SAGIN and benefit both space-air and ground networks. In this article, we identify the key role of network reconfiguration in coordinating heterogeneous resources in SAGIN, and study how network functions virtualization (NFV) and service function chaining (SFC) enable agile mission offloading. A case study validates the performance gain brought by bidirectional mission offloading. Future research issues are outlined, as the bidirectional mission offloading framework opens a new trail toward releasing the full potential of SAGIN.
- Published
- 2019
45. Timely Status Update in Wireless Uplinks: Analytical Solutions With Asymptotic Optimality
- Author
-
Sheng Zhou, Xi Zheng, Zhisheng Niu, Zhiyuan Jiang, and Bhaskar Krishnamachari
- Subjects
Stationary distribution ,Computer Networks and Communications ,business.industry ,Computer science ,Network packet ,020302 automobile design & engineering ,020206 networking & telecommunications ,Lyapunov optimization ,02 engineering and technology ,Computer Science Applications ,Scheduling (computing) ,0203 mechanical engineering ,Hardware and Architecture ,Signal Processing ,Telecommunications link ,0202 electrical engineering, electronic engineering, information engineering ,Wireless ,business ,Information Systems ,Computer network - Abstract
In a typical Internet of Things (IoT) application where a central controller collects status updates from multiple terminals, e.g., sensors and monitors, through a wireless multiaccess uplink, an important problem is how to attain timely status updates autonomously. In this paper, the timeliness of the status is measured by the recently proposed age-of-information (AoI) metric, and both the theoretical and practical aspects of the problem are investigated: we aim to obtain a scheduling policy that minimizes the AoI while requiring little signaling exchange overhead. Toward this end, we first consider the set of arrival-independent and renewal policies; the optimal policy among them for minimizing the time-average AoI is proved to be a round-robin policy with one-packet buffers (latest packet only, older packets dropped), denoted RR-ONE. The optimality is established based on a generalized Poisson-arrivals-see-time-averages theorem. It is further proved that RR-ONE is asymptotically optimal among all policies in the massive IoT regime. The steady-state distribution of the AoI under RR-ONE is also derived. An implementation scheme of RR-ONE is proposed which can accommodate dynamic terminal appearances with little overhead. In addition, for scenarios where packets cannot be dropped, a Lyapunov-optimization-based max-AoI-weight policy is proposed, which achieves better performance than the state of the art.
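A minimal toy sketch (an assumption-laden illustration, not the authors' implementation) of the RR-ONE idea: terminals are served in fixed round-robin order, and each terminal keeps a one-packet buffer so that a newly generated status packet overwrites any older, undelivered one:

```python
class RROne:
    """Toy round-robin scheduler with one-packet buffers (RR-ONE idea):
    each terminal stores only its latest status packet; the scheduler
    cycles through terminals and serves the next non-empty buffer."""

    def __init__(self, n_terminals: int):
        self.latest = [None] * n_terminals  # one-packet buffer per terminal
        self.next_terminal = 0

    def packet_arrives(self, terminal: int, packet):
        # Overwrite: only the latest packet is kept, older ones are dropped.
        self.latest[terminal] = packet

    def schedule(self):
        """Pick the terminal for the next slot in round-robin order,
        skipping empty buffers; return (terminal, packet) or None."""
        n = len(self.latest)
        for _ in range(n):
            t = self.next_terminal
            self.next_terminal = (t + 1) % n
            if self.latest[t] is not None:
                packet, self.latest[t] = self.latest[t], None
                return t, packet
        return None  # no terminal has a fresh packet
```

For example, if terminal 0 generates two packets before being served, only the second is transmitted, which is exactly the "latest packet only" discipline that keeps the age low.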
- Published
- 2019
46. Security Analysis of Mobile Device-to-Device Network Applications
- Author
-
Yu Cheng, Sheng Zhou, Zhisheng Niu, Qing Li, Lin Cai, Wenlong Shen, and Kecheng Liu
- Subjects
Security analysis ,business.product_category ,Exploit ,Computer Networks and Communications ,Computer science ,02 engineering and technology ,Computer security ,computer.software_genre ,law.invention ,Bluetooth ,0203 mechanical engineering ,law ,0202 electrical engineering, electronic engineering, information engineering ,Internet access ,Android (operating system) ,business.industry ,020302 automobile design & engineering ,020206 networking & telecommunications ,Cryptographic protocol ,Computer Science Applications ,Hardware and Architecture ,Signal Processing ,Internet of Things ,business ,Mobile device ,computer ,Information Systems ,Data transmission - Abstract
The mobile device-to-device (D2D) network has now become a standardized feature of many mobile devices, by which mobile devices can communicate with each other even when commercial Internet access is not available. Because the D2D network is expected to be an intrinsic part of the Internet of Things (IoT) and the mobile device is the smartest and most advanced commercial device in everyday use, the D2D feature and the security protocols it adopts influence the design and implementation of many other IoT devices. While the D2D network provides tangible benefits to users, it also raises the security risk of information leakage. This paper presents an in-depth empirical security analysis of mobile D2D networking among Android devices. Android apps can establish a mobile D2D network in various ways, including Wi-Fi hotspot, Wi-Fi Direct, and Bluetooth. These mobile D2D protocols normally adopt different protection mechanisms, which makes the security investigation considerably challenging. In this paper, we focus on the most popular apps in the Google Play Store, with aggregated downloads of more than 500 million. Our analysis reveals some critical vulnerabilities. The key findings are twofold. First, the current mobile D2D network framework enabled by Android has a significant overprivilege flaw. Second, we have identified that most data transferred over the mobile D2D network is unencrypted. Furthermore, we exploit the identified Android framework flaws to construct three proof-of-concept attacks, and we conclude the paper with security lessons and suggestions for possible solutions to the identified security issues.
- Published
- 2019
47. Power Allocation for Multi‐node Energy Harvesting Channels
- Author
-
Zhisheng Niu, Rui Zhang, Chuang Huang, Shuguang Cui, Jie Xu, and Sheng Zhou
- Subjects
business.industry ,Computer science ,Node (networking) ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Data_CODINGANDINFORMATIONTHEORY ,law.invention ,Power (physics) ,Transmission (telecommunications) ,Relay ,law ,business ,Energy harvesting ,Energy (signal processing) ,Relay channel ,Computer Science::Information Theory ,Computer network ,Communication channel - Abstract
This chapter considers the design of wireless communication systems with multiple transmission nodes powered by energy harvesters. Unlike the point-to-point channel case, in multi-node scenarios it is no longer optimal to separately optimize the transmission of each energy harvesting (EH) communication link using only local energy state information (ESI). The chapter investigates multiple-access channels with conferencing links and shared energy harvesters and discusses transmission cooperation among the transmitters. It considers a three-node relay channel with EH sources, where the proposed cooperative transmission scheme is shown to exploit a new form of diversity, termed "energy diversity," arising from the independent source and relay energy availability over time, even over time-invariant channels. The chapter then turns to a large relay network in which multiple relays are powered by energy harvesters, and shows that the power allocation for each EH relay should be determined by the overall energy distribution among these relays.
- Published
- 2018
48. Energy Harvesting Ad Hoc Networks
- Author
-
Jie Xu, Shuguang Cui, Sheng Zhou, Zhisheng Niu, Rui Zhang, and Chuang Huang
- Subjects
Computer science ,Wireless ad hoc network ,business.industry ,Node (networking) ,020206 networking & telecommunications ,02 engineering and technology ,Scheduling (computing) ,Channel state information ,Computer Science::Networking and Internet Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Wireless ,Optimal stopping ,business ,Throughput (business) ,Energy harvesting ,Computer Science::Information Theory ,Computer network - Abstract
This chapter focuses on ad hoc networks, where multiple energy harvesting (EH) nodes are deployed in a given area and try to utilize the shared wireless channels via opportunistic access. It proposes a distributed opportunistic scheduling (DOS) framework with two-stage probing and save-then-transmit energy utilization schemes to fully utilize both the energy state information (ESI) and channel state information (CSI) at each node, and employs an optimal stopping framework to solve the expected throughput maximization problem for the considered network. The chapter studies the multiuser gain with an emphasis on energy diversity, where the scaling law of the expected throughput over the number of users is investigated. The access schemes discussed in this chapter capture the "energy-accumulating" feature of EH communications, and the corresponding throughput scaling sets guidelines for the future transmission design of EH wireless systems.
- Published
- 2018
49. DeepNap: Data-Driven Base Station Sleeping Operations Through Deep Reinforcement Learning
- Author
-
Sheng Zhou, Jingchu Liu, Bhaskar Krishnamachari, and Zhisheng Niu
- Subjects
Computer Networks and Communications ,business.industry ,Generalization ,Computer science ,Stability (learning theory) ,020302 automobile design & engineering ,020206 networking & telecommunications ,02 engineering and technology ,Energy consumption ,Computer Science Applications ,Data-driven ,Base station ,0203 mechanical engineering ,Hardware and Architecture ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Reinforcement learning ,Artificial intelligence ,business ,Hidden Markov model ,Feature learning ,Information Systems - Abstract
Base station (BS) sleeping is an effective way to reduce the energy consumption of mobile networks. Previous efforts to design sleeping control algorithms mainly rely on stochastic traffic models and analytical derivation. However, the tractability of models often conflicts with the complexity of real-world traffic, making them difficult to apply in practice. In this paper, we propose a data-driven algorithm for dynamic sleeping control called DeepNap. The algorithm uses a deep Q-network (DQN) to learn effective sleeping policies from high-dimensional raw observations or un-quantized system state vectors. We propose to enhance the original DQN algorithm with action-wise experience replay and adaptive reward scaling to deal with the challenges of nonstationary traffic. We also provide a model-assisted variant of DeepNap through the Dyna framework for inferring and simulating system dynamics. Periodic traffic modeling makes it possible to capture the nonstationarity in real-world traffic, and the incorporation with DQN allows for feature learning and generalization from model outputs. Experiments show that both the end-to-end and the model-assisted versions of DeepNap outperform the table-based $Q$-learning algorithm, and that the nonstationarity enhancements improve the stability of vanilla DQN.
- Published
- 2018
50. Locally Decodable and Updatable Non-malleable Codes and Their Applications
- Author
-
Dana Dachman-Soled, Elaine Shi, Hong-Sheng Zhou, and Feng-Hao Liu
- Subjects
Theoretical computer science ,Computer science ,business.industry ,Applied Mathematics ,Code word ,Value (computer science) ,Cryptography ,Data_CODINGANDINFORMATIONTHEORY ,0102 computer and information sciences ,Leakage resilience ,01 natural sciences ,Computer Science Applications ,Set (abstract data type) ,03 medical and health sciences ,0302 clinical medicine ,010201 computation theory & mathematics ,030220 oncology & carcinogenesis ,Relaxation (approximation) ,business ,Private information retrieval ,Security parameter ,Software - Abstract
Non-malleable codes, introduced as a relaxation of error-correcting codes by Dziembowski, Pietrzak and Wichs (ICS ’10), provide the security guarantee that the message contained in a tampered codeword is either the same as the original message or is set to an unrelated value. Various applications of non-malleable codes have been discovered, and one of the most significant applications among these is the connection with tamper-resilient cryptography. There is a large body of work considering security against various classes of tampering functions, as well as non-malleable codes with enhanced features such as leakage resilience.
- Published
- 2018