30,288 results for "Enhanced Data Rates for GSM Evolution"
Search Results
2. An adaptive radio link protocol with enhanced data rates for GSM evolution
- Author
-
N. Seshadri, S. Timiri, R. van Nobelen, and J. Whitehead
- Subjects
Network packet ,business.industry ,Computer science ,Radio Link Protocol ,Real-time computing ,Link adaptation ,law.invention ,Redundancy (information theory) ,law ,GSM ,Media Technology ,Enhanced Data Rates for GSM Evolution ,Error detection and correction ,business ,Decoding methods ,Computer network - Abstract
In this article we address the problem of link adaptation in a wireless data system. Link adaptation is necessary in order to match the data rate to time-varying channel and interference conditions. We present a robust radio link protocol (RLP) based on the concept of incremental redundancy (IR). Here, redundant data, for the purpose of error correction, is transmitted only when previously transmitted packets of information are received and acknowledged to be in error. The redundant packet is combined with the previously received (errored) information packets in order to facilitate error correction decoding. If there is a decoding failure, more redundancy is transmitted. It is shown here that an RLP built using the IR concept is more robust and has better throughput than link adaptation schemes using explicit channel measurements such as instantaneous or average signal-to-noise or signal-to-interference ratio. We study the performance of an implementation of an IR-based RLP for EDGE (Enhanced Data Rates for GSM Evolution) data and demonstrate its superior throughput and robustness properties. The penalty paid for increased robustness and higher throughput is additional receiver memory and higher delay. IR-based RLP has already been standardized for IS-136+ packet data and is being actively considered for EDGE standardization.
- Published
- 1999
3. Complementary metal oxide semiconductor class-AB amplifier for global system for mobile communications-enhanced data rates for GSM evolution Tx.
- Author
-
Aniktar, H.
- Subjects
- *
COMPLEMENTARY metal oxide semiconductors , *ELECTRONIC amplifiers , *GSM communications , *BIT rate , *RADIO frequency , *POWER amplifiers , *DC-to-DC converters , *POWER resources , *SIMULATION methods & models - Abstract
In this work a two-stage class-AB complementary metal oxide semiconductor radio frequency (CMOS RF) power amplifier is designed and tested according to global system for mobile communications-enhanced data rates for GSM evolution (GSM-EDGE) requirements. The amplifier efficiency is improved by using a DC-DC converter that lets the power supply voltage track the envelope of the input signal. The amplifier stability is improved by implementing an on-chip ground separation technique. The ground separation technique is based on separating the grounds of the amplifier stages on the chip and thus any parasitic feedback paths are removed. Simulation and experimental results show that the technique makes the amplifier less sensitive to bondwire inductance, and consequently improves the stability and performance.
- Published
- 2011
- Full Text
- View/download PDF
4. Edge designs: past, present and future -- Enhanced Data Rates for GSM Evolution technologies keep up with demand
- Author
-
Zvonar, Zoran
- Subjects
GSM ,Wireless technology ,Technology application ,GSM (Global System for Mobile Communications) -- Technology application ,GSM (Global System for Mobile Communications) -- Usage ,Mobile communication systems -- Methods ,Mobile communication systems -- Technology application ,Wireless communication systems -- Methods ,Wireless communication systems -- Technology application ,Data communications -- Analysis - Published
- 2006
5. Complementary Metal Oxide Semiconductor Class-AB Amplifier for Global System for Mobile Communications-Enhanced Data Rates for GSM Evolution Tx
- Author
-
H. Aniktar
- Subjects
Power-added efficiency ,Cascade amplifier ,Engineering ,business.industry ,Amplifier ,RF power amplifier ,Electrical engineering ,law.invention ,law ,Hardware_GENERAL ,Operational transconductance amplifier ,Electronic engineering ,Operational amplifier ,Hardware_INTEGRATEDCIRCUITS ,Linear amplifier ,Electrical and Electronic Engineering ,business ,Direct-coupled amplifier - Abstract
In this work a two-stage class-AB complementary metal oxide semiconductor radio frequency (CMOS RF) power amplifier is designed and tested according to global system for mobile communications-enhanced data rates for GSM evolution (GSM-EDGE) requirements. The amplifier efficiency is improved by using a DC–DC converter that lets the power supply voltage track the envelope of the input signal. The amplifier stability is improved by implementing an on-chip ground separation technique. The ground separation technique is based on separating the grounds of the amplifier stages on the chip and thus any parasitic feedback paths are removed. Simulation and experimental results show that the technique makes the amplifier less sensitive to bondwire inductance, and consequently improves the stability and performance.
- Published
- 2011
6. An In-Memory-Computing Charge-Domain Ternary CNN Classifier
- Author
-
Mingtao Zhan, Yongpan Liu, Xiyuan Tang, David Z. Pan, Keren Zhu, Jaydeep P. Kulkarni, Nan Sun, Meizhi Wang, Xiangxing Yang, and Nanshu Lu
- Subjects
Reduction (complexity) ,Artificial neural network ,Edge device ,Computer science ,In-Memory Processing ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Convolutional neural network ,Algorithm ,MNIST database ,Efficient energy use - Abstract
AI edge devices require local intelligence due to latency and privacy concerns. Given the accuracy and energy constraints, low-power convolutional neural networks (CNNs) are gaining popularity. To alleviate the high memory-access energy and computational cost of large CNN models, prior works have proposed promising approaches including in-memory computing (IMC) [1], mixed-signal multiply-and-accumulate (MAC) calculation [2], and reduced-resolution networks [3]–[4]. With weights and activations restricted to ±1, a binary neural network (BNN) combined with IMC greatly improves storage and computation efficiency, making it well suited for edge-based applications, and has demonstrated state-of-the-art energy efficiency in image classification problems [5]. However, compared to a full-resolution network, a BNN requires a larger model and thus more operations (OPs) per inference for a given accuracy. To address this challenge, we propose a mixed-signal ternary-CNN-based processor featuring higher energy efficiency than BNN. It confers several key improvements: 1) the proposed ternary network provides 1.5-b resolution (0/+1/-1), leading to a 3.9x OPs/inference reduction over BNN for the same MNIST accuracy; 2) a 1.5-b MAC is implemented by a V_CM-based capacitor switching scheme, which inherently benefits from the reduced signal swing on the capacitive DAC (CDAC); 3) the V_CM-based MAC introduces sparsity during training, resulting in a lower switching rate. With a complete neural network on chip, the proposed design achieves 97.1% MNIST accuracy at only 0.18 µJ per classification, presenting the highest power efficiency for comparable MNIST accuracy.
- Published
- 2023
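A minimal numerical sketch of the ternary (1.5-b) MAC idea from the abstract above: with weights restricted to {-1, 0, +1}, every MAC term reduces to an add, a subtract, or a skip, and the zeros are exactly the sparsity the design exploits. The quantization threshold and sample values are illustrative assumptions, not the chip's training procedure or switching scheme.

```python
import numpy as np

def ternarize(w: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Quantize real-valued weights to {-1, 0, +1} (threshold is illustrative)."""
    return np.where(w > threshold, 1, np.where(w < -threshold, -1, 0))

def ternary_mac(activations: np.ndarray, weights: np.ndarray) -> int:
    """Multiply-accumulate with ternary weights: signed adds, zeros are skipped work."""
    return int(np.sum(activations * weights))

w = ternarize(np.array([0.9, -0.7, 0.1, 0.6, -0.2]))   # -> [ 1 -1  0  1  0]
a = np.array([3, 5, 7, 2, 4])
result = ternary_mac(a, w)                             # 3 - 5 + 2 = 0
```

Two of five weights quantize to zero here, which is the "lower switching rate" benefit in software terms: those terms contribute no work at all.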
7. Workload Re-Allocation for Edge Computing With Server Collaboration: A Cooperative Queueing Game Approach
- Author
-
Tong Zhang, Bing Chen, Qiang Wu, Jun Cai, Changyan Yi, and Kun Zhu
- Subjects
Queueing theory ,Computer Networks and Communications ,Computer science ,business.industry ,020206 networking & telecommunications ,Workload ,02 engineering and technology ,Energy consumption ,Core (game theory) ,Server ,Convex optimization ,0202 electrical engineering, electronic engineering, information engineering ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software ,Edge computing ,Computer network - Abstract
In this paper, a long-term workload management problem for multi-server edge computing with server collaboration is studied. In the considered model, mobile users' computation-intensive tasks are generated dynamically over time and offloaded to associated edge servers according to pre-determined subscription agreements. Upon receiving the subscribed workload, each edge server can then decide whether to participate in server collaboration for enabling workload re-allocation (i.e., workload exchange) with other heterogeneously configured edge servers. Unlike most of the existing work, this paper takes into account both competition and collaboration among strategic edge servers in sharing their computing capacities. To achieve the equilibrium for each edge server in minimizing its expected cost (including energy consumption, delay, transmission, configuration and pricing costs), a joint optimization is formulated for determining i) the amount of workload it undertakes, ii) the compensation price charged from peers, and iii) the computing speed to adopt. To efficiently solve this problem, we propose a novel cooperative queueing game approach, which integrates a convex optimization, a core cost-sharing scheme and a mapping rule. Theoretical analyses and extensive simulations are conducted to evaluate the performance of the proposed solution and demonstrate its superiority over counterparts.
- Published
- 2023
8. Dynamic Reservation of Edge Servers via Deep Reinforcement Learning for Connected Vehicles
- Author
-
Xudong Wang, Jiawei Zhang, Yifei Zhu, and Suhong Chen
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Feature extraction ,Reservation ,Server ,Logic gate ,Task analysis ,Reinforcement learning ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software ,Edge computing ,Computer network - Published
- 2023
9. A Distributed Deep Reinforcement Learning Technique for Application Placement in Edge and Fog Computing Environments
- Author
-
Mohammad Goudarzi, Marimuthu Palaniswami, and Rajkumar Buyya
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer Networks and Communications ,Computer science ,Distributed computing ,Testbed ,Network topology ,Machine Learning (cs.LG) ,Computer Science - Distributed, Parallel, and Cluster Computing ,Server ,Convergence (routing) ,Trajectory ,Reinforcement learning ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Software ,Edge computing - Abstract
Fog/edge computing is a novel computing paradigm supporting resource-constrained Internet of Things (IoT) devices by placing their tasks on edge and/or cloud servers. Recently, several Deep Reinforcement Learning (DRL)-based placement techniques have been proposed in fog/edge computing environments, which are only suitable for centralized setups. The training of well-performing DRL agents requires manifold training data, while obtaining training data is costly. Hence, these centralized DRL-based techniques lack generalizability and quick adaptability, thus failing to efficiently tackle application placement problems. Moreover, many IoT applications are modeled as Directed Acyclic Graphs (DAGs) with diverse topologies. Satisfying the dependencies of DAG-based IoT applications incurs additional constraints and increases the complexity of placement problems. To overcome these challenges, we propose an actor-critic-based distributed application placement technique based on the IMPortance weighted Actor-Learner Architectures (IMPALA). IMPALA is known for efficient distributed experience trajectory generation that significantly reduces the exploration costs of agents. Besides, it uses an adaptive off-policy correction method for faster convergence to optimal solutions. Our technique uses recurrent layers to capture temporal behaviors of input data and a replay buffer to improve sample efficiency. The performance results, obtained from simulation and testbed experiments, demonstrate that our technique improves the execution cost of IoT applications by up to 30% compared to its counterparts. (Comment: this paper was accepted in IEEE Transactions on Mobile Computing (TMC) on 23 October 2021.)
- Published
- 2023
10. Publishing Graphs Under Node Differential Privacy
- Author
-
Lei Chen, Xun Jian, and Yue Wang
- Subjects
Theoretical computer science ,business.industry ,Computer science ,Node (networking) ,Graph query ,Privacy protection ,Graph ,Computer Science Applications ,Computational Theory and Mathematics ,Publishing ,Differential privacy ,Enhanced Data Rates for GSM Evolution ,business ,Information Systems ,De facto standard - Abstract
Differential privacy (DP) has become the de facto standard of privacy protection. For graphs, there are two widely used definitions of differential privacy, namely, edge differential privacy (edge-DP) and node differential privacy (node-DP), and node-DP is preferred when the minimal unit of interest is a node. To preserve node-DP, one can develop different methods to answer each specific graph query, or develop a graph publishing method to answer all graph queries. However, no existing work has addressed such graph publishing methods. In this work, we propose two methods for publishing graphs under node-DP. One is a node-level perturbation algorithm, which modifies the input graph by randomly inserting and removing nodes. The other is an edge-level perturbation algorithm, which randomly removes edges and inserts nodes. Both methods can achieve a flexible privacy guarantee by adjusting the running parameters. We conduct extensive experiments on both real-world and synthetic graphs to show the effectiveness and efficiency of the proposed algorithms.
- Published
- 2023
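A minimal sketch in the spirit of the node-level perturbation the abstract above describes: each node is dropped with some probability and isolated dummy nodes are inserted. The probabilities here are illustrative assumptions; calibrating them to a formal node-DP guarantee is the substance of the paper and is not shown.

```python
import random

def perturb_nodes(adj: dict, p_remove: float, k_insert: int, rng: random.Random) -> dict:
    """Randomly remove nodes (with all incident edges) and insert dummy nodes.
    `adj` maps each node to a set of neighbors; the result keeps that shape."""
    keep = {v for v in adj if rng.random() >= p_remove}
    # Drop removed nodes and any edges pointing at them.
    out = {v: {u for u in nbrs if u in keep} for v, nbrs in adj.items() if v in keep}
    for i in range(k_insert):                    # inserted dummies carry no edges
        out[f"dummy{i}"] = set()
    return out

g = {1: {2, 3}, 2: {1}, 3: {1}, 4: set()}
noisy = perturb_nodes(g, p_remove=0.25, k_insert=2, rng=random.Random(0))
```

The key structural property, visible even in this toy, is that removing a node also removes every edge touching it, which is why node-DP is so much harder to satisfy than edge-DP: one node can change a large part of the graph.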
11. Dynamic Task Scheduling in Cloud-Assisted Mobile Edge Computing
- Author
-
Shangguang Wang, Ao Zhou, Qing Li, Shan Zhang, Xiao Ma, and Alex X. Liu
- Subjects
Mobile edge computing ,Job shop scheduling ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Lyapunov optimization ,Cloud computing ,Airfield traffic pattern ,Scheduling (computing) ,Task (project management) ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software - Abstract
The cloud-assisted mobile edge computing system is a critical architecture to process computation-intensive and delay-sensitive mobile applications in close proximity to mobile users with high resource efficiency. Due to the heterogeneous dynamics of task arrivals at edge nodes and the distributed nature of the system, the workloads of edge nodes are prone to be unbalanced, which can cause high task response time and resource cost. This paper solves the dynamic task scheduling problem in cloud-assisted mobile edge computing (including both peer task scheduling among edge nodes and cross-layer task scheduling from edge nodes to the cloud), aiming at minimizing average task response time within a resource budget limit. To overcome the challenges of task arrival dynamics, edge node heterogeneity, and the computation-communication delay tradeoff, we propose a Water-filling Based Dynamic Task Scheduling (WiDaS) algorithm. WiDaS dynamically tunes the usage of cloud resources based on the Lyapunov optimization method and efficiently schedules mobile tasks among edge nodes (and the cloud) by exploiting the idea of water filling. Extensive simulations are conducted to evaluate WiDaS under a trace-driven traffic pattern and two mathematical traffic patterns. The results demonstrate that WiDaS delivers the two-fold benefits of efficiency and effectiveness.
- Published
- 2023
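A minimal sketch of the generic water-filling idea that WiDaS builds on (this is the textbook primitive, not the WiDaS algorithm itself): given per-node backlogs and a budget of work that can be offloaded, find the common "water level" to which the most-loaded queues are drained. All names and values are illustrative.

```python
def water_level(backlogs, budget):
    """Return level L such that draining every backlog above L down to L
    removes exactly `budget` units, i.e. sum(max(b - L, 0)) == budget."""
    bs = sorted(backlogs, reverse=True)
    removed = 0.0
    for i, b in enumerate(bs):
        nxt = bs[i + 1] if i + 1 < len(bs) else 0.0
        step = b - nxt                         # lowering the top i+1 queues together
        if removed + (i + 1) * step >= budget:
            return b - (budget - removed) / (i + 1)
        removed += (i + 1) * step
    return 0.0                                 # budget exceeds the total backlog

# Queues of 9, 5, 3 units with 6 units of offloadable work: the two busiest
# queues are drained to a common level of 4 (5 units from the first, 1 from
# the second), while the least-loaded queue is untouched.
level = water_level([9, 5, 3], 6)              # -> 4.0
```

The balancing effect is the point: the heaviest queues are always relieved first, and offloaded work is never wasted on queues already below the water level.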
12. OL-EUA: Online User Allocation for NOMA-Based Mobile Edge Computing
- Author
-
Guangming Cui, Qiang He, Feifei Chen, Hai Jin, Fang Dong, Xiaoyu Xia, and Yun Yang
- Subjects
Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,Transmitter power output ,medicine.disease ,Noma ,Server ,Cellular network ,medicine ,Key (cryptography) ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software ,5G ,Computer network - Abstract
In recent years, mobile edge computing (MEC), as a key technology that facilitates the 5G mobile network, has raised a number of new challenges for app vendors, including the Edge User Allocation (EUA) problem. EUA aims to allocate as many app users as possible in an MEC system to a minimum number of edge servers in the system. In a non-orthogonal multiple access (NOMA)-based MEC system, multiple app users can be allocated to the same subchannel on an edge server through transmit power allocation based on their intra-cell and inter-cell interference. However, allocating excessive app users to the same subchannel may result in severe interference and consequently impact app users' data rates. In addition, in an MEC system, app users join and depart randomly, and thus need to be allocated in an online manner. Existing EUA approaches suffer from poor performance in dynamic real-world NOMA-based MEC systems because they allocate app users in an offline manner and do not consider the complications caused by NOMA. In this paper, we propose OL-EUA, an online approach for solving the dynamic EUA problem in NOMA-based MEC systems. Its performance is theoretically analyzed and experimentally evaluated against a baseline approach and two state-of-the-art approaches on a widely used real-world dataset.
- Published
- 2023
13. Stochastic Digital-Twin Service Demand With Edge Response: An Incentive-Based Congestion Control Approach
- Author
-
Xi Lin, Jun Wu, Mohsen Guizani, Jianhua Li, and Wu Yang
- Subjects
Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,Stochastic process ,Network congestion ,Vehicle dynamics ,Incentive ,Task analysis ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software ,Computer network ,Service demand - Published
- 2023
14. Latency Minimization for Mobile Edge Computing Networks
- Author
-
Vaneet Aggarwal, Chang-Lin Chen, and Christopher G. Brinton
- Subjects
Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,Server ,Resource allocation ,Cache ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Latency (engineering) ,Online algorithm ,business ,Mobile device ,Software ,Computer network - Abstract
The proliferation of data-intensive mobile applications is causing latency to become an issue in mobile edge computing (MEC) systems. In this work, we propose a novel methodology that optimizes communication, computation, and caching configurations in MEC to minimize the mean latency experienced by mobile devices. Transmission and computation processes are modeled using M/G/1 queues to account for service rates and warm-up times. Our caching scheme includes time variables for each file at each edge server in determining when to discard files from storage. We theoretically analyze the latency experienced by mobile devices due to communication, computation, and caching, showing how MEC system latency depends on the offloading decisions of mobile devices, bandwidth and CPU resources, and expiration times of files in the storage of edge servers. Our method for solving the latency minimization problem consists of two main components: iNner cOnVex Approximation (NOVA) to deal with non-convexity in the optimization, and an online algorithm for preventing cache storage violations as new tasks arrive and are serviced by the MEC system. Simulation results show that our algorithm outperforms several baselines in minimizing latency, and verify the benefit of including different resource allocation variables in our optimization.
- Published
- 2023
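The abstract above models transmission and computation as M/G/1 queues; the mean latency of such a queue is given by the standard Pollaczek-Khinchine formula. A minimal sketch (this is the textbook result the modeling relies on, not the paper's full optimization):

```python
def mg1_mean_response(lam: float, es: float, es2: float) -> float:
    """Mean response time of an M/G/1 queue via Pollaczek-Khinchine.
    lam: Poisson arrival rate; es: mean service time E[S];
    es2: second moment of service time E[S^2]."""
    rho = lam * es
    assert rho < 1, "queue must be stable (utilization < 1)"
    mean_wait = lam * es2 / (2 * (1 - rho))    # P-K mean waiting time in queue
    return es + mean_wait                      # response = service + waiting

# Sanity check against the M/M/1 special case: exponential service with rate mu
# has E[S] = 1/mu and E[S^2] = 2/mu^2, giving response time 1/(mu - lam).
mu, lam = 2.0, 1.0
mg1_mean_response(lam, 1 / mu, 2 / mu ** 2)    # -> 1.0, matching 1/(mu - lam)
```

The formula makes the paper's tradeoff explicit: latency grows with both utilization (through the 1/(1 - rho) term) and service-time variability (through E[S^2]), which is exactly what joint bandwidth/CPU allocation manipulates.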
15. An adaptive radio link protocol with enhanced data rates for GSM evolution
- Author
-
van Nobelen, R., primary, Seshadri, N., additional, Whitehead, J., additional, and Timiri, S., additional
- Published
- 1999
- Full Text
- View/download PDF
16. Camouflaged people detection based on a semi-supervised search identification network
- Author
-
Yang Liu, Cong-qing Wang, and Yong-jun Zhou
- Subjects
Training set ,business.industry ,Computer science ,Mechanical Engineering ,Metals and Alloys ,Computational Mechanics ,Object detection ,Identification (information) ,Camouflage ,Ceramics and Composites ,Computer vision ,Artificial intelligence ,Enhanced Data Rates for GSM Evolution ,business - Abstract
Automated detection of military personnel from images captured in different environments plays an important role in accurately completing military missions. With equipment gradually moving towards intelligence, unmanned aerial vehicles (UAVs) will be widely used for integrated reconnaissance/attack in the future. The lightweight and compact design of a small UAV allows it to travel through dense forests and other environments and capture images with its convenient mobility. However, camouflage is designed to blend in with the surroundings, which greatly reduces the probability of the target being discovered. Moreover, the lack of training data for camouflaged people detection inhibits the training of a deep model. To address these problems, a novel semi-supervised camouflaged military people detection network is proposed to automatically detect targets from images. In this paper, the camouflaged object detection dataset (COD10K) is first supplemented according to our mission requirements; then edge attention is utilized to enhance the boundaries based on a search identification network. Further, a semi-supervised learning strategy is presented to take advantage of unlabeled data, which can alleviate insufficient data and improve detection accuracy. Experiments demonstrate that the proposed semi-supervised search identification network (Semi-SINet) performs well in camouflaged people detection compared with other object detection methods.
- Published
- 2023
17. Directed-Graph-Learning-Based Diagnosis of Multiple Faults for High Speed Train With Switched Dynamics
- Author
-
Kunpeng Zhang, Hui Yang, Bin Jiang, and Fuyang Chen
- Subjects
Lyapunov function ,Computer science ,Estimator ,Topology (electrical circuits) ,Directed graph ,Fault (power engineering) ,Computer Science Applications ,Power (physics) ,Human-Computer Interaction ,symbols.namesake ,Control and Systems Engineering ,Control theory ,Asynchronous communication ,symbols ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Computer Science::Distributed, Parallel, and Cluster Computing ,Software ,Information Systems - Abstract
This article addresses the distributed multiple fault isolation, modeling, and closed-loop fault estimation under asynchronous switching for a high speed train (HST) with switched dynamics, which comprises traction, coasting, and braking. First, directed-graph-quantum-learning-based multi-agent system (MAS) classifiers are introduced to characterize the joint effects of multiple faults. Some sufficient conditions are derived under the condition that the multiple fault topology contains a directed spanning tree and a cycle edge, and these conditions guarantee that the multiple fault isolation problem can be solved with randomized learning techniques. Then, single-integrator agents are employed to capture the time-varying topology of multiple fault modeling, in which edge agreement and a persistence condition are used to guarantee asymptotic consensus. After that, a novel robust fault estimation design, along with a switched Lyapunov function and average dwell time, is proposed for possible power actuator faults subject to asynchronous switching and electromagnetic interference. In addition, switched estimators are designed such that the closed-loop system is asymptotically stable. A multiple fault isolation and estimation case is investigated to validate the application of this methodology.
- Published
- 2023
18. Privacy-Preserving Microservices in Industrial Internet-of-Things-Driven Smart Applications
- Author
-
Neda Bugshan, Mohammad Saidur Rahman, Nour Moustafa, and Ibrahim Khalil
- Subjects
Service quality ,Radial basis function network ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Cloud computing ,Microservices ,Computer Science Applications ,Hardware and Architecture ,Analytics ,Signal Processing ,Differential privacy ,Data Protection Act 1998 ,Enhanced Data Rates for GSM Evolution ,business ,Information Systems - Abstract
Machine Learning (ML) algorithms can effectively perform analytics and inferences for building smart applications, such as early detection of diseases in the Industrial Internet of Things (IIoT) and smart healthcare systems. The main components of ML, including the training and testing phases, can be decomposed into microservices to improve service quality, along with fast implementation and integration with edge and cloud services. However, the execution of ML in an edge-cloud environment introduces privacy risks to data owners (e.g., patients). In this paper, we present a privacy-preserving ML framework by leveraging microservice technology for safeguarding healthcare IIoT systems. More specifically, we develop a microservice-based distributed privacy-preserving technique using Differential Privacy (DP) and a Radial Basis Function Network (RBFN) to balance privacy protection and model performance in edge networks. We conduct extensive experiments to evaluate the performance of the proposed technique. The results reveal that DP has a significant influence on the model's performance: the model achieves more than 90% accuracy with an epsilon value over 0.4, enhancing data protection and analytics through the implementation of microservices.
- Published
- 2023
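A minimal sketch of the Laplace mechanism that underlies epsilon-DP releases like the one discussed above (the abstract's DP + RBFN training pipeline is not reproduced; this only shows how epsilon trades privacy for accuracy on a single statistic). The bounds and values are illustrative assumptions.

```python
import numpy as np

def private_mean(values: np.ndarray, lo: float, hi: float,
                 epsilon: float, rng: np.random.Generator) -> float:
    """Release the mean of bounded values under epsilon-DP via the Laplace
    mechanism. For n values clipped to [lo, hi], one record changes the mean
    by at most (hi - lo) / n, so that is the sensitivity."""
    v = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(v)
    noise = rng.laplace(0.0, sensitivity / epsilon)   # scale = sensitivity / epsilon
    return float(v.mean() + noise)

rng = np.random.default_rng(42)
data = np.array([0.2, 0.5, 0.9, 0.4])
# Smaller epsilon -> larger noise scale -> stronger privacy, noisier answer.
estimate = private_mean(data, 0.0, 1.0, epsilon=0.4, rng=rng)
```

The epsilon = 0.4 knee reported in the abstract reflects this same scale = sensitivity/epsilon tradeoff: below it, the injected noise starts to dominate the signal the model learns from.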
19. MR-DRO: A Fast and Efficient Task Offloading Algorithm in Heterogeneous Edge/Cloud Computing Environments
- Author
-
Ruidong Li, Huaming Wu, Nianfu Wang, Ziru Zhang, and Chaogang Tang
- Subjects
Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,Heuristic (computer science) ,Cloud computing ,Computer Science Applications ,Task (computing) ,Software portability ,Hardware and Architecture ,Signal Processing ,Reinforcement learning ,Enhanced Data Rates for GSM Evolution ,business ,Algorithm ,Mobile device ,Information Systems - Abstract
With the rapid development of the Internet of Things (IoT) and next-generation communication technologies, resource-constrained mobile devices fail to meet the demands of resource-hungry and compute-intensive applications. To cope with this challenge, with the assistance of Mobile Edge Computing (MEC), offloading complex tasks from mobile devices to edge cloud servers or central cloud servers can reduce the computational burden of devices and improve the efficiency of task processing. However, it is difficult to obtain optimal offloading decisions by conventional heuristic optimization methods, because the decision-making problem is usually NP-hard. In addition, intelligent decision-making methods have their own shortcomings, e.g., a lack of training samples and poor migration ability across different MEC environments. To this end, we propose a novel offloading algorithm named MR-DRO, consisting of a Meta-Reinforcement Learning (meta-RL) model, which improves the migration ability of the whole model, and a Deep Reinforcement Learning (DRL) model, which combines multiple parallel Deep Neural Networks (DNNs) to learn from historical task offloading scenarios. Simulation results demonstrate that our approach can effectively and efficiently generate near-optimal offloading decisions in IoT environments with edge and cloud collaboration, which further improves computational performance and offers strong portability when making offloading decisions.
- Published
- 2023
20. AI-Enabled IIoT for Live Smart City Event Monitoring
- Author
-
Nabil Alrajeh, Abdur Rahman, Ahmed Ghoneim, Ahmed J. Showail, and M. Shamim Hossain
- Subjects
Event monitoring ,Computer Networks and Communications ,business.industry ,Computer science ,Cloud computing ,Crowdsourcing ,Data science ,Computer Science Applications ,Hardware and Architecture ,Analytics ,Smart city ,Signal Processing ,Key (cryptography) ,Enhanced Data Rates for GSM Evolution ,business ,Internet of Things ,Information Systems - Abstract
Recent advancements in Industrial IoT (IIoT) have revolutionized modern urbanization and smart cities. While IIoT data contains rich events and objects of interest, processing a massive amount of IIoT data and making predictions in real time is challenging. Recent advancements in AI allow processing such a massive amount of IIoT data and generating insights for further decision-making processes. In this paper, we propose several key aspects of AI-enabled IIoT data for smart city monitoring. First, we combine a human-intelligence-enabled crowdsourcing application with an AI-enabled IIoT framework to capture events and objects from IIoT data in real time. Second, we combine multiple AI algorithms that can run on distributed edge and cloud nodes to automatically categorize the captured events and objects and generate analytics, reports, and alerts from the IIoT data in real time. The results can be utilized in two scenarios. In the first scenario, the smart city authority can authenticate the AI-processed events and assign them to the appropriate authority for managing the events. In the second scenario, the AI algorithms are allowed to interact with humans or the IIoT for further processing. Finally, we present the implementation details of the scenarios mentioned above and the test results. The test results show that the framework has the potential to be deployed within a smart city.
- Published
- 2023
21. An Adaptive Mechanism for Dynamically Collaborative Computing Power and Task Scheduling in Edge Environment
- Author
-
Zhihui Lu, Xin Du, Patrick C. K. Hung, Jie Wu, Lulu Chen, and Yangchuan Xu
- Subjects
Schedule ,Optimization problem ,Computer Networks and Communications ,Computer science ,business.industry ,Service provider ,Computer Science Applications ,Task (project management) ,Scheduling (computing) ,Hardware and Architecture ,Signal Processing ,Enhanced Data Rates for GSM Evolution ,Cache ,business ,Edge computing ,Information Systems ,Computer network - Abstract
Edge computing can provide high-bandwidth and low-latency service for big data tasks by leveraging the edge side's computing, storage, and network resources. With the development of microservice and Docker technology, service providers can flexibly and dynamically cache microservices at the edge side to respond efficiently with limited resources. Automatically caching needed services on the nearest edge nodes and dynamically scheduling users' requests allow computing power and software services to follow the users and provide continuous service. However, achieving this goal requires overcoming many challenges, such as the significant fluctuation of user devices' requests at the edge side and the lack of collaboration among edge nodes. In this paper, dynamic computing power scheduling and collaborative task scheduling among edge nodes are comprehensively developed. The problem is formulated as a multi-objective optimization problem that sequentially minimizes the deadline missing rate of requests and the average task completion time. We propose an adaptive mechanism for dynamically collaborative computing power and task scheduling (ADCS) in the edge environment to solve this problem. It adopts a greedy decision method to schedule computing tasks to meet their deadline requirements. At the same time, it uses a best-fit method to adjust computing resources according to changes in users' requests. The simulation results show that ADCS can decrease the deadline missing rate and reduce the average completion time. Compared with DSR and CoDSR, the deadline missing rate is reduced by 59.91% and 19.95%, respectively, and the average completion time is decreased by 37.87% and 6.71%.
- Published
- 2023
22. Intelligent Intrusion Detection for Internet of Things Security: A Deep Convolutional Generative Adversarial Network-Enabled Approach
- Author
-
Yixuan Wu, Zhaolong Ning, Shupeng Wang, Shengtao Li, and Laisen Nie
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Feature extraction ,Big data ,Feature selection ,Intrusion detection system ,Convolutional neural network ,Computer Science Applications ,Hardware and Architecture ,Signal Processing ,Enhanced Data Rates for GSM Evolution ,Latency (engineering) ,business ,Edge computing ,Information Systems - Abstract
With the rapid advance of the Internet of Things (IoT), it is difficult for cloud-centric computing to meet the requirements of low latency and ease of use. As an open and distributed system, edge computing integrates computing, networking, storage, and applications, and provides intelligent services at the edge of an IoT. The edge network is composed of various wireless and wired networks, and the computing and storage resources of edge nodes are limited. These conditions expose the edge network to a variety of cyber attacks. Additionally, it is difficult for an IoT edge node to support large-scale network data collection and detection for IoT security. Although big-data-enabled intrusion detection algorithms can ensure the high accuracy of intrusion detection systems, it is demanding for resource-limited edge nodes to implement those algorithms in IoTs. Motivated by these challenges, we propose an intelligent intrusion detection algorithm implemented by big data mining based on fuzzy rough sets, a Generative Adversarial Network (GAN), and a Convolutional Neural Network (CNN). In our method, we first propose a fuzzy rough set-based algorithm to perform feature selection on big data from IoTs. Then, we take advantage of the efficient feature-extraction capability of the CNN to implement intrusion detection on the selected features. Furthermore, by combining the CNN and GAN, we propose an intelligent algorithm to realize intrusion detection in a variety of scenarios. Finally, the proposed method is compared with existing methods for evaluation. Simulation results show that our method achieves up to 4% higher accuracy than existing methods.
- Published
- 2023
23. Adaptive and Priority-Based Resource Allocation for Efficient Resources Utilization in Mobile-Edge Computing
- Author
-
Mamoun Alazab, Imran Razzak, Low Tang Jung, and Zubair Sharif
- Subjects
Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Cloud computing ,Computer Science Applications ,Task (computing) ,Resource (project management) ,Hardware and Architecture ,Signal Processing ,Resource allocation ,Resource management ,Enhanced Data Rates for GSM Evolution ,business ,Edge computing ,Information Systems - Abstract
Edge computing (EC) offers cloud-like services at the edge of mobile networks to support delay-sensitive and computation-intensive applications, meeting the demands of rapidly increasing mobile devices and other IoT devices. EC is constrained by limited resources, so its efficacy greatly depends on effective and efficient resource allocation that provides optimal resource utilization. With this in mind, this paper presents an adaptive resource allocation mechanism, abbreviated A-PBRA, for effective resource utilization in the EC paradigm. To realize optimal utilization, the available resources are allocated dynamically (adaptively) by considering the nature of the incoming requests. The proposed scheme adapts to the resource demands and priorities of the incoming requests: each received request is identified as either priority-based or normal, and is then processed under one of three possibilities. The available resources are thus allocated according to the priorities of the incoming requests to satisfy their constraints. The proposed mechanism adapts to a large number of incoming requests while optimizing the utilization of the limited resources at the edge node. Extensive simulations were performed with iFogSim to evaluate the performance of the proposed method, with critical comparisons against closely related algorithms and techniques, i.e., NBIHA and CORA-GT. The simulation results show that the proposed scheme performs better in terms of resource utilization, average response time, task execution time, and energy consumption.
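A priority-then-capacity allocation of this kind can be sketched as follows. The two-class policy, request fields, and capacity model are illustrative assumptions, not A-PBRA's actual rules:

```python
# Minimal sketch of priority-first resource allocation: priority requests
# are admitted before normal ones, subject to remaining edge capacity.
import heapq

def allocate(requests, capacity):
    """requests: (priority, name, demand); lower priority value = more urgent.
    Returns (granted names in service order, remaining capacity)."""
    heap = list(requests)
    heapq.heapify(heap)  # orders by priority, breaking ties by name
    granted = []
    while heap:
        priority, name, demand = heapq.heappop(heap)
        if demand <= capacity:  # admit only if the demand still fits
            capacity -= demand
            granted.append(name)
    return granted, capacity

reqs = [(1, "emergency", 4), (2, "video", 5), (2, "batch", 3), (1, "health", 2)]
granted, left = allocate(reqs, 10)
```

Here the low-priority "video" request is rejected once the higher-priority requests have consumed most of the capacity.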
- Published
- 2023
24. Joint Management of Compute and Radio Resources in Mobile Edge Computing: A Market Equilibrium Approach
- Author
-
Eugenio Moro and Ilario Filippini
- Subjects
Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Mobile edge computing ,Access network ,Market Model ,Computer Networks and Communications ,business.industry ,Computer science ,Resource management ,Distributed computing ,Computational modeling ,Cloud computing ,Service provider ,Resource Allocation ,Computer Science - Networking and Internet Architecture ,Mobile Edge Computing ,Game Theory ,Network Slicing ,Resource allocation ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software ,Edge computing - Abstract
Edge computing has recently been introduced as a way to bring computational capabilities closer to the end users of modern network-based services, in order to support existing and future delay-sensitive applications by effectively addressing the high propagation delay that affects cloud computing. However, the problem of efficiently and fairly managing the system resources presents particular challenges due to the limited capacity of both edge nodes and wireless access networks, as well as the heterogeneity of resources and service requirements. To this end, we propose a techno-economic market where service providers act as buyers, securing both radio and computing resources for the execution of their associated end users' jobs while constrained by a budget limit. We design an allocation mechanism that employs convex programming to find the unique market equilibrium point that maximizes fairness while ensuring that every buyer receives their preferred resource bundle. Additionally, we derive theoretical properties confirming that the market equilibrium approach strikes a balance between fairness and efficiency. We also propose alternative allocation mechanisms and compare them with the market-based mechanism. Finally, we conduct simulations to numerically analyze and compare the performance of the mechanisms and confirm the theoretical properties of the market model. (Comment: corrected typos and figure orientation; in IEEE Transactions on Mobile Computing.)
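For the special case of a single divisible resource and linear utilities, a budget-constrained market equilibrium reduces to a budget-proportional allocation; a toy sketch (the single-resource simplification and the buyer names are assumptions, far simpler than the paper's joint radio/compute market):

```python
# Toy market-equilibrium sketch: one divisible resource, linear utilities.
# At equilibrium each buyer spends its entire budget, so the market-clearing
# price is total_budget / capacity and allocations are budget-proportional.
def equilibrium(budgets, capacity):
    price = sum(budgets.values()) / capacity
    return price, {buyer: budget / price for buyer, budget in budgets.items()}

price, alloc = equilibrium({"provider_a": 30.0, "provider_b": 10.0}, capacity=8.0)
```

The allocation exhausts the capacity and gives each provider a share proportional to its budget, which is the fairness property the market mechanism generalizes to bundles of heterogeneous resources.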
- Published
- 2023
25. Service Deployment Strategy for Predictive Analysis of FinTech IoT Applications in Edge Networks
- Author
-
Mohammad Ayoub Khan, Mainak Adhikari, Venki Balasubramanian, Ambigavathi Munusamy, Varun G. Menon, Satish Narayana Srirama, and Danda B. Rawat
- Subjects
Service (systems architecture) ,Computer Networks and Communications ,business.industry ,Computer science ,Distributed computing ,Computer Science Applications ,FinTech ,Support vector machine ,Task (computing) ,Hardware and Architecture ,Software deployment ,Signal Processing ,Customer satisfaction ,Enhanced Data Rates for GSM Evolution ,business ,Baseline (configuration management) ,Information Systems - Abstract
The seamless integration of sensors and smart communication technologies has led to the development of various supporting systems for Financial Technology (FinTech). The emergence of the Next-Generation Internet of Things (Nx-IoT) for FinTech applications enhances the customer satisfaction ratio. The main research challenge for FinTech applications is to analyse the incoming tasks at the edge of the networks with minimal delay and power consumption while increasing prediction accuracy. Motivated by this challenge, in this paper we develop a rank-based service deployment strategy and an artificial intelligence technique for financial data analysis at edge networks. First, a risk-based task classification strategy is developed to classify the incoming financial tasks and give importance to risk-based tasks so as to meet users' satisfaction ratio. In addition, an efficient service deployment strategy based on Hall's theorem assigns the ranked financial data to suitable edge or cloud servers with minimum delay and power consumption. Finally, the standard support vector machine (SVM) algorithm is used at the edge networks to analyse the financial data with high accuracy. The experimental results demonstrate the effectiveness of the proposed strategy and the SVM model at edge networks over the baseline algorithms and classification models, respectively.
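Hall's theorem guarantees a perfect task-to-server assignment exactly when every subset of tasks has at least as many eligible servers; in practice such an assignment is found with a bipartite matching algorithm. A minimal sketch using Kuhn's augmenting-path method (the task/server names and eligibility edges are illustrative):

```python
# Sketch of assigning tasks to servers as bipartite matching (Kuhn's
# augmenting-path algorithm). A perfect matching exists exactly when
# Hall's condition holds for the eligibility graph.
def max_matching(edges, tasks):
    match = {}  # server -> task

    def try_assign(t, seen):
        for s in edges.get(t, []):
            if s in seen:
                continue
            seen.add(s)
            # take a free server, or displace its current task recursively
            if s not in match or try_assign(match[s], seen):
                match[s] = t
                return True
        return False

    return sum(try_assign(t, set()) for t in tasks), match

edges = {"t1": ["s1", "s2"], "t2": ["s1"], "t3": ["s2", "s3"]}
size, match = max_matching(edges, ["t1", "t2", "t3"])
```

With these eligibility edges all three tasks can be served, even though "t2" is only eligible for "s1", because "t1" is displaced onto "s2".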
- Published
- 2023
26. Incentive-Driven Proactive Application Deployment and Pricing on Distributed Edges
- Author
-
Albert Y. Zomaya, Yishan Chen, Gong Chen, Jianwei Yin, Shouling Ji, and Shuiguang Deng
- Subjects
Service (systems architecture) ,Computer Networks and Communications ,business.industry ,Computer science ,Mobile computing ,Low latency (capital markets) ,Incentive ,Software deployment ,Server ,Stackelberg competition ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software ,Computer network - Abstract
Applications deployed on edge servers improve the user experience compared to deployments on cloud servers. Existing works usually assume that a central scheduler helps make decisions, but such schedulers are often inefficient, inaccurate, or time-consuming. In this paper, we present a proactive application deployment system consisting of three modules (incentive, profit, and latency). Built on the architecture of a fully distributed edge network, our system includes SELL, a Spontaneous Edge depLoyment aLgorithm, in the incentive module. SELL lets edge servers compete with each other in a two-stage Stackelberg game to win deployment rights, and the winners are paid for their deployment efforts. The other two modules recursively adjust service prices and deployment intentions in view of their own profits. Simulations of the proactive edge application deployment system demonstrate that SELL can help an application provider find appropriate edge servers to deploy applications while maximizing the profits of both parties at low latency.
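The leader-follower structure of a two-stage Stackelberg game can be shown with a toy example: the leader posts a price anticipating the follower's best response. The utility functions below are illustrative assumptions, not the payoffs used by SELL:

```python
# Toy two-stage Stackelberg sketch. The follower's quadratic utility
# u(q) = q - q^2/2 - p*q yields best response q*(p) = 1 - p, so the
# leader's revenue p*(1-p) peaks at p = 0.5.
def follower_best_response(p):
    return max(0.0, 1.0 - p)  # argmax_q of q - q^2/2 - p*q

def leader_best_price(grid_steps=1000):
    best_p, best_rev = 0.0, 0.0
    for i in range(grid_steps + 1):  # grid search over candidate prices
        p = i / grid_steps
        rev = p * follower_best_response(p)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev

p_star, rev_star = leader_best_price()
```

The key Stackelberg feature is that the leader optimizes over the follower's *reaction function*, not over a fixed demand.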
- Published
- 2023
27. Adaptive Asynchronous Federated Learning in Resource-Constrained Edge Computing
- Author
-
Hongli Xu, Chen Qian, Jianchun Liu, Yang Xu, He Huang, Jinyang Huang, and Lun Wang
- Subjects
Resource (project management) ,Computer Networks and Communications ,Asynchronous communication ,Computer science ,Distributed computing ,Server ,Reinforcement learning ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Software ,Edge computing ,Data modeling ,Task (project management) - Abstract
Federated learning (FL) has been widely adopted to train machine learning models over massive data in edge computing. However, FL faces critical challenges in edge computing, e.g., data imbalance, edge dynamics, and resource constraints. Existing FL solutions cannot cope well with data imbalance or edge dynamics, and may incur high resource cost. In this paper, we propose an adaptive asynchronous federated learning (AAFL) mechanism. To deal with edge dynamics, the parameter server aggregates locally updated models from only a certain fraction α of all edge nodes in each epoch. Moreover, the system can intelligently vary the number of locally updated models used for global model aggregation in different epochs according to network conditions. We then propose experience-driven algorithms based on deep reinforcement learning (DRL) to adaptively determine the optimal value of α in each epoch for two cases of AAFL, a single learning task and multiple learning tasks, so as to reduce training completion time under resource constraints. Extensive experiments on classical models and datasets show the high effectiveness of the proposed algorithms. Specifically, AAFL can reduce completion time by about 55% and improve learning accuracy by 18% under resource constraints, compared with state-of-the-art solutions.
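The core aggregation step, taking only the first α-fraction of client updates per epoch, can be sketched as below. The uniform weighting and arrival-order selection are illustrative assumptions:

```python
# Sketch of asynchronous partial aggregation: average only the earliest
# alpha-fraction of client model updates in an epoch; stragglers are skipped.
def aggregate_first_fraction(updates, alpha):
    """updates: list of model vectors in arrival order."""
    k = max(1, int(alpha * len(updates)))
    chosen = updates[:k]  # the k earliest arrivals
    dim = len(chosen[0])
    return [sum(u[i] for u in chosen) / k for i in range(dim)]

arrivals = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
global_model = aggregate_first_fraction(arrivals, alpha=0.5)
```

In AAFL the value of α itself is chosen per epoch by a DRL agent; this sketch only shows what a fixed α does to one aggregation round.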
- Published
- 2023
28. Joint Optimization Across Timescales: Resource Placement and Task Dispatching in Edge Clouds
- Author
-
Xinliang Wei, Yu Wang, Dazhao Cheng, and A B M Mohaimenur Rahman
- Subjects
Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Cloud computing ,Computer Science Applications ,Task (computing) ,Resource (project management) ,Hardware and Architecture ,Server ,Reinforcement learning ,Enhanced Data Rates for GSM Evolution ,business ,Software ,Edge computing ,Information Systems - Abstract
The proliferation of Internet of Things (IoT) data and innovative mobile services has promoted an increasing need for low-latency access to resources such as data and computing services. Mobile edge computing has become an effective computing paradigm to meet this need by placing resources and dispatching tasks at the edge clouds near mobile users. The key challenge of such a solution is how to efficiently place resources and dispatch tasks in the edge clouds to meet the QoS of mobile users or maximize the platform's utility. In this paper, we study the joint optimization problem of resource placement and task dispatching in mobile edge clouds across multiple timescales under dynamic edge server status. We first propose a two-stage iterative algorithm that solves the joint optimization problem on different timescales and can handle the varying dynamics of edge resources and tasks. We then propose a reinforcement learning (RL) based algorithm that leverages the learning capability of the Deep Deterministic Policy Gradient (DDPG) technique to tackle network variation and dynamics as well. The results from our trace-driven simulations demonstrate that both proposed approaches can effectively place resources and dispatch tasks across two timescales to maximize the total utility of all scheduled tasks.
- Published
- 2023
29. Edge-Assisted Short Video Sharing With Guaranteed Quality-of-Experience
- Author
-
Peng Li, Fahao Chen, Song Guo, and Deze Zeng
- Subjects
Multimedia ,Computer Networks and Communications ,business.industry ,Computer science ,media_common.quotation_subject ,Cloud computing ,computer.software_genre ,Computer Science Applications ,Hardware and Architecture ,Server ,The Internet ,Quality (business) ,Enhanced Data Rates for GSM Evolution ,Cache ,Quality of experience ,Online algorithm ,business ,computer ,Software ,Information Systems ,media_common - Abstract
As a rising star among social apps, short video apps, e.g., TikTok, have attracted a large number of mobile users by providing fresh, short video content that closely matches their watching preferences. Meanwhile, the booming growth of short video apps imposes new technical challenges on the existing computation and communication infrastructure. Traditional solutions maintain all videos on the cloud and stream them to users via content delivery networks or the Internet; however, they incur huge network traffic and long delays that seriously affect users' watching experience. In this paper, we propose an edge-assisted short video sharing framework that addresses these challenges by caching videos highly preferred by users at edge servers that users can access via high-speed network connections. Since edge servers have limited computation and storage resources, we design an online algorithm with a provable approximation ratio to decide which videos should be cached at edge servers, without knowledge of future changes in network quality and watching preferences. Furthermore, we improve performance by jointly considering video fetching and user-edge association. Extensive simulations are conducted to evaluate the proposed algorithms under various system settings, and the results show that our proposals outperform existing schemes.
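The flavor of an online caching decision, choosing what to keep per request without future knowledge, can be shown with a generic LFU-style sketch. This is not the paper's approximation algorithm; the eviction rule and request trace are illustrative:

```python
# Generic online caching sketch (LFU-style): on each request, admit the
# video if there is room, or evict the least-frequently-requested cached
# video only when the newcomer is strictly hotter.
from collections import Counter

def serve(requests, capacity):
    cache, hits, freq = set(), 0, Counter()
    for video in requests:
        freq[video] += 1
        if video in cache:
            hits += 1
        elif len(cache) < capacity:
            cache.add(video)
        else:
            coldest = min(cache, key=lambda v: freq[v])
            if freq[video] > freq[coldest]:
                cache.discard(coldest)
                cache.add(video)
    return hits, cache

hits, cache = serve(["a", "b", "a", "c", "a", "b", "a"], capacity=2)
```

The one-off request for "c" never displaces the popular videos, which is the behavior an edge cache wants under skewed short-video popularity.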
- Published
- 2023
30. Learning-Based Edge Sensing and Control Co-Design for Industrial Cyber–Physical System
- Author
-
Cailian Chen, Xinping Guan, Shanying Zhu, Jianping He, and Zhiduo Ji
- Subjects
Control and Systems Engineering ,Computer science ,Distributed computing ,Control (management) ,Key (cryptography) ,Cyber-physical system ,Deep integration ,Enhanced Data Rates for GSM Evolution ,State (computer science) ,Electrical and Electronic Engineering ,Bridge (nautical) ,Edge computing - Abstract
The new generation of edge-computing-supported industrial cyber-physical systems (ICPS) promotes the deep integration of sensing and control, and the unknown model is one of the key challenges in characterizing their interactions. Most existing works devote their efforts to overcoming this challenge for a single aspect, either sensing or control; however, the industrial revolution puts forward higher requirements on overall production performance. To solve this problem, we propose a novel framework for learning-based edge sensing and control co-design. Specifically, the model learning error is first analyzed to bound the actual control performance. Then, the bound is further linked to the sensing design through the bridge of relaxed assumptions of a nonzero initial state and an unknown order. Besides, the
- Published
- 2023
31. Design, development of MQ TELEMETRY TRANSPORT-sensor network interface protocol for the sensor device and IoT edge gateway
- Author
-
J.B. Seventline and M. Obula Reddy
- Subjects
010302 applied physics ,business.industry ,Computer science ,02 engineering and technology ,General Medicine ,021001 nanoscience & nanotechnology ,01 natural sciences ,Signaling protocol ,Default gateway ,Server ,Telemetry ,0103 physical sciences ,Bandwidth (computing) ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,Enhanced Data Rates for GSM Evolution ,0210 nano-technology ,business ,Wireless sensor network ,Protocol (object-oriented programming) ,Computer network - Abstract
The Internet of Things is gaining growing interest for various applications such as smart cities, industrial health monitoring, and smart homes. IoT systems consist of sensor devices with communication capabilities, actuators, edge gateways, back-end servers, and IoT applications. The ultimate goal of an IoT network is to collect sensor data and reliably transfer it to the back-end server through the edge gateways. The back-end server delivers the sensor data to the IoT applications, which process it and apply the appropriate decisions. Reliable transfer of sensor data to the back-end server is essential given the characteristics of wireless sensor networks and the low computation power and bandwidth of sensor devices. The signaling protocol used in an IoT network deployment plays a crucial role in communicating sensor data to the IoT application. Various application messaging protocols are currently available, such as MQ TELEMETRY TRANSPORT (MQTT), COAP, and HTTP. These protocols were designed and developed assuming rich computation power and bandwidth, and they are not energy efficient for sensor devices. The MQ TELEMETRY TRANSPORT-SN protocol is specified for sensor devices on wireless sensor networks. In this article, an MQ TELEMETRY TRANSPORT-sensor network interface protocol was designed, built, and validated according to the MQ TELEMETRY TRANSPORT-SN specification.
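MQTT-SN keeps frames small for constrained sensor links by replacing topic strings with 16-bit topic IDs. A minimal encoding sketch of a PUBLISH-style frame, assuming the MQTT-SN 1.2 short-length layout (Length, MsgType 0x0C, Flags, TopicId, MsgId, Data); field values are illustrative and a real gateway needs the full specification:

```python
# Minimal sketch of an MQTT-SN-style PUBLISH frame encoder/decoder.
# Layout assumed: Length(1) | MsgType(1)=0x0C | Flags(1) | TopicId(2) |
# MsgId(2) | Data, all in network byte order.
import struct

PUBLISH = 0x0C

def encode_publish(topic_id, msg_id, payload, flags=0x00):
    length = 7 + len(payload)  # 7-byte header in the short-length form
    return struct.pack("!BBBHH", length, PUBLISH, flags, topic_id, msg_id) + payload

def decode_publish(frame):
    length, msg_type, flags, topic_id, msg_id = struct.unpack("!BBBHH", frame[:7])
    assert msg_type == PUBLISH and length == len(frame)
    return topic_id, msg_id, frame[7:]

frame = encode_publish(topic_id=1, msg_id=42, payload=b"21.5C")
topic, mid, data = decode_publish(frame)
```

A 5-byte reading travels in a 12-byte frame, which is the kind of overhead budget that makes MQTT-SN attractive over plain MQTT for battery-powered sensors.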
- Published
- 2023
32. Infrastructure-efficient Virtual-Machine Placement and Workload Assignment in Cooperative Edge-Cloud Computing Over Backhaul Networks
- Author
-
Biswanath Mukherjee, Abhishek Gupta, Massimo Tornatore, Yajie Li, Wei Wang, Yongli Zhao, Haoran Chen, and Jie Zhang
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Heuristic (computer science) ,Distributed computing ,Workload ,Context (language use) ,Cloud computing ,computer.software_genre ,Computer Science Applications ,Backhaul (telecommunications) ,Hardware and Architecture ,Virtual machine ,Enhanced Data Rates for GSM Evolution ,business ,computer ,Software ,Edge computing ,Information Systems - Abstract
Edge computing provides computing capability in close proximity to users to reduce service latency for end users. To improve the efficiency of edge computing infrastructures, geographically distributed edge datacenters can co-work with each other and with cloud datacenters, forming a new paradigm referred to as cooperative edge-cloud computing. In this context, applications typically run on virtual machines (VMs) that can be replicated at multiple sites, so user traffic can be served at any site where a corresponding VM resides. For many applications, latency is a critical performance parameter. In this work, taking application latencies as the primary constraint, we model the problem of VM placement and workload assignment as a mixed-integer linear program and develop heuristic algorithms accordingly. The goal is to minimize the consumption of information technology (IT) infrastructure for placing VMs in cooperative edge-cloud computing, while meeting the heterogeneous latency demands of different applications. Preliminary results indicate that edge datacenter resource efficiency can be optimized by proper cross-site VM placement and workload redirection.
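A greedy heuristic for latency-constrained VM placement can be sketched as follows. The site latencies, costs, slot capacities, and tightest-deadline-first ordering are illustrative assumptions, not the paper's MILP formulation:

```python
# Greedy heuristic sketch: place each application's VM on the cheapest
# site that still meets its latency bound, handling the tightest latency
# demands first so edge capacity is spent where it is actually needed.
def place_vms(apps, sites):
    """apps: {name: latency_bound}; sites: {name: (latency, cost, slots)}."""
    placement, slots = {}, {s: sites[s][2] for s in sites}
    for app, bound in sorted(apps.items(), key=lambda kv: kv[1]):
        feasible = [s for s in sites if sites[s][0] <= bound and slots[s] > 0]
        if not feasible:
            placement[app] = None  # no site meets the latency demand
            continue
        best = min(feasible, key=lambda s: sites[s][1])  # cheapest feasible
        slots[best] -= 1
        placement[app] = best
    return placement

sites = {"edge": (5, 3, 1), "metro": (20, 2, 2), "cloud": (80, 1, 9)}
plan = place_vms({"ar": 10, "web": 50, "backup": 100}, sites)
```

Only the latency-critical "ar" workload consumes the scarce edge slot; the tolerant "backup" workload falls through to the cheap cloud site, which is the IT-consumption saving the paper targets.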
- Published
- 2023
33. Latency-Aware Strategies for Deploying Data Stream Processing Applications on Large Cloud-Edge Infrastructure
- Author
-
Laurent Lefèvre, Alexandre da Silva Veith, Marcos Dias De Assuncao, Department of Computer Science [University of Toronto] (DCS), University of Toronto, Ecole de Technologie Supérieure [Montréal] (ETS), Algorithms and Software Architectures for Distributed and HPC Platforms (AVALON), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire de l'Informatique du Parallélisme (LIP), École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Centre National de la Recherche Scientifique (CNRS), École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Laboratoire de l'Informatique du Parallélisme (LIP), and Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
Computer Networks and Communications ,business.industry ,Data stream mining ,Computer science ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Computer Science Applications ,[INFO.INFO-NI]Computer Science [cs]/Networking and Internet Architecture [cs.NI] ,Hardware and Architecture ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,Overhead (computing) ,020201 artificial intelligence & image processing ,The Internet ,Enhanced Data Rates for GSM Evolution ,Latency (engineering) ,business ,Software ,Edge computing ,Information Systems ,Computer network - Abstract
Internet of Things (IoT) applications often require the processing of data streams generated by devices dispersed over a large geographical area. Traditionally, these data streams are forwarded to a distant cloud for processing, resulting in high application end-to-end latency. Recent work explores the combination of resources located in clouds and at the edges of the Internet, called cloud-edge infrastructure, for deploying Data Stream Processing (DSP) applications. Most previous work, however, fails to scale to very large IoT settings. This paper introduces deployment strategies for the placement of DSP applications onto cloud-edge infrastructure. The strategies split an application graph into regions and consider regions with stringent time requirements for edge placement. The proposed Aggregate End-to-End Latency Strategy with Region Patterns and Latency Awareness (AELS+RP+LA) decreases the number of evaluated resources when computing an operator's placement by considering the communication overhead across computing resources. Simulation results show that, unlike the state of the art, AELS+RP+LA scales to environments with more than 100k resources with negligible impact on application end-to-end latency.
- Published
- 2023
34. Robust Task Offloading in Dynamic Edge Computing
- Author
-
Min Chen, Shigang Chen, Hongli Xu, He Huang, and Haibo Wang
- Subjects
Task (computing) ,Mobile edge computing ,Computer Networks and Communications ,Computer science ,Distributed computing ,Server ,Benchmark (computing) ,Task analysis ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Throughput (business) ,Software ,Edge computing - Abstract
Mobile edge computing achieves better application responsiveness by offloading tasks from end devices to edge servers installed in the vicinity. Practical scenarios such as post-disaster rescue and battlefield monitoring make it attractive to use end devices themselves as edge servers. This, however, introduces a new challenge: due to mobility and power limitations, the set of edge servers becomes dynamic. As servers fail, the tasks that run on them fail too. This paper introduces a new dynamic edge computing model and conducts the first study on robust task offloading that is tolerant to h server failures. We propose online primal-dual algorithms that offload tasks as they arrive. We evaluate the performance of our robust task offloading solutions through extensive simulations based on real task sets. The results show that the proposed solutions handle edge dynamics well and achieve near-optimal throughput (above 95%) compared to the optimal offline benchmark algorithm.
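The simplest way to see what "tolerant to h server failures" means is replication: run each task on h+1 distinct servers so any h simultaneous failures leave a live copy. A toy sketch (this replication view and the round-robin placement are illustrative, not the paper's primal-dual scheme):

```python
# Sketch of failure-tolerant offloading via replication: each task is
# placed on h+1 distinct servers, so any h server failures leave at least
# one live copy of every task.
import itertools

def replicate(tasks, servers, h):
    """Round-robin each task onto h+1 distinct servers."""
    ring = itertools.cycle(servers)
    placement = {}
    for t in tasks:
        copies = set()
        while len(copies) < h + 1:
            copies.add(next(ring))
        placement[t] = copies
    return placement

def survives(placement, failed):
    # every task must keep at least one copy outside the failed set
    return all(copies - failed for copies in placement.values())

plan = replicate(["t1", "t2"], ["s1", "s2", "s3"], h=1)
ok = survives(plan, failed={"s1"})
```

The cost of this robustness is the extra capacity consumed by the replicas, which is the trade-off the paper's online algorithms optimize.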
- Published
- 2023
35. A Novel Data Placement and Retrieval Service for Cooperative Edge Clouds
- Author
-
Deke Guo, Ge Wang, Xin Li, Junjie Xie, Honghui Chen, and Chen Qian
- Subjects
Mobile edge computing ,Computer Networks and Communications ,business.industry ,Computer science ,Routing table ,Distributed computing ,Cloud computing ,Computer Science Applications ,Data retrieval ,Hardware and Architecture ,Enhanced Data Rates for GSM Evolution ,Data as a service ,Routing (electronic design automation) ,business ,Software ,Edge computing ,Information Systems - Abstract
Mobile edge computing is a new paradigm in which computing and storage resources are placed at the edge of the Internet. Data placement and retrieval are fundamental services of mobile edge computing when a network of edge clouds collaboratively provides data services. However, existing methods such as distributed hash tables (DHTs) are not sufficient to achieve efficient data placement and retrieval for cooperative edge clouds. This paper presents GRED, a novel data placement and retrieval service for mobile edge computing that is efficient not only in load balance but also in routing path lengths and forwarding table sizes. GRED utilizes programmable switches to support a virtual-space-based DHT with only one overlay hop. Data location can easily be implemented on top of GRED. We implement GRED in a P4 prototype, which provides a simple and efficient solution. Results from theoretical analysis, simulations, and experiments show that GRED can efficiently balance the load of edge clouds and answer data queries quickly due to its low routing stretch.
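The idea behind a one-hop, virtual-space DHT can be sketched with plain consistent hashing: keys and nodes share one ring, so any node can compute a key's owner locally and reach it in a single hop. This is only an illustration; GRED's virtual space and P4 data plane are more involved:

```python
# One-hop DHT sketch: data keys and edge nodes are hashed onto the same
# virtual ring; a key is owned by the first node clockwise from its
# position, so ownership is computable anywhere with no routing state.
import hashlib

RING = 2 ** 16

def ring_pos(name):
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % RING

def owner(key, nodes):
    k = ring_pos(key)
    # first node at or after the key's position, wrapping around the ring
    return min(nodes, key=lambda n: (ring_pos(n) - k) % RING)

nodes = ["edge1", "edge2", "edge3"]
node_for_a = owner("video:a", nodes)
same_again = owner("video:a", nodes)
```

Because every participant computes the same owner deterministically, lookups need no multi-hop overlay routing, which is what keeps the routing stretch low.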
- Published
- 2023
36. Towards Real-Time Video Caching at Edge Servers: A Cost-Aware Deep Q-Learning Solution
- Author
-
Lei Zhang, Yipeng Zhou, Laizhong Cui, Jiangchuan Liu, Erchao Ni, Zhi Wang, and Yuedong Xu
- Subjects
Computer science ,business.industry ,Q-learning ,Context (language use) ,Internet traffic ,Video quality ,Computer Science Applications ,Server ,Signal Processing ,Media Technology ,Hit rate ,Cache ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Computer network - Abstract
Given the rapid growth of user-generated videos, Internet traffic has been heavily dominated by online video streaming. Caching videos on edge servers in close proximity to users has been an effective approach to reduce backbone traffic and request response time, as well as to improve video quality on the user side. Video popularity, however, can be highly dynamic over time, and the cost of cache replacement at edge servers, particularly that related to service interruption during replacement, is not yet well understood. This paper presents a novel lightweight video caching algorithm for edge servers, seeking to optimize the hit rate with real-time decisions and minimal cost. Inspired by recent advances in deep Q-learning, our DQN-based online video caching (DQN-OVC) algorithm makes effective use of the rich and readily available information from users and networks. We decompose the Q-value function as a product of a video value function and an action function, which significantly reduces the state space. We instantiate the action function for cost-aware caching decisions with low complexity, so that the cached videos can be updated continuously and instantly as video popularity changes. We used video traces from Tencent, one of the largest online video providers in China, to evaluate the performance of DQN-OVC and compare it with state-of-the-art solutions. The results demonstrate that DQN-OVC significantly outperforms the baseline algorithms in the edge caching context.
- Published
- 2023
37. Popularity-Based Data Placement With Load Balancing in Edge Computing
- Author
-
Yu Wang and Xinliang Wei
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Cloud computing ,Load balancing (computing) ,Average path length ,Computer Science Applications ,Data access ,Hardware and Architecture ,Server ,Data deduplication ,Enhanced Data Rates for GSM Evolution ,business ,Software ,Edge computing ,Information Systems ,Computer network - Abstract
In recent years, edge computing has become an increasingly popular computing paradigm for real-time data processing and mobile intelligence. Edge computing allows computing at the edge of the network, where data is generated and distributed among nearby edge servers to reduce data access latency and improve data processing efficiency. One of the key challenges in data-intensive edge computing is how to place data at the edge clouds effectively so that data access latency is minimized. In this paper, we study such a data placement problem in edge computing where different data items have diverse popularity. We propose a popularity-based placement method that maps both data items and edge servers to a virtual plane and places or retrieves data based on its virtual coordinates in the plane. We then propose additional placement strategies to handle load balancing among edge servers via either offloading or data duplication. Simulation results show that our proposed strategies efficiently reduce the average path length of data access, and the load-balancing strategies effectively relieve storage pressure at overloaded servers.
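The duplication side of such load balancing can be sketched simply: hotter items get more replicas so their access load spreads over several servers. The replica thresholds and round-robin spreading below are illustrative assumptions, not the paper's virtual-plane scheme:

```python
# Sketch of popularity-aware duplication: one extra replica per 100
# requests (illustrative threshold), replicas spread round-robin so no
# single server holds all the hot items.
def replica_count(popularity, base=1, per=100):
    return base + popularity // per

def place(items, servers):
    """items: {name: popularity}; spread replicas by popularity rank."""
    placement, offset = {}, 0
    for name, pop in sorted(items.items(), key=lambda kv: -kv[1]):
        n = min(replica_count(pop), len(servers))
        placement[name] = [servers[(offset + j) % len(servers)] for j in range(n)]
        offset += 1
    return placement

plan = place({"hot": 250, "warm": 120, "cold": 10}, ["s1", "s2", "s3"])
```

The most popular item ends up on every server while the cold item keeps a single copy, trading storage for shorter average access paths on hot data.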
- Published
- 2023
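The virtual-plane idea in the abstract above can be sketched simply: both data items and edge servers are hashed to deterministic coordinates in a unit square, and a data item is placed on (and later retrieved from) the server nearest to it in that plane, so any node can locate data without a directory. The hash-to-plane mapping below is an illustrative stand-in, not the paper's exact construction.

```python
# Minimal sketch of popularity-agnostic virtual-plane placement: identifiers
# are hashed to (x, y) points, and data goes to the nearest server. The
# SHA-256-based mapping is an assumption for illustration.
import hashlib
import math

def to_plane(key):
    """Map an identifier to a deterministic (x, y) point in the unit square."""
    digest = hashlib.sha256(key.encode()).digest()
    x = int.from_bytes(digest[:8], "big") / 2**64
    y = int.from_bytes(digest[8:16], "big") / 2**64
    return (x, y)

def place(data_id, server_ids):
    """Return the server whose virtual coordinate is closest to the data's."""
    dx, dy = to_plane(data_id)
    return min(server_ids, key=lambda s: math.dist((dx, dy), to_plane(s)))
```

Because the mapping is deterministic, placement and retrieval agree without coordination; the paper's load-balancing extensions (offloading, duplication) would then redirect traffic away from servers whose virtual region attracts too much data.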
38. A new algorithm for detection of nodes failures and enhancement of network coverage and energy usage in wireless sensor networks
- Author
-
T. Mahalakshmi, Srinivasulu Asadi, P. Satyanarayana, Sivaram Rajeyyagari, Saad Alahmari, and R. Sivakami
- Subjects
010302 applied physics ,Atmosphere (unit) ,business.industry ,Wireless network ,Computer science ,02 engineering and technology ,General Medicine ,021001 nanoscience & nanotechnology ,Collision ,01 natural sciences ,law.invention ,Relay ,law ,Default gateway ,0103 physical sciences ,Node (computer science) ,Enhanced Data Rates for GSM Evolution ,0210 nano-technology ,business ,Wireless sensor network ,Computer network - Abstract
Wireless sensor networks (WSNs) are attracting growing attention across a wide range of applications. Important use cases lie in isolated and harsh regions where human involvement is hazardous or infeasible, such as space exploration, battlefield observation, and coastal and border security; such networks are now also needed in many industrial and consumer applications. A WSN is a system of devices, referred to as nodes, that sense the environment and transmit the information gathered from the field over a wireless connection. Data is forwarded, possibly over many hops, to a sink that either uses it locally or is connected to other networks through a gateway. Sensor nodes may be stationary or mobile, and each node may or may not have precise knowledge of its neighborhood. Communication in WSNs deployed under environmental hazards is a key concern: such constraints can affect the performance of the sensors and routing protocols as well as resource utilization, and may lead to node failure, i.e., software/hardware breakdown, security threats, excessive power consumption, etc. It is therefore essential to examine the impact of failures on system performance. In the proposed method, relay nodes serve as backup nodes for the deployed sensor nodes. The method is divided into two phases: an intra-segregation stage and an inter-segregation stage. In the intra-segregation stage, redundant nodes move closer to the segregation boundary, and dispersion gradually enlarges the coverage. The proposed model is evaluated through simulations and its results are compared with existing methods.
- Published
- 2023
39. Research and Markets: Enhanced Data Rates for GSM Evolution (EDGE) - A Comprehensive Global Strategic Business Report
- Subjects
Broadcom Corp. -- Market research -- Reports ,Analog Devices Inc. -- Market research -- Reports ,Bouygues Telecom S.A. -- Market research -- Reports ,Compagnie Financière Alcatel -- Market research -- Reports ,Datang Telecom Technology and Industry Group -- Market research -- Reports ,Telecommunications equipment industry -- Market research -- Reports ,Semiconductor industry -- Reports -- Market research ,GSM (Global System for Mobile Communications) -- Market research -- Reports ,Telecommunications services industry -- Reports -- Market research ,Communications industry -- Reports -- Market research ,Semiconductor industry ,GSM ,Marketing research ,Telecommunications services industry ,Telecommunications equipment industry ,Business ,Business, international - Abstract
DUBLIN -- Research and Markets (http://www.researchandmarkets.com/research/b9b3e8/enhanced_data_rate) has announced the addition of the 'Enhanced Data Rates for GSM Evolution (EDGE) - Global Strategic Business Report' report to their offering. This report [...]
- Published
- 2010
40. Research and Markets: Enhanced Data Rates for GSM Evolution (EDGE) - A Comprehensive Global Strategic Business Report
- Subjects
Broadcom Corp. ,Analog Devices Inc. ,Bouygues Telecom S.A. ,Compagnie Financière Alcatel ,Telecommunications equipment industry ,Semiconductor industry ,Telecommunications services industry ,Communications industry ,Semiconductor industry ,Telecommunications services industry ,Telecommunications equipment industry ,General interest ,News, opinion and commentary - Abstract
Dublin, Jun 30, 2010 (M2 PRESSWIRE via COMTEX) -- Research and Markets (http://www.researchandmarkets.com/research/005c38/enhanced_data_rate) has announced the addition of the 'Enhanced Data Rates for GSM Evolution (EDGE) - Global Strategic Business [...]
- Published
- 2010
41. Enhanced Data Rates for GSM Evolution (EDGE) - A Comprehensive Global Strategic Business Report
- Subjects
Broadcom Corp. ,Analog Devices Inc. ,Bouygues Telecom S.A. ,Compagnie Financière Alcatel ,Datang Telecom Technology and Industry Group ,Telecommunications equipment industry ,Semiconductor industry ,Telecommunications services industry ,Communications industry ,Semiconductor industry ,Telecommunications services industry ,Telecommunications equipment industry ,Business ,Business, international - Abstract
M2 PRESSWIRE-30 June 2010-Research and Markets: Enhanced Data Rates for GSM Evolution (EDGE) - A Comprehensive Global Strategic Business Report(C)1994-2010 M2 COMMUNICATIONS RDATE:30062010 Dublin - Research and Markets (http://www.researchandmarkets.com/research/005c38/enhanced_data_rate) has [...]
- Published
- 2010
42. Reportlinker Adds Global Enhanced Data Rates for GSM Evolution (EDGE) Industry
- Subjects
Semiconductor industry ,Telecommunications services industry ,Wireless telecommunications service ,Telecommunications equipment industry ,Cellular telephone services industry ,Telecommunications equipment industry ,Semiconductor industry ,Telecommunications services industry ,Communications industry ,Broadcom Corp. ,Bouygues Telecom S.A. ,Analog Devices Inc. ,Sony Ericsson Mobile Communications AB ,Nokia Networks ,Datang Telecom Technology and Industry Group ,QUALCOMM Inc. ,America Movil S.A. de C.V. ,Nokia Corp. ,Motorola Solutions Inc. - Abstract
NEW YORK, June 24 /PRNewswire/ -- Reportlinker.com announces that a new market research report is available in its catalogue: http://www.reportlinker.com/p0209278/Global-Enhanced-Data-Rates-for-GSM-Evolution-(EDGE)-Industry.html?utm_source=prnewswire&utm_medium=pr&utm_campaign=prnewswire http://www.reportlinker.com/p0209278/Global-Enhanced-Data-Rates-for-GSM-Evolution-EDGE-Industry.html This report analyzes the Global market for Enhanced Data [...]
- Published
- 2010
43. Efficient Point-in-Polygon Tests by Grids Without the Trouble of Tuning the Grid Resolutions
- Author
-
Wencheng Wang and Shengchun Wang
- Subjects
Computer science ,Grid ,Computer Graphics and Computer-Aided Design ,Point in polygon ,Parallel processing (DSP implementation) ,Intersection ,Signal Processing ,Preprocessor ,Point (geometry) ,Computer Vision and Pattern Recognition ,Enhanced Data Rates for GSM Evolution ,Algorithm ,Time complexity ,Computer Science::Databases ,Software - Abstract
The grid-based approach is popular for point-in-polygon tests. However, there is a trade-off between the preprocessing and the inclusion test, which always requires the grid resolutions to be tuned. In this article, we address this challenge by enhancing the grid structure using y-axis-aligned stripes, which are formed by the y-axis-aligned lines passing through the endpoints of the edge segments in the cell, thereby managing the edge segments in each grid cell. Moreover, we precompute the inclusion properties of the x-axis-aligned top borders of the stripes during preprocessing. Therefore, to answer a query point with the ray crossing method, we can emit a ray from the point to propagate upwards until the ray arrives at the top border of a stripe. We thoroughly consider singular cases to guarantee each query point can be answered in the stripe that contains the point. In our method, the computational load can be decreased, as one coordinate of the intersection point between the ray and an edge is known in advance, and parallel computing can be well exploited because the branching operations for determining whether an edge intersects with the ray are saved. Experimental results show that the efficiency of our method does not vary much with respect to the grid resolutions, so the trouble of tuning grid resolutions can be avoided. Ultimately, our method with a low grid resolution can reduce the preprocessing time and still achieve a higher inclusion test efficiency than the existing methods with a high grid resolution, especially on GPUs.
- Published
- 2022
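The stripe structure in the abstract above accelerates the classic ray-crossing test; the underlying test itself, shown here in its basic unaccelerated form, counts how many polygon edges an upward vertical ray from the query point crosses (an odd count means inside). This sketch deliberately omits the grid/stripe acceleration and the singular-case handling the paper addresses.

```python
# Classic even-odd point-in-polygon test with an upward vertical ray.
# This is the baseline the grid/stripe structure accelerates, not the
# paper's accelerated method itself.

def point_in_polygon(px, py, polygon):
    """polygon is a list of (x, y) vertices in order (closed implicitly)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the vertical line x = px?
        if (x1 > px) != (x2 > px):
            # y-coordinate where the edge crosses that vertical line
            y_cross = y1 + (px - x1) * (y2 - y1) / (x2 - x1)
            if y_cross > py:      # crossing lies above the query point
                inside = not inside
    return inside
```

Note how each crossing computation already knows one coordinate of the intersection (x = px), which is the property the paper exploits to cut the per-edge computational load.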
44. Analysis and Design of Zero-Voltage-Switching Multiphase AC/DC Converters
- Author
-
Min Chen, Dehong Xu, Keyan Shi, and Deng Jinyi
- Subjects
Switching cycle ,Multi phase ,Computer science ,Photovoltaic system ,Electronic engineering ,Energy Engineering and Power Technology ,Inverter ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Converters ,Zero voltage switching ,Pulse-width modulation - Abstract
In this paper, a generic analysis and design of Zero-Voltage-Switching (ZVS) multiphase AC/DC converters is proposed. With the Edge-Aligned PWM (EA-PWM) scheme, all switches of ZVS multiphase AC/DC converters can achieve ZVS operation. In addition, the switching frequency is fixed and the auxiliary switch operates only once in each switching period. The operation stages within a switching cycle are analyzed and a generic ZVS condition is derived. Based on the proposed theory, a ZVS two-stage three-phase photovoltaic (PV) inverter, regarded as a 5-phase converter, is investigated, and its ZVS condition under different working conditions is discussed. Finally, a 10 kW prototype of the ZVS two-stage three-phase PV inverter with a 150 kHz switching frequency is built, and experimental results are presented to verify the analysis. The extension of the ZVS multiphase converter to other applications is also introduced.
- Published
- 2022
45. Partial Synchronization to Accelerate Federated Learning Over Relay-Assisted Edge Networks
- Author
-
Albert Y. Zomaya, Zhihao Qu, Bin Tang, Yi Wang, Song Guo, Haozhao Wang, and Baoliu Ye
- Subjects
Scheme (programming language) ,Computer Networks and Communications ,Computer science ,Distributed computing ,Process (computing) ,law.invention ,Rate of convergence ,Relay ,law ,Synchronization (computer science) ,Convergence (routing) ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,computer ,Mobile device ,Software ,computer.programming_language - Abstract
Federated Learning (FL) is a promising machine learning paradigm to cooperatively train a global model with highly distributed data located on mobile devices. Aiming to optimize the communication efficiency of gradient aggregation and model synchronization among large-scale devices, we propose a relay-assisted FL framework. By breaking the traditional transmission-order constraint and exploiting the broadcast characteristic of relay nodes, we design a novel synchronization scheme named Partial Synchronization Parallel (PSP), in which models and gradients are transmitted simultaneously and aggregated at relay nodes, resulting in traffic reduction. We prove via rigorous analysis that PSP has the same convergence rate as sequential synchronization approaches. To further accelerate the training process, we integrate PSP with any unbiased and error-bounded compression technology and prove that the convergence properties of the resulting scheme still hold. Extensive experiments conducted in a distributed cluster environment with real-world datasets demonstrate that our proposed approach reduces the training time by up to 37% compared to state-of-the-art methods.
- Published
- 2022
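The traffic-reduction idea behind PSP, as described above, can be sketched at its simplest: instead of forwarding every device's gradient upstream, a relay node sums the gradients of its children element-wise and sends a single vector, and the server averages the relay sums over all devices. The flat list representation and function names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of in-network gradient aggregation at relay nodes.
# Gradients are plain Python lists standing in for model-sized tensors.

def aggregate_at_relay(child_gradients):
    """Element-wise sum of the gradients arriving at one relay node."""
    total = list(child_gradients[0])
    for grad in child_gradients[1:]:
        for i, g in enumerate(grad):
            total[i] += g
    return total

def federated_average(relay_sums, num_devices):
    """Server-side step: average the relay-aggregated sums over all devices."""
    total = aggregate_at_relay(relay_sums)
    return [t / num_devices for t in total]
```

Because summation is associative, aggregating at the relays yields exactly the same average as collecting every gradient at the server, which is why the paper can prove an unchanged convergence rate while cutting upstream traffic.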
46. Cost-Effective User Allocation in 5G NOMA-Based Mobile Edge Computing Systems
- Author
-
Mohamed Abdelrazek, Yun Yang, Qiang He, Feifei Chen, John Hosking, John Grundy, Phu Lai, and Guangming Cui
- Subjects
Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,020206 networking & telecommunications ,02 engineering and technology ,Transmitter power output ,medicine.disease ,Noma ,Base station ,Server ,Telecommunications link ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Software ,5G ,Computer network - Abstract
Mobile edge computing (MEC) allows edge servers to be placed at cellular base stations. App vendors like Uber and YouTube can rent computing resources and deploy latency-sensitive applications on edge servers for their users to access. Non-orthogonal multiple access (NOMA) is an emerging technique that facilitates the massive connectivity of 5G networks, further enhancing the capability of MEC. The edge user allocation (EUA) problem faces new challenges in 5G NOMA-based MEC systems. In this study, we investigate the EUA problem in a multi-cell multi-channel downlink power-domain NOMA-based MEC system. The main objective is to help mobile app vendors maximize their benefit by allocating maximum users to edge servers in a specific area at the lowest computing resource and transmit power costs. To this end, we introduce a decentralized game-theoretic approach to effectively select a channel and edge server for each user while fulfilling their resource and data rate requirements. We theoretically and experimentally evaluate our solution, which significantly outperforms various state-of-the-art and baseline approaches.
- Published
- 2022
47. Flow-Edge Guided Unsupervised Video Object Segmentation
- Author
-
Fumin Shen, Heng Tao Shen, Xiaofeng Zhu, Yifeng Zhou, and Xing Xu
- Subjects
business.industry ,Computer science ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,Object (computer science) ,Hallucinating ,Feature (computer vision) ,Media Technology ,Segmentation ,Computer vision ,Enhanced Data Rates for GSM Evolution ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Encoder - Abstract
Recently, deep learning techniques have achieved significant improvements in unsupervised video object segmentation (UVOS). However, many existing approaches cannot accurately distinguish the foreground objects from the background because they rely on coarse temporal features (e.g., optical flow and multi-frame attention). In this paper, we present a novel model, the Flow Edge-based Motion-Attentive Network (FEM-Net), to address the unsupervised video object segmentation problem. First, a motion-attentive encoder jointly learns the spatial and temporal features. Then, a Flow Edge Connect (FEC) module is designed to hallucinate edges of ambiguous or missing regions in the optical flow. During the segmentation stage, the complementary temporal feature, composed of the motion-attentive feature and the flow edge, is fed into a decoder to infer the salient foreground objects. Experimental results on two challenging public benchmarks (i.e., DAVIS-16 and FBMS) demonstrate that the proposed FEM-Net compares favorably against state-of-the-art methods.
- Published
- 2022
48. Edge Intelligent Joint Optimization for Lifetime and Latency in Large-Scale Cyber–Physical Systems
- Author
-
Jian Weng, Kun Cao, Yangguang Cui, Wuzheng Tan, and Zhiquan Liu
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Distributed computing ,Reliability (computer networking) ,Cyber-physical system ,Evolutionary algorithm ,Energy consumption ,Replication (computing) ,Computer Science Applications ,Hardware and Architecture ,Signal Processing ,Computation offloading ,Local search (optimization) ,Enhanced Data Rates for GSM Evolution ,business ,Information Systems - Abstract
In recent years, the exploration of large-scale cyber-physical systems (CPSs) has become a fertile research field of significant impact. Large-scale CPS applications cover not only manufacturing and production areas but also daily-living domains. Traditional solutions for large-scale CPSs mainly concentrate on optimizing service latency or reliability but neglect the resulting negative impact on system lifetime. In this paper, we conduct the first study on jointly optimizing service latency and system lifetime subject to the constraints of reliability, energy consumption, and schedulability for large-scale CPSs. We propose an edge-intelligent solution composed of offline and online phases. In the offline phase, the long short-term memory (LSTM) technique is leveraged to predict task offloading rates at individual user groups. Afterwards, a multi-objective evolutionary algorithm with dual local search (DLS-MOEA) is exploited to determine optimal static system settings for the computation offloading mapping and the task replication number. In the online phase, an affinity-driven scheme incurring minimal dynamic system overheads is designed to deal with the inherent mobility of terminal users. We also build an algorithm validation platform upon which extensive simulation experiments are carried out. Experimental results show that our offline and online schemes outperform the state-of-the-art benchmarking methods by 27.1% and 43.5%, respectively.
- Published
- 2022
49. Deep Learning in Security of Internet of Things
- Author
-
Yuxi Li, Zhihan Lv, Houbing Song, and Yue Zuo
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Intrusion detection system ,Enterprise information security architecture ,Modernization theory ,Computer security ,computer.software_genre ,Encryption ,Computer Science Applications ,Smart grid ,Hardware and Architecture ,Home automation ,Signal Processing ,The Internet ,Enhanced Data Rates for GSM Evolution ,business ,computer ,Information Systems - Abstract
Internet of Things (IoT) technology is increasingly prominent at the current stage of social development. Industries of all kinds have begun to adopt IoT integration technology in an effort to promote industrial modernization, intelligence, and digitalization. In this context, how to link high-risk network activities to entities has become a primary issue for promoting industrial development. However, at this stage, the security issues arising in the development of IoT technology present contradictions that are difficult to resolve. Given this situation, making system defense intelligent enough to replace manual monitoring has become the future direction of security-architecture development. This paper builds on existing security research to explore the possibility of using deep learning (DL) to upgrade the IoT security architecture, discussing how the IoT can identify and respond to cyber attacks and how edge data transmission can be encrypted. Moreover, this paper discusses security research in application fields such as the Industrial Internet of Things, the Internet of Vehicles, smart grids, smart homes, and smart healthcare. We then summarize areas that can be improved in future technological development, including sharing computing power through an edge NPU central device and closely combining environmental simulation models with the actual environment, as well as malicious code detection, intrusion detection, production safety, vulnerability detection, fault diagnosis, and blockchain technology.
- Published
- 2022
50. Service Versus Protection: A Bayesian Learning Approach for Trust Provisioning in Edge of Things Environment
- Author
-
Avinash Kaur, Parminder Singh, Ranbir Singh Batth, Mehedi Masud, and Gagangeet Singh Aujla
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Quality of service ,Provisioning ,Bayesian inference ,Computer Science Applications ,Hardware and Architecture ,Smart city ,Signal Processing ,Trust management (information system) ,Enhanced Data Rates for GSM Evolution ,business ,Mobile device ,Wearable technology ,Information Systems ,Computer network - Abstract
Edge of Things (EoT) technology enables end-users to participate, through smart sensors and mobile devices (such as smartphones and wearables), in the smart devices deployed across a smart city. Trust management is the main challenge in EoT infrastructure for identifying trusted participants, since Quality of Service (QoS) is highly affected by malicious users supplying fake or altered data. In this paper, a Robust Trust Management (RTM) scheme is designed based on Bayesian learning and collaborative filtering. The proposed RTM model is updated regularly at a specific interval, applying a decay value to the currently calculated scores so that behavior changes are reflected quickly. The dynamic characteristics of edge nodes are analyzed with a new probability-score mechanism based on recent service behavior. The performance of the proposed trust management scheme is evaluated in a simulated environment, with the percentage of collaborating devices tuned to 10%, 50%, and 100%. The proposed RTM scheme achieves a maximum accuracy of 99.8%. The experimental results demonstrate that the RTM scheme outperforms existing techniques in both filtering malicious behavior and accuracy.
- Published
- 2022
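The Bayesian-learning-with-decay mechanism described in the abstract above is commonly realized with a Beta posterior over good versus bad service outcomes, whose counts are decayed at each interval so that recent behavior dominates. The sketch below follows that standard construction; the class name, uniform prior, and decay value of 0.9 are illustrative assumptions, not the paper's exact parameters.

```python
# Minimal sketch of a Beta-posterior trust score with periodic decay,
# in the spirit of the RTM scheme's Bayesian update.

class TrustScore:
    def __init__(self, decay=0.9):
        self.alpha = 1.0  # pseudo-count of good interactions (uniform prior)
        self.beta = 1.0   # pseudo-count of bad interactions
        self.decay = decay

    def observe(self, good):
        """Record one service outcome for this edge node."""
        if good:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def apply_decay(self):
        """Called at each interval so older evidence fades."""
        self.alpha *= self.decay
        self.beta *= self.decay

    @property
    def score(self):
        """Posterior mean of the Beta distribution: P(good service)."""
        return self.alpha / (self.alpha + self.beta)
```

Decay leaves the current score unchanged (it scales both counts equally) but shrinks the evidence weight, so the very next observation moves the score more than it would without decay, which is exactly the "update behavior changes quickly" property the scheme targets.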