25 results for '"Xiaoqiang Ma"'
Search Results
2. Keep Your Data Locally: Federated-Learning-Based Data Privacy Preservation in Edge Computing
- Author
-
Chen Wang, Yang Yang, Gaoyang Liu, and Xiaoqiang Ma
- Subjects
Information privacy, Edge device, Computer Networks and Communications, Computer science, Networking & telecommunications, Cloud computing, Data modeling, Information sensitivity, Upload, Hardware and Architecture, Server, Software, Edge computing, Information Systems, Computer network
- Abstract
Recently, edge computing has attracted significant interest due to its ability to extend cloud computing utilities and services to the network edge with low response times and communication costs. In general, edge computing requires mobile users to upload their raw data to a centralized data server for further processing. However, these data usually contain sensitive information that users do not want to reveal, such as sexual orientation, political stance, health status, and service access history. Transmitting user data increases the risk of privacy leakage, since many additional devices can access the data in transit. In this article, we attempt to keep the data of edge devices and end users in local storage to prevent the leakage of user privacy. To this end, we integrate federated learning and edge computing to propose P2FEC, a privacy-preserving framework that can construct a unified deep learning model across multiple users or devices without uploading their data to a centralized server. Furthermore, we use membership inference attacks as a case study for the privacy analysis of edge computing. The experiments show that the model constructed by our framework achieves similar prediction performance and stricter protection of data privacy, compared to the model trained by standard edge computing.
- Published
- 2021
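The federated averaging idea behind frameworks like P2FEC can be illustrated with a minimal sketch: clients train locally and only model weights, never raw data, reach the server. This is a toy illustration, not the paper's implementation; the logistic-regression model, learning rate, and size-weighted aggregation are all illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training (logistic regression stands in for the
    deep model); raw data never leaves the device -- only weights do."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))        # sigmoid
        grad = data.T @ (preds - labels) / len(labels)  # logistic-loss gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server-side aggregation: average the returned client weights,
    weighted by each client's local dataset size (FedAvg-style)."""
    sizes = [len(labels) for _, labels in clients]
    updates = [local_update(global_w, X, y) for X, y in clients]
    total = sum(sizes)
    return sum(n / total * w for n, w in zip(sizes, updates))
```

Repeating `federated_round` for several rounds converges toward a model comparable to centralized training, while each client's data stays local.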
3. Enhancing Performance and Energy Efficiency for Hybrid Workloads in Virtualized Cloud Environment
- Author
-
Haiyang Wang, Jiangchuan Liu, Xiaoqiang Ma, Ryan Shea, and Chi Xu
- Subjects
Distributed computing, Computer Networks and Communications, Computer science, Hardware virtualization, Networking & telecommunications, Hypervisor, Cloud computing, Energy consumption, Virtualization, Computer Science Applications, Hardware and Architecture, Virtual machine, Network service, Operating system, Software, Information Systems, Efficient energy use
- Abstract
Virtualization has attained mainstream status in the enterprise IT industry. Despite its widespread adoption, it is known that virtualization introduces non-trivial overhead when tasks are executed on a virtual machine (VM). In particular, the combined effect of device virtualization overhead and CPU scheduling latency can cause performance degradation when computation-intensive tasks and I/O-intensive tasks are co-located on a VM. Such interference also causes extra energy consumption. In this paper, we present Hylics, a novel solution that enables efficient data traversal paths for both I/O- and computation-intensive workloads. This is achieved by provisioning an in-memory file system and network service at the hypervisor level. Several important design issues are pinpointed and addressed during our prototype implementation, including efficient intermediate data sharing, network service offloading, and QoS-aware memory usage management. Based on our real-world deployment on KVM, we show that Hylics can significantly improve computation and I/O performance for hybrid workloads. Moreover, this design also alleviates the existing virtualization overhead and naturally optimizes the overall energy efficiency.
- Published
- 2021
4. Car4Pac: Last Mile Parcel Delivery Through Intelligent Car Trip Sharing
- Author
-
Feng Wang, Xiaoyi Fan, Fangxin Wang, Xiaoqiang Ma, Jiangchuan Liu, and Yifei Zhu
- Subjects
Logistics & transportation, Leverage (finance), Landmark, Computer science, Mechanical Engineering, Parcel delivery, Computer Science Applications, Transport engineering, Automotive Engineering, Traffic conditions, Task analysis, Fuel efficiency, TRIPS architecture, Last mile
- Abstract
The explosion of online shopping brings great challenges to the traditional logistics industry, where massive parcel volumes and tight delivery deadlines impose a large cost on the delivery process, in particular the last mile parcel delivery. On the other hand, modern cities never lack transportation resources such as private car trips. Motivated by these observations, we propose a novel and effective last mile parcel delivery mechanism through car trip sharing, which leverages available private car trips to incidentally deliver parcels along their original routes. The major challenges lie in how to accurately estimate the cost of a parcel delivery trip and how to assign proper tasks to suitable car trips to maximize the overall performance. To this end, we develop Car4Pac, an intelligent last mile parcel delivery system that addresses these challenges. Leveraging massive real-world car trip trajectories, we first build a 3D (time-dependent, driver-dependent, and vehicle-dependent) landmark graph that accurately predicts the travel time and fuel consumption of each road segment. Our prediction method considers not only traffic conditions at different times, but also the driving skills of different people and the fuel efficiencies of different vehicles. We then develop a two-stage solution for parcel delivery task assignment, which is optimal for one-to-one assignment and yields high-quality results for many-to-one assignment. Our extensive real-world trace-driven evaluations further demonstrate the superiority of Car4Pac.
- Published
- 2020
5. TIMCC: On Data Freshness in Privacy-Preserving Incentive Mechanism Design for Continuous Crowdsensing Using Reverse Auction
- Author
-
Xiaoqiang Ma, Weiwei Deng, Feng Wang, Menglan Hu, Fei Chen, and Mohammad Mehedi Hassan
- Subjects
Mechanism design, General Computer Science, Computer science, SIGNAL (programming language), General Engineering, Crowdsensing, Privacy-preserving, Rationality, Age of data, Energy consumption, Computer security, Reverse auction, Incentive, Key (cryptography), Incentive mechanism, General Materials Science, Metric (unit)
- Abstract
As an emerging paradigm that leverages the wisdom and efforts of the crowd, mobile crowdsensing has shown great potential for collecting distributed data. Participants may incur costs and risks such as energy consumption, memory consumption, and privacy leakage when performing tasks, so they may be unwilling to join crowdsensing tasks unless they are well paid. Hence, a proper privacy-preserving incentive mechanism is of great significance to motivate users to participate, and this has attracted considerable research effort. Most existing works regard tasks as one-shot tasks, which may not work well for tasks that require continuous monitoring, e.g., Wi-Fi signal sensing, where the signal may vary over time and users are required to contribute continuous effort. The incentive mechanism for such continuous crowdsensing, where the freshness of the sensed data is very important, has yet to be investigated. In this paper, we design TIMCC, a privacy-preserving incentive mechanism for continuous crowdsensing. In contrast to most existing studies that treat tasks as one-shot, we consider tasks that require continuous effort from users, where the freshness of data is a key factor in its value, which in turn determines the rewards. We introduce a metric named age of data, defined as the amount of time elapsed since the generation of the data, to capture data freshness. We adopt the reverse auction framework to model the interaction between the platform and the users. We prove that the proposed mechanism satisfies individual rationality, computational efficiency, and truthfulness. Simulation results further validate our theoretical analysis and the effectiveness of the proposed mechanism.
- Published
- 2020
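The two core ingredients the abstract describes, an age-of-data value function and reverse-auction winner selection, can be sketched roughly as follows. This is a toy illustration, not TIMCC itself: the exponential decay and the greedy budgeted selection are assumptions, and a truthful mechanism would additionally pay each winner a critical price rather than their bid.

```python
import math

def age_of_data(generation_time, now):
    """Age of data: time elapsed since the sample was generated."""
    return max(0.0, now - generation_time)

def data_value(base_value, age, decay=0.1):
    """Fresher data is worth more: value decays exponentially with age."""
    return base_value * math.exp(-decay * age)

def reverse_auction(bids, budget):
    """Greedy winner selection: the platform buys from the cheapest
    bidders until the budget is exhausted. `bids` is a list of
    (user, price) pairs; returns the winners and the total spent."""
    winners, spent = [], 0.0
    for user, price in sorted(bids, key=lambda b: b[1]):
        if spent + price > budget:
            break
        winners.append(user)
        spent += price
    return winners, spent
```

In a continuous-sensing setting, `data_value` would be recomputed per round so that stale contributions earn smaller rewards.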
6. Adaptive Wireless Video Streaming Based on Edge Computing: Opportunities and Approaches
- Author
-
Yanrong Peng, Jiangchuan Liu, Fei Chen, Hongbo Jiang, Wenting Ding, Desheng Wang, and Xiaoqiang Ma
- Subjects
Information Systems and Management, Dynamic network analysis, Computer Networks and Communications, Computer science, Core network, Networking & telecommunications, Cloud computing, Transcoding, Computer Science Applications, Dynamic Adaptive Streaming over HTTP, Hardware and Architecture, Server, Artificial intelligence & image processing, Enhanced Data Rates for GSM Evolution, Edge computing, Computer network
- Abstract
Dynamic Adaptive Streaming over HTTP (DASH) has been widely adopted to deal with user diversity such as network conditions and device capabilities. In DASH systems, computation-intensive transcoding is the key technology enabling video rate adaptation, and the cloud has become a preferred platform for massive video transcoding. Yet the cloud-based solution has two drawbacks. First, a video stream has multiple versions after transcoding, which increases the traffic traversing the core network. Second, the transcoding strategy is normally fixed and thus cannot flexibly adapt to dynamic changes among viewers. Considering that mobile users, who routinely experience dynamic network conditions, now account for a very large portion of all users, adaptive wireless transcoding is of great importance. To this end, we propose an adaptive wireless video transcoding framework based on the emerging edge computing paradigm, deploying edge transcoding servers close to base stations. With this design, the core network only needs to send the source video stream to the edge transcoding server rather than one stream per viewer, so the traffic across the core network is significantly reduced. Meanwhile, our edge transcoding server cooperates with the base station to transcode videos at a finer granularity according to the users' observed channel conditions, smartly adjusting the transcoding strategy to cope with time-varying wireless channels. To improve bandwidth utilization, we also develop efficient bandwidth adjustment algorithms that adaptively allocate spectrum resources to individual mobile users. We validate the effectiveness of our proposed edge-computing-based framework through extensive simulations, which confirm its superiority.
- Published
- 2019
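The finer-granularity transcoding decision the abstract sketches, matching each mobile user's delivered rate to their current channel condition at the edge, might look roughly like this. It is an illustrative sketch only; the bitrate ladder and the capacity-to-rate mapping are assumptions, not the paper's algorithm.

```python
def select_bitrate(channel_capacity_kbps, ladder=(400, 800, 1600, 3200, 6400)):
    """Pick the highest bitrate-ladder rung that fits the user's current
    channel capacity; fall back to the lowest rung otherwise."""
    feasible = [rate for rate in ladder if rate <= channel_capacity_kbps]
    return max(feasible) if feasible else min(ladder)

def edge_transcode_plan(user_capacities):
    """One source stream enters the edge server; each user then receives
    a transcoded rate matched to their own channel condition."""
    return {user: select_bitrate(cap) for user, cap in user_capacities.items()}
```

The key property is that only the single source stream crosses the core network; per-user versions are produced at the edge.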
7. Self-Deployable Indoor Localization With Acoustic-Enabled IoT Devices Exploiting Participatory Sensing
- Author
-
Doudou Cao, Xiaoqiang Ma, Menglan Hu, Jiangchuan Liu, Chao Cai, and Qingxia Li
- Subjects
Participatory sensing, Computer Networks and Communications, Computer science, Distributed computing, Networking & telecommunications, Ranging, Computer Science Applications, Beacon, Transmission (telecommunications), Hardware and Architecture, Server, Signal Processing, Internet of Things, Information Systems
- Abstract
Indoor localization has witnessed rapid development in the past few decades. Numerous solutions have been put forward in the literature, and localization accuracy has reached an unprecedented centimeter level. Among the available approaches, acoustic-enabled solutions have attracted much attention, as they customarily achieve decimeter-level localization accuracy with affordable infrastructure costs. However, several open issues still prohibit their wide-scale adoption. First, although the extra infrastructure (i.e., beacons) is economical, its deployment and maintenance can incur excessive labor costs. Second, current approaches incur considerable latency to obtain a location fix, making them infeasible for mobile target tracking. Third, the localization performance of current solutions is easily degraded by the near-far problem, the multipath effect, and device diversity. To address these issues, this paper presents an asynchronous acoustic-based localization system with participatory sensing. We leverage the collaborative efforts of participatory users who are relatively stationary in indoor environments as virtual anchors (VAs), eliminating the pre-deployment and post-maintenance costs incurred by traditional anchor-based solutions. To reduce the latency of obtaining a location fix, we design an orthogonal ranging mechanism that enables concurrent beacon message transmission, which is 2× faster than previous work. Moreover, we propose a robust method to address the near-far problem and device diversity, and we conquer the multipath problem via a genetic-algorithm-based approach. Our VA-based system is self-deployable, cost-effective, and robust to environmental dynamics. We have implemented and evaluated a system prototype, demonstrating a median accuracy of 0.98 m in typical indoor settings.
- Published
- 2019
8. Joint Routing and Scheduling for Vehicle-Assisted Multidrone Surveillance
- Author
-
Menglan Hu, Xiaoqiang Ma, Jiangchuan Liu, Bo Li, Kai Peng, Weidong Liu, and Wenqing Cheng
- Subjects
Job shop scheduling, Computer Networks and Communications, Computer science, Distributed computing, Networking & telecommunications, Computer Science Applications, Scheduling (computing), Hardware and Architecture, Signal Processing, Artificial intelligence & image processing, Motion planning, Information Systems
- Abstract
In recent decades, unmanned aerial vehicles (UAVs, also known as drones) equipped with multiple sensors have been widely utilized in various applications. Nevertheless, constrained by limited battery capacity, the hovering time of UAVs is quite limited, preventing them from serving a wide area. To cater to remote sensing applications, vehicles are therefore often employed to transport, launch, and recover the drones. This so-called vehicle-drone cooperation (VDC) benefits from both the long driving range of vehicles and the high mobility of UAVs. Efficient routing and scheduling can greatly reduce the time and financial expense incurred in VDC. However, previous works on vehicle-drone cooperative sensing considered only one drone, and thus cannot simultaneously cover multiple targets distributed in an area. Using multiple drones to sense different targets in parallel can significantly improve efficiency and expand service areas. Therefore, we propose a novel problem, referred to as the vehicle-assisted multidrone routing and scheduling problem. To tackle it, we contribute an efficient algorithm, referred to as the vehicle-assisted multi-UAV routing and scheduling algorithm (VURA). VURA maintains and iteratively updates a memory of candidate UAV routes, and works by iteratively deriving solutions based on routes picked from the memory. In every iteration, VURA jointly optimizes anchor point selection, path planning, and tour assignment via nested optimization operations. To the best of our knowledge, we are the first to tackle this novel yet challenging problem. Finally, a performance evaluation demonstrates the effectiveness and efficiency of our algorithm compared with existing solutions.
- Published
- 2019
9. Accurate Ranging on Acoustic-Enabled IoT Devices
- Author
-
Kai Peng, Jiangchuan Liu, Menglan Hu, Xiaoqiang Ma, and Chao Cai
- Subjects
Computer Networks and Communications, Computer science, Orthogonal frequency-division multiplexing, Networking & telecommunications, Ranging, Signal, Synchronization, Computer Science Applications, Hardware and Architecture, Signal Processing, Electronic engineering, Key (cryptography), Frequency modulation, Human factors, Information Systems
- Abstract
The enabling Internet-of-Things technology has inspired many innovative sensing mechanisms that repurpose onboard sensors. Leveraging the built-in acoustic sensors for ranging is one of the more interesting applications. However, among the few studies on acoustic ranging, the one-way sensing method suffers from synchronization errors and requires cumbersome kernel modifications; the two-way approaches overcome these shortcomings but are sensitive to system delays. This paper therefore proposes a novel lightweight one-way sensing paradigm without the above drawbacks. The key insight is to perform ranging by estimating the propagation time of acoustic signals via linear frequency modulation (LFM) signal mixing. Such a mixing operation translates range estimation into fine-grained frequency estimation, thereby enhancing ranging accuracy. In addition, our system allows multiple receivers to coexist, boosting the measurement dimensions. We have implemented and evaluated our system prototype in real-world settings, and it demonstrated centimeter-level ranging performance.
- Published
- 2019
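The central trick, translating propagation delay into a beat frequency by mixing a received chirp with the reference, can be reproduced in a short simulation. This is a numerical sketch under idealized assumptions: no noise, no multipath, and all signal parameters (carrier, bandwidth, sample rate) are invented for illustration rather than taken from the paper.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def chirp(t, f0, k):
    """Linear frequency-modulated (LFM) signal whose instantaneous
    frequency sweeps from f0 at slope k Hz/s."""
    return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def estimate_range(true_dist, fs=48000, duration=0.1, f0=18000.0, bw=1000.0):
    """Mix a delayed echo with the reference chirp: the product contains a
    beat tone at f_beat = k * tau, where tau = dist / C, so the distance
    can be read off a coarse FFT of the mixed signal."""
    k = bw / duration                        # chirp slope, Hz/s
    t = np.arange(int(fs * duration)) / fs
    tau = true_dist / C                      # propagation delay
    mixed = chirp(t, f0, k) * chirp(t - tau, f0, k)
    spectrum = np.abs(np.fft.rfft(mixed * np.hanning(len(mixed))))
    freqs = np.fft.rfftfreq(len(mixed), 1.0 / fs)
    band = (freqs >= 20) & (freqs < 1000)    # beat tone lives well below f0
    f_beat = freqs[band][np.argmax(spectrum[band])]
    return C * f_beat / k                    # distance estimate in metres
```

Accuracy here is limited by the FFT bin spacing (10 Hz for a 0.1 s window); finer-grained frequency estimators tighten this, which is exactly why the mixing formulation helps.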
10. Demystifying the Crowd Intelligence in Last Mile Parcel Delivery for Smart Cities
- Author
-
Jiangchuan Liu, Feng Wang, Fangxin Wang, and Xiaoqiang Ma
- Subjects
Computer Networks and Communications, Computer science, Networking & telecommunications, Parcel delivery, Scheduling (computing), Hardware and Architecture, The Internet, Delivery system, Last mile, Telecommunications, Internet of Things, Software, Information Systems
- Abstract
Recent years have witnessed an explosive growth of online shopping, which has placed unprecedented pressure on the logistics industry, especially the last mile parcel delivery. Existing solutions mostly rely on dedicated couriers, which suffer from high cost and low elasticity when dealing with a massive number of local addresses. Advances in the Internet of Things, however, have made vehicle information readily accessible anytime, anywhere, forming an Internet of Vehicles (IoV) that enables intelligent vehicle scheduling and management. New opportunities therefore arise toward efficient and elastic last mile delivery for smart cities. In this article, we seek novel solutions to improve last mile parcel delivery with crowd intelligence. We first review the existing and emerging solutions for last mile parcel delivery. We then discuss advances in ride-sharing-based delivery mechanisms, identifying the unique opportunities and challenges therein. We further present Car4Pac, an IoV-enabled intelligent ride-sharing-based delivery system for smart cities, and demonstrate its superiority with real trace-driven evaluations.
- Published
- 2019
11. Indoor Navigation With Virtual Graph Representation: Exploiting Peak Intensities of Unmodulated Luminaries
- Author
-
Guoyin Jiang, Xiaoqiang Ma, Fu Xiao, Jiangchuan Liu, Hongbo Jiang, Yufu Jia, and Wenping Liu
- Subjects
Computer Networks and Communications, Computer science, Real-time computing, Navigation system, Networking & telecommunications, Computer Science Applications, Software deployment, Server, Graph (abstract data type), Electrical and Electronic Engineering, Android (operating system), Software, Reference frame
- Abstract
Ubiquitous luminaries provide a new dimension for indoor navigation, as they are often well structured and visible light is reliable owing to its multipath-free nature. However, existing visible-light-based technologies, which are generally frequency-based, require modulation of the light sources, modification of the device, or mounting of extra devices. The combination of costly floor maps and localization systems constrained to customized hardware for capturing the flashing frequencies undoubtedly hinders the large-scale deployment of indoor navigation systems in today's smart cities. In this paper, we provide a new perspective on indoor navigation built on a virtual graph representation. The main idea of our proposed navigation system, named PILOT, stems from exploiting the peak intensities of ubiquitous unmodulated luminaries. In PILOT, pedestrian paths with enriched sensory data are organically integrated to derive a meaningful graph, where each vertex corresponds to a light source and pairwise adjacent vertices (or light sources) form an edge with a computed length and direction. The graph then serves as a global reference frame for indoor navigation while avoiding pre-deployed floor maps, localization systems, or additional hardware. We have implemented a prototype of PILOT on the Android platform, and extensive experiments in typical indoor environments demonstrate its effectiveness and efficiency.
- Published
- 2019
12. A Survey on Deep Learning Empowered IoT Applications
- Author
-
Tai Yao, Fangxin Wang, Xiaoqiang Ma, Menglan Hu, Yan Dong, Jiangchuan Liu, and Wei Liu
- Subjects
General Computer Science, Smart home, Computer science, Internet of Things, Smart healthcare, Home automation, Health care, General Materials Science, Electrical and Electronic Engineering, Smart transportation, Deep learning, General Engineering, Networking & telecommunications, Robotics, Data science, Artificial intelligence & image processing, The Internet, Artificial intelligence, Intelligent control, Mobile device
- Abstract
The Internet of Things (IoT) is widely regarded as a key component of the Internet of the future and has thereby drawn significant interest in recent years. IoT consists of billions of intelligent and communicating "things", which further extend the borders of the world with physical and virtual entities. Such ubiquitous smart things produce massive data every day, posing urgent demands for quick data analysis on various smart mobile devices. Fortunately, recent breakthroughs in deep learning have enabled us to address the problem in an elegant way. Deep models can be deployed to process massive sensor data and learn underlying features quickly and efficiently for various IoT applications on smart devices. In this article, we survey the literature on applying deep learning to IoT applications. We aim to give insights into how deep learning tools can be applied from diverse perspectives to empower IoT applications in four representative domains: smart healthcare, smart home, smart transportation, and smart industry. A main thrust is to seamlessly merge the two disciplines of deep learning and IoT, resulting in a wide range of new designs in IoT applications, such as health monitoring, disease analysis, indoor localization, intelligent control, home robotics, traffic prediction, traffic monitoring, autonomous driving, and manufacturing inspection. We also discuss a set of issues, challenges, and future research directions in leveraging deep learning to empower IoT applications, which may motivate and inspire further developments in this promising field.
- Published
- 2019
13. Periodic Charging for Wireless Sensor Networks With Multiple Portable Chargers
- Author
-
Jiangchuan Liu, Pan Zhou, Ziyi Chen, Menglan Hu, Xiaoqiang Ma, and Kai Peng
- Subjects
Scheme (programming language), General Computer Science, Computer science, Node (networking), Real-time computing, General Engineering, Wireless sensor networks, Power replenishment, Scheduling (computing), Power (physics), Task (computing), Software deployment, General Materials Science, Motion planning, Path planning, Wireless sensor network
- Abstract
Finite battery capacity limits the network lifetime of wireless sensor networks, and thus severely impedes the deployment of large-scale sensor networks. To prolong the lifetime, researchers utilize mobile chargers to recharge sensors from external power sources. In this paper, we study both periodic charging time scheduling and charging path planning with multiple chargers. First, we present an efficient slot-based periodic charging time scheduling algorithm with both a fine-grained node classification scheme, to prevent unnecessary visits to energy-sufficient nodes, and a balanced charging task assignment scheme, to avoid charging starvation. To further enhance charging efficiency, we also propose a charging path planning algorithm that enables parallel power replenishment with multiple chargers. Simulation results show that our algorithms are effective and competitive compared with existing algorithms.
- Published
- 2019
14. CP-Link: Exploiting Continuous Spatio-Temporal Check-in Patterns for User Identity Linkage
- Author
-
Xiaoqiang Ma, Fengxiang Ding, Kai Peng, Yang Yang, and Chen Wang
- Subjects
Computer Networks and Communications, Electrical and Electronic Engineering, Software
- Published
- 2022
15. Enabling Relay-Assisted D2D Communication for Cellular Networks: Algorithm and Protocols
- Author
-
Hongbo Jiang, Xiaoqiang Ma, Tingwei Liu, and John C. S. Lui
- Subjects
Computer Networks and Communications, Computer science, Communication & media studies, Networking & telecommunications, Throughput, Computer Science Applications, Spread spectrum, Hardware and Architecture, Relay, Signal Processing, Key (cryptography), Cellular network, Resource allocation, Resource management, Communications protocol, Information Systems, Computer network
- Abstract
Recently, there has been a growing emphasis on device-to-device (D2D) communication, a key component of the Internet-of-Things ecosystem. D2D communication can operate on the licensed spectrum of cellular networks so as to improve spectrum utilization. In this paper, we focus on the resource allocation problem for general multihop D2D communication and introduce user mobility into D2D communication underlaying cellular networks. Maximizing the total end-to-end data rate involves complex tasks such as resource allocation and routing. By leveraging the square tessellation technique, we propose an efficient square-division-based resource allocation scheme. Furthermore, we design a relay-assisted D2D communication protocol that addresses the challenges in enabling multihop D2D communications, namely spectrum resource allocation, user mobility, and relay incentives. Through extensive simulations, we show that our relay-assisted D2D communication protocol improves the system throughput by up to 55% and the user access rate by up to four times in typical scenarios, compared with state-of-the-art schemes.
- Published
- 2018
16. Toward Cloud-Based Distributed Interactive Applications: Measurement, Modeling, and Analysis
- Author
-
Ryan Shea, Haiyang Wang, Xiaoqiang Ma, Feng Wang, Jiangchuan Liu, Tong Li, and Ke Xu
- Subjects
Computer Networks and Communications, Computer science, Broadband networks, Networking & telecommunications, Cloud computing, Bottleneck, Computer Science Applications, Virtual machine, Server, Cellular network, The Internet, Electrical and Electronic Engineering, Video game, Software, Computer network
- Abstract
With the prevalence of broadband network and wireless mobile network access, distributed interactive applications (DIAs) such as online gaming have attracted a vast number of users over the Internet. The deployment of these systems, however, comes with peculiar hardware/software requirements on the user consoles. Recently, such industrial pioneers as Gaikai, OnLive, and Ciinow have offered a new generation of cloud-based DIAs (CDIAs), which shifts the necessary computing loads to cloud platforms and largely relieves the pressure on individual users' consoles. In this paper, we aim to understand the existing CDIA framework and highlight its design challenges. Our measurement reveals the inside structures as well as the operations of real CDIA systems and identifies the critical role of cloud proxies. While this design makes effective use of cloud resources to mitigate clients' workloads, it may also significantly increase the interaction latency among clients if not carefully handled. Besides the extra network latency caused by cloud proxy involvement, we find that computation-intensive tasks (e.g., game video encoding) and bandwidth-intensive tasks (e.g., streaming the game screens to clients) together create a severe bottleneck in CDIAs. Our experiment indicates that when the cloud proxies are virtual machines (VMs) in the cloud, the computation-intensive and bandwidth-intensive tasks may seriously interfere with each other. We accordingly capture this feature in our model and present an interference-aware solution, which not only smartly allocates workloads but also dynamically assigns capacities across VMs based on their arrival/departure patterns.
- Published
- 2018
17. Smart Home Based on WiFi Sensing: A Survey
- Author
-
Jiangchuan Liu, Chao Cai, Yang Yang, Hongbo Jiang, and Xiaoqiang Ma
- Subjects
IoT, Authentication, General Computer Science, Smart home, Computer science, SIGNAL (programming language), General Engineering, Networking & telecommunications, Intelligent sensor, Gesture recognition, Home automation, Software deployment, Human–computer interaction, Task analysis, Key (cryptography), Artificial intelligence & image processing, General Materials Science, WiFi sensing
- Abstract
Conventional sensing methodologies for smart homes are known to be labor-intensive and complicated for practical deployment, so researchers are resorting to alternative sensing mechanisms. Wi-Fi is one of the key technologies that enable connectivity for smart home services. Apart from its primary use for communication, the Wi-Fi signal has now been widely leveraged for various sensing tasks, such as gesture recognition and fall detection, due to its sensitivity to environmental dynamics. Building a smart home on Wi-Fi sensing is cost-effective and non-invasive, and deployment is convenient. In this paper, we survey recent advances in smart home systems based on Wi-Fi sensing, mainly in areas such as health monitoring, gesture recognition, contextual information acquisition, and authentication.
- Published
- 2018
18. SmartMTra: Robust Indoor Trajectory Tracing Using Smartphones
- Author
-
Hongbo Jiang, Ma Yang, Yanyan Wu, Zhanyong Tang, Xiaojiang Chen, Xiaoqiang Ma, Dingyi Fang, and Pengyan Zhang
- Subjects
Engineering ,Data processing ,business.industry ,media_common.quotation_subject ,010401 analytical chemistry ,Real-time computing ,020206 networking & telecommunications ,02 engineering and technology ,Pedestrian ,Tracing ,01 natural sciences ,Adaptability ,0104 chemical sciences ,Inertial measurement unit ,Phone ,Robustness (computer science) ,Dead reckoning ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,business ,Instrumentation ,media_common - Abstract
Using smartphones for indoor motion trajectory tracing has attracted a lot of attention in recent years, as it offers great potential to support a broad spectrum of applications in indoor environments, including elder care, business analysis, and navigation. Yet most existing approaches only work for certain pedestrian motion modes or smartphone carrying patterns, lacking robustness and adaptability in general scenarios. In this paper, we propose SmartMTra, a comprehensive, robust, and accurate solution for indoor motion trajectory tracing based on the smartphone’s built-in inertial sensors. By analyzing the data from the inertial sensors, we extract a set of features found to be highly related to human physical activities, which helps identify the motion mode and the phone’s carrying pattern through a decomposition model. After that, SmartMTra utilizes the pedestrian dead reckoning technique, which involves estimating step counts, step length, and heading direction, to achieve accurate trajectory tracing. We have conducted extensive experiments to evaluate the performance of SmartMTra in a campus building, and the results demonstrate its robustness in various scenarios, as well as its superiority over state-of-the-art solutions.
- Published
- 2017
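The pedestrian dead reckoning technique mentioned in the abstract above can be illustrated with a minimal sketch: each detected step advances the 2-D position by the estimated step length along the estimated heading. This is a generic illustration of dead reckoning, not SmartMTra's actual implementation; the function names and the `(step_length, heading)` input format are assumptions.

```python
import math

def pdr_update(x, y, step_length, heading_rad):
    """Advance a 2-D position by one detected step along the heading
    (heading measured clockwise from the y/north axis, in radians)."""
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))

def trace(steps, start=(0.0, 0.0)):
    """Accumulate a trajectory from a sequence of (step_length, heading) pairs."""
    path = [start]
    for length, heading in steps:
        path.append(pdr_update(*path[-1], length, heading))
    return path
```

Errors in step-length or heading estimates accumulate over time, which is why the abstract emphasizes first identifying the motion mode and carrying pattern before applying dead reckoning.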
19. Live Broadcast With Community Interactions: Bottlenecks and Optimizations
- Author
-
Zhang Cong, Ryan Shea, Jiangchuan Liu, Xiaoqiang Ma, and Di Fu
- Subjects
Multimedia ,Computer science ,business.industry ,Latency (audio) ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Propagation delay ,Broadcasting ,computer.software_genre ,Computer Science Applications ,Distributed algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Telecommunications ,business ,computer - Abstract
Recent years have witnessed the rapid growth of new live broadcast services, represented by Twitch.tv and YouTube live events, where videos are crowdsourced from amateur users (e.g., game players), rather than from commercial and professional TV broadcasters or content providers. The viewers also actively contribute to the content through embedded open-chat channels. Such community interactions among viewers, or even between broadcasters and viewers, make content generation highly diversified and engaging, particularly for the young generation. In this context, cross-viewer synchronization is highly desirable; otherwise, viewers with shorter broadcast latency may act as spoilers, significantly affecting the user experience of other viewers. In this paper, we show that the end-to-end delay has a dramatically amplified impact on the broadcast latency for individual viewers. We suggest smart rate adaptation to achieve cross-viewer synchronization, and develop distributed algorithms based on dual decomposition. We further extend our solution to the cloud environment, and present the concept of ShadowCast, which moves broadcasters to the cloud to provide high-quality streams beyond the broadcasters’ network bandwidth constraints. Its practicability and effectiveness are demonstrated by our implementation and testbed experiments.
- Published
- 2017
20. FRESH: Push the Limit of D2D Communication Underlaying Cellular Networks
- Author
-
Jiangchuan Liu, Yang Yang, Xiaoqiang Ma, Tingwei Liu, and Hongbo Jiang
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Node (networking) ,Distributed computing ,05 social sciences ,Mobile computing ,050801 communication & media studies ,020206 networking & telecommunications ,Throughput ,02 engineering and technology ,Shared resource ,Base station ,0508 media and communications ,0202 electrical engineering, electronic engineering, information engineering ,Cellular network ,Resource allocation ,Resource management ,Electrical and Electronic Engineering ,Radio resource management ,business ,Software ,Computer network ,Power control - Abstract
Device-to-device (D2D) communication has recently been proposed to mitigate the burden on base stations by leveraging underutilized cellular spectrum resources, where high overall network throughput and D2D access rate are critical for its service performance and availability. In this paper, we study the resource allocation problem to push the limit of D2D communication underlaying cellular networks by allowing multiple D2D links to share resources with multiple cellular links. We propose FRESH, a full resource sharing scheme where each subchannel can be shared by a cellular link and an arbitrary number of D2D links. In particular, FRESH first divides the communication links into so-called full resource sharing sets such that, within each set, all D2D link members are able to reuse the whole allocated resources. Thereafter, it allocates a share of the spectrum resources to each full resource sharing set. Compared with state-of-the-art schemes, FRESH provides fine-grained resource allocation, resulting in throughput improvements of up to one order of magnitude, and D2D access rate improvements of up to 5 times with a moderate node density (e.g., on the order of 1 user per 400 square meters).
- Published
- 2017
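The full resource sharing sets described above can be sketched as a greedy grouping pass: starting from each cellular link, admit every still-unassigned D2D link that conflicts with no current member of the set. This is only an illustrative sketch of the set-construction idea, not the paper's actual algorithm; the `conflicts` interference predicate and the greedy admission order are assumptions.

```python
def build_sharing_sets(cellular_links, d2d_links, conflicts):
    """Greedily group links into sharing sets: each set holds one cellular
    link plus every still-unassigned D2D link that conflicts with no current
    member. conflicts(a, b) is True when links a and b interfere too
    strongly to share the same subchannel."""
    unassigned = list(d2d_links)
    sets = []
    for c in cellular_links:
        members = [c]
        for d in list(unassigned):
            if all(not conflicts(d, m) for m in members):
                members.append(d)
                unassigned.remove(d)
        sets.append(members)
    return sets
```

Once the sets are formed, spectrum can be allocated per set rather than per link, which is what gives the scheme its fine-grained reuse.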
21. Fog-Based Transcoding for Crowdsourced Video Livecast
- Author
-
Zhang Cong, Xiaoqiang Ma, Qiyun He, and Jiangchuan Liu
- Subjects
Edge device ,Multimedia ,Computer Networks and Communications ,Computer science ,business.industry ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Transcoding ,computer.software_genre ,Computer Science Applications ,World Wide Web ,Synchronization (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Quality (business) ,Electrical and Electronic Engineering ,business ,computer ,media_common - Abstract
Recent years have witnessed the booming popularity of crowdsourced livecast (CLS) platforms, through which numerous amateur broadcasters live stream their video content to viewers around the world. The heterogeneous qualities and formats of the source streams, however, require massive computational resources to transcode them into multiple industry-standard quality versions to serve viewers with distinct configurations, and the delays experienced by viewers in different locations should be well synchronized to support community interactions. This article attempts to address these challenges and to explore the opportunities offered by new-generation computation paradigms, in particular fog computing. We present a novel fog-based transcoding framework for CLS platforms that offloads the transcoding workload to the network edge (i.e., the massive number of viewers). We evaluate our design through a PlanetLab-based experiment and a real-world viewer transcoding experiment.
- Published
- 2017
22. vLocality: Revisiting Data Locality for MapReduce in Virtualized Clouds
- Author
-
Hongbo Jiang, Jiangchuan Liu, Xiaoyi Fan, Kai Peng, and Xiaoqiang Ma
- Subjects
Distributed database ,Computer Networks and Communications ,business.industry ,Computer science ,Distributed computing ,Locality ,Big data ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Virtualization ,Shared resource ,Data access ,Hardware and Architecture ,020204 information systems ,Server ,0202 electrical engineering, electronic engineering, information engineering ,Operating system ,business ,computer ,Software ,Information Systems - Abstract
Recent years have witnessed a surge of new-generation applications involving big data. The de facto framework for big data processing, MapReduce, has been increasingly embraced by both academic and industrial users. Data locality seeks to co-locate computation with data, which effectively reduces remote data access and improves MapReduce’s performance in physical machine clusters. State-of-the-art public clouds, however, heavily rely on virtualization to enable resource sharing and scaling for massive users. In this article, through real-world experiments, we show strong evidence that the conventional notion of data locality is unfortunately not always beneficial for MapReduce in a virtualized environment. The observations suggest that the measure of node-local must be extended to distinguish physical and virtual entities. We develop vLocality, a comprehensive and practical solution for data locality in virtualized environments. It incorporates a novel storage architecture that efficiently mitigates shared disk contention, and an enhanced task scheduling algorithm that prioritizes co-located VMs. We have implemented a prototype of vLocality based on Hadoop 1.2.1, and have validated its effectiveness on a typical virtualized cloud platform consisting of 22 nodes. Our experimental results demonstrate that vLocality can reduce the job finish time to around a quarter of the baseline for typical Hadoop benchmark applications.
- Published
- 2017
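The scheduling idea above, extending node-local to distinguish physical hosts from VMs, can be sketched as a three-level ranking over candidate tasks: prefer a task whose input block sits on the requesting VM itself, then one on a VM sharing the same physical host, then any remaining task. A minimal sketch under assumed data structures (`vm_host`, `block_locations`), not vLocality's actual Hadoop scheduler.

```python
def pick_task(pending, requester_vm, vm_host, block_locations):
    """Pick the next map task for requester_vm, preferring (0) blocks on the
    requesting VM itself, then (1) blocks on a co-located VM (same physical
    host), then (2) remote blocks. vm_host maps VM -> physical host;
    block_locations maps task -> set of VMs holding its input block."""
    def rank(task):
        vms = block_locations[task]
        if requester_vm in vms:
            return 0  # VM-local
        if any(vm_host[v] == vm_host[requester_vm] for v in vms):
            return 1  # host-local: co-located VM, no network transfer
        return 2      # remote
    return min(pending, key=rank)
```

The middle tier is the one plain Hadoop misses: a block on a sibling VM of the same host can be read without crossing the network, but naive node-local scheduling treats it as remote.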
23. Resource Allocation for Heterogeneous Applications With Device-to-Device Communication Underlaying Cellular Networks
- Author
-
Xiaoqiang Ma, Jiangchuan Liu, and Hongbo Jiang
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Quality of service ,Mobile broadband ,Distributed computing ,05 social sciences ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,Shared resource ,Cellular communication ,0508 media and communications ,Telecommunications link ,0202 electrical engineering, electronic engineering, information engineering ,Cellular network ,Resource allocation ,Resource management ,Electrical and Electronic Engineering ,Radio resource management ,business ,Computer network ,Power control - Abstract
Mobile data traffic has been experiencing a phenomenal rise in the past decade. This ever-increasing data traffic puts significant pressure on the infrastructure of state-of-the-art cellular networks. Recently, device-to-device (D2D) communication that smartly explores local wireless resources has been suggested as a complement of great potential, particularly for the popular proximity-based applications with instant data exchange between nearby users. Significant studies have been conducted on coordinating the D2D and the cellular communication paradigms that share the same licensed spectrum, commonly with the objective of maximizing the aggregated data rate. The new generation of cellular networks, however, has long supported heterogeneous networked applications, which have highly diverse quality-of-service (QoS) specifications. In this paper, we jointly consider resource allocation and power control with heterogeneous QoS requirements from the applications. We closely analyze two representative classes of applications, namely streaming-like and file-sharing-like, and develop optimized solutions to coordinate the cellular and D2D communications with the best resource sharing mode. We further extend our solution to accommodate more general application scenarios and larger system scales. Extensive simulations under realistic configurations demonstrate that our solution enables better resource utilization for heterogeneous applications with less possibility of underprovisioning or overprovisioning.
- Published
- 2016
24. Understanding the YouTube partners and their data: Measurement and analysis
- Author
-
Fatourechi Mehrdad, Xu Cheng, Zhang Cong, Jiangchuan Liu, and Xiaoqiang Ma
- Subjects
Service (business) ,Computer Networks and Communications ,business.industry ,Computer science ,Internet privacy ,Big data ,User-generated content ,Online video ,Popularity ,World Wide Web ,Publishing ,Analytics ,Electrical and Electronic Engineering ,business - Abstract
User-generated content, e.g., from YouTube, the most popular online video sharing site, is one of the major sources of today’s big data, and it is crucial to understand its inherent characteristics. Recently, YouTube has started working with content providers (known as YouTube partners) to promote users’ watching and sharing activities. The substantial benefit is to further augment its service and monetize more videos, which is crucial to both YouTube and its partners, as well as to other providers of relevant services. In this paper, our main contribution is to analyze massive amounts of video data from a YouTube partner’s view. We make effective use of Insight, a new analytics service of YouTube that offers simple data analysis for partners. To extract practical guidance from the raw Insight data, we conduct more complex investigations into the inherent features that affect the popularity of the videos. Our findings help YouTube partners redesign their current video publishing strategies, giving them more opportunities to attract views.
- Published
- 2014
25. When mobile terminals meet the cloud: computation offloading as the bridge
- Author
-
Haiyang Wang, Lei Zhang, Yuan Zhao, Xiaoqiang Ma, and Limei Peng
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Distributed computing ,Mobile computing ,Mobile Web ,Cloud computing ,Mobile cloud computing ,Hardware and Architecture ,Cloud testing ,Embedded system ,Computation offloading ,Mobile search ,Mobile technology ,business ,Software ,Information Systems - Abstract
The emergence of cloud computing has been dramatically changing the landscape of services for modern computer applications. Offloading computation to the cloud effectively expands the usability of mobile terminals beyond their physical limits, and greatly extends their battery charging intervals through potential energy savings. In this article, we present an overview of computation offloading in mobile cloud computing. We identify the key issues in developing new applications that effectively leverage cloud resources for computation-intensive modules, or in migrating such modules in existing applications to the mobile cloud. We then analyze two representative applications in detail from both macro and micro perspectives, cloud-assisted distributed interactive mobile applications and cloud-assisted motion estimation for mobile video compression, to illustrate the unique challenges, benefits, and implementation of computation offloading in mobile cloud computing. We finally summarize the lessons learned and present potential future avenues.
- Published
- 2013
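The core offloading trade-off discussed in the abstract above, whether shipping a computation module to the cloud pays off, is commonly modeled by comparing local execution time against transfer-plus-remote-execution time. Below is a minimal sketch of that textbook comparison; the parameter names and the purely time-based criterion (ignoring the energy dimension the article also covers) are assumptions, not the article's model.

```python
def should_offload(cycles, data_bits, f_local_hz, f_cloud_hz, bandwidth_bps):
    """Offload when transferring the input and computing in the cloud is
    faster than computing locally. cycles: CPU cycles the module needs;
    data_bits: input size to ship; f_*: CPU speeds in Hz (cycles/s);
    bandwidth_bps: uplink bandwidth in bits/s."""
    t_local = cycles / f_local_hz
    t_remote = data_bits / bandwidth_bps + cycles / f_cloud_hz
    return t_remote < t_local
```

The sketch captures why offloading favors computation-heavy, data-light modules (e.g., motion estimation on a low-power handset) and loses for small tasks whose transfer delay dominates.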