66 results on "Live video"
Search Results
2. Optimizing QoE and Latency of Live Video Streaming Using Edge Computing and In-Network Intelligence
- Author
-
Alireza Erfanian
- Subjects
transrating, Live video, HLS, video streaming, Multimedia, Computer science, Latency, DASH, Transcoding, SDN, NFV, Edge computing, HAS, Bandwidth, live video streaming, Network intelligence, Quality of Experience - Abstract
Nowadays, HTTP Adaptive Streaming (HAS) has become the de facto standard for delivering video over the Internet. More users have started generating and delivering high-quality live streams (usually in 4K resolution) through popular online streaming platforms, resulting in a rise in live streaming traffic. Typically, video content is generated by streamers and watched by large audiences geographically distributed in locations far away from the streamers. Resource limitations in the network (e.g., bandwidth) make it challenging for network and video providers to meet the users' requested quality. This dissertation leverages edge computing capabilities and in-network intelligence to design, implement, and evaluate approaches to optimize Quality of Experience (QoE) and end-to-end (E2E) latency of live HAS. In addition, improving transcoding performance and optimizing the cost of running live HAS services and the network's backhaul utilization are considered. Motivated by these issues, the dissertation proposes five contributions in two classes: optimizing resource utilization and light-weight transcoding.
Optimizing resource utilization: This class consists of two contributions, ORAVA and OSCAR. They leverage in-network intelligence paradigms, i.e., edge computing, Network Function Virtualization (NFV), and Software Defined Networking (SDN), to introduce two types of Virtual Network Functions (VNFs): Virtual Reverse Proxies (VRPs) and Virtual Transcoder Functions (VTFs). At the network's edge, VRPs are responsible for collecting clients' requests and sending them to an SDN controller. The SDN controller then creates a multicast tree from the origin server to the optimal set of VTFs, delivering only the highest requested bitrate to raise the efficiency of resource allocation. The selected VTFs transcode the received segment to the requested bitrates and transmit them to the corresponding VRPs. The problem of determining multicast tree(s) and selecting VTFs has been formulated as a Mixed Integer Linear Programming (MILP) optimization problem, aiming to minimize the streaming cost and resource utilization while considering delay constraints. 1. ORAVA: presents a cost-aware approach to provide Advanced Video Coding (AVC)-based real-time video streaming services in the network. It transmits the generated bitrates from VTFs to the corresponding VRPs in a unicast manner. 2. OSCAR: extends ORAVA by introducing a new SDN-based live video streaming approach. Instead of unicast transmission, it streams requested bitrates from VTFs to VRPs in a multicast manner, resulting in lower bandwidth consumption. It can also use VTFs with different types of virtual machine instances (i.e., CPU or memory resources) to reduce the total service cost. According to the evaluation results, ORAVA and OSCAR save up to 65% bandwidth compared to state-of-the-art approaches; furthermore, they reduce the number of generated OpenFlow (OF) commands by up to 78% and 82%, respectively.
Light-weight transcoding: This class consists of three contributions, named LwTE, CD-LwTE, and LwTE-Live. Employing edge computing and NFV, they introduce a novel transcoding approach that significantly reduces transcoding time and cost. 3. LwTE: introduces a novel Light-weight Transcoding approach at the Edge in the context of HAS. During the encoding of a video segment at the origin side, computationally intensive search processes are performed. LwTE stores the optimal results of these search processes as metadata for each video bitrate and reuses them at the edge server to reduce the time and computational resources required for transcoding. It applies a store policy to popular segments/bitrates, caching them at the edge, and a transcode policy to unpopular ones, storing only the highest bitrate plus the corresponding metadata (of very small size). 4. CD-LwTE: extends LwTE by proposing Cost- and Delay-aware Light-weight Transcoding at the Edge. As an extension, it introduces resource constraints at the edge and considers a new policy (the fetch policy) for serving requests at the edge. In the same direction, it also adds serving delay to the objective of selecting an appropriate policy for each segment/bitrate, aiming to minimize the total cost and serving delay. 5. LwTE-Live: investigates the cost efficiency of LwTE in the context of live HAS. It utilizes the LwTE approach to save bandwidth in the backhaul network, which may become a bottleneck in live video streaming. The evaluation results show that LwTE performs transcoding at least 80% faster than the conventional transcoding method. By adding new features to the metadata, CD-LwTE reduces transcoding time by up to 97%. Moreover, it decreases streaming costs, including storage, computation, and bandwidth costs, by up to 75%, and reduces delay by up to 48% compared to state-of-the-art approaches. (Alireza Erfanian, Dissertation, Universität Klagenfurt, 2023)
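The VTF-selection side of the optimization described in this abstract can be illustrated with a toy brute-force version of the objective. This is a sketch only: the dissertation formulates the full problem (including multicast-tree construction) as a MILP, and all node names, costs, and delay values below are hypothetical.

```python
# Toy version of the ORAVA/OSCAR selection objective: choose a set of VTF
# nodes that can serve every VRP within a delay bound, at minimum cost.
# All names and numbers are invented for illustration.
from itertools import combinations

vtf_cost = {"vtf1": 4.0, "vtf2": 3.0, "vtf3": 5.0}       # activation cost per VTF
delay = {("vtf1", "vrpA"): 10, ("vtf1", "vrpB"): 40,      # E2E delay VTF -> VRP (ms)
         ("vtf2", "vrpA"): 35, ("vtf2", "vrpB"): 15,
         ("vtf3", "vrpA"): 12, ("vtf3", "vrpB"): 14}
vrps = ["vrpA", "vrpB"]
DELAY_BOUND = 20  # ms

def best_vtf_set():
    """Brute-force the cheapest feasible VTF subset (a MILP solver scales better)."""
    best = (float("inf"), None)
    for r in range(1, len(vtf_cost) + 1):
        for subset in combinations(vtf_cost, r):
            # feasibility: every VRP reachable from some selected VTF in time
            if all(any(delay[(v, p)] <= DELAY_BOUND for v in subset) for p in vrps):
                cost = sum(vtf_cost[v] for v in subset)
                best = min(best, (cost, subset))
    return best

print(best_vtf_set())
```

With these numbers, vtf3 alone covers both VRPs within the bound, so it wins despite not being the cheapest node individually, which is the kind of trade-off the MILP captures.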
- Published
- 2021
3. Performance of Low-Latency HTTP-based Streaming Players
- Author
-
Thiago Teixeira, Bo Zhang, and Yuriy Reznik
- Subjects
Dynamic Adaptive Streaming over HTTP, Live video, Computer science, Encoding, Latency, Network conditions, Live streaming, Encoder, Computer network - Abstract
Reducing end-to-end streaming latency is critical to HTTP-based live video streaming. There are currently two technologies in this domain: Low-Latency HTTP Live Streaming (LL-HLS) and Low-Latency Dynamic Adaptive Streaming over HTTP (LL-DASH). Many players support LL-HLS and/or LL-DASH protocols, including Apple's AVPlayer, Shaka player, HLS.js, Dash.js, and others. This paper is dedicated to the analysis of the performance of low-latency players and streaming protocols. The evaluation is based on a series of live streaming experiments, repeated using identical video content, encoders, encoding profiles, and network conditions, emulated using traces of real-world networks. Several performance metrics, such as average stream bitrate, the amount of downloaded media data, streaming latency, as well as buffering and stream-switching statistics, are captured and reported in our experiments. These results are subsequently used to describe the observed differences in the performance of LL-HLS- and LL-DASH-based players.
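The per-player metrics this study reports can be computed from a simple segment-download trace. The sketch below is an assumption about how such metrics might be derived, not the authors' measurement code; the trace format and numbers are hypothetical.

```python
# Compute average stream bitrate, total downloaded media, and quality-switch
# count from a (duration_s, bitrate_kbps) segment trace. Illustrative only.
def trace_metrics(segments):
    """segments: list of (duration_s, bitrate_kbps) per downloaded segment."""
    total_time = sum(d for d, _ in segments)
    total_kbit = sum(d * b for d, b in segments)            # kbit downloaded
    # a "switch" is any change of bitrate between consecutive segments
    switches = sum(1 for (_, a), (_, b) in zip(segments, segments[1:]) if a != b)
    return {"avg_bitrate_kbps": total_kbit / total_time,
            "downloaded_mbit": total_kbit / 1000.0,
            "switches": switches}

print(trace_metrics([(2, 3000), (2, 3000), (2, 1500), (2, 3000)]))
```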
- Published
- 2021
4. Popularity-based transcoding workload allocation for improving video quality in live streaming systems
- Author
-
Dayoung Lee and Minseok Song
- Subjects
Live video, Computer science, Workload, Transcoding, Video quality, Popularity, Live streaming, Server, Computer network, Communication channel - Abstract
Transcoding is essential for live video streaming systems, but video streams in many channels may not be transcoded due to the processing capacity limitations of transcoding servers with multiple CPU nodes. To address this, we propose an algorithm that determines the transcodeable bitrate versions for each channel and allocates the transcoding tasks, by taking into account video quality, popularity, and workload balancing. Simulation results show that it can improve popularity-weighted video quality by up to 10.7% over other popularity-based alternatives.
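The allocation idea in this abstract (spend limited transcoding capacity where popularity-weighted quality gain is highest) can be sketched as a greedy knapsack-style heuristic. This is an illustration of the general idea, not the authors' algorithm; the task list and numbers are invented.

```python
# Greedy sketch: given a CPU capacity cap, pick (channel, bitrate) transcoding
# tasks by benefit-per-CPU-unit, where benefit = popularity-weighted quality gain.
def allocate(tasks, capacity):
    """tasks: list of (channel, bitrate, cpu_cost, weighted_quality_gain)."""
    chosen, used = [], 0.0
    # rank by benefit density, then pack greedily under the capacity cap
    for ch, br, cost, benefit in sorted(tasks, key=lambda t: t[3] / t[2], reverse=True):
        if used + cost <= capacity:
            chosen.append((ch, br))
            used += cost
    return chosen

tasks = [("ch1", "1080p", 4.0, 8.0),   # popular channel, large gain
         ("ch1", "720p",  2.0, 5.0),
         ("ch2", "1080p", 4.0, 2.0),   # unpopular channel
         ("ch2", "720p",  2.0, 1.5)]
print(allocate(tasks, capacity=6.0))
```

With a capacity of 6 CPU units, the greedy pass keeps both versions of the popular channel and drops the unpopular one entirely, mirroring the popularity-based trade-off the paper studies.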
- Published
- 2020
5. From Do You See What I See? to Do You Control What I See? Mediated Vision, From a Distance, for Eyewear Users
- Author
-
Radu-Daniel Vatavu, Adrian Aiordachioae, and Cristian Pamparău
- Subjects
Live video, Multimedia, Eyewear, Computer science, Control, Wearable computer, Video camera, Field of view, Video streaming - Abstract
We discuss engineering aspects of shifting from “do you see what I see?” applications that stream the user’s field of view to remote viewers toward “do you control what I see?” features in which remote viewers are given the opportunity and the tools to control the primary user’s field of view. To this end, we present two applications: (1) smartglasses with an embedded video camera for live video streaming, and (2) the HoloLens HMD, which presents users with mediated versions of the visual world controlled by remote viewers.
- Published
- 2020
6. A Preliminary Study of Emotional Contagion in Live Streaming
- Author
-
Jiajing Guo and Susan R. Fussell
- Subjects
Text chat, Live video, Internet privacy, Sentiment analysis, Emotional contagion, Psychology, Live streaming - Abstract
Live streaming is an increasingly popular communication medium that allows real-time interaction between a broadcaster and an audience of any size. Using archived YouTube live video transcripts and associated live chat messages, we find evidence for emotional contagion in live streams: sentiment in live video oral transcripts and viewers' text chat is associated with the sentiment in subsequent viewers' comments. This relationship is stronger between viewers' chat messages and the subsequent chat than between the oral messages in the video and the subsequent chat. However, in some types of live streams, negative sentiment in the live video is followed by less negative chat. We conclude with a discussion of future research and potential uses of the dataset.
- Published
- 2020
7. VideoIC: A Video Interactive Comments Dataset and Multimodal Multitask Learning for Comments Generation
- Author
-
Jieting Chen, Weiying Wang, and Qin Jin
- Subjects
Live video, Information retrieval, Relation (database), Computer science, Multi-task learning, Multimodal interaction, Feature (machine learning), Video streaming - Abstract
Live video interactive commenting, a.k.a. danmaku, is an emerging social feature on online video sites, which involves rich multimodal information interaction among viewers. In order to support various related research, we build a large-scale video interactive comments dataset called VideoIC, which consists of 4951 videos spanning 557 hours and 5 million comments. Videos are collected from popular categories on the 'Bilibili' video streaming website. Compared to other existing danmaku datasets, our VideoIC contains richer and denser comment information, with 1077 comments per video on average. High comment density and diverse video types make VideoIC a challenging corpus for various research such as automatic video comment generation. We also propose a novel model based on multimodal multitask learning for comment generation (MML-CG), which integrates multiple modalities to achieve effective comment generation and temporal relation prediction. A multitask loss function is designed to train both tasks jointly in an end-to-end manner. We conduct extensive experiments on both the VideoIC and Livebot datasets. The results prove the effectiveness of our model and reveal some features of danmaku.
- Published
- 2020
8. Interactive style transfer to live video streams
- Author
-
Michal Kučera, Ondřej Texler, David Futschik, Daniel Sýkora, Šárka Sochorová, Menglei Chai, Ondřej Jamriška, and Sergey Tulyakov
- Subjects
Style (visual arts), Painting, Live video, Multimedia, Computer science, Style transfer - Abstract
Our tool allows artists to create living paintings or stylize a live video stream using their own artwork with minimal effort. While an artist is painting the image, our framework learns their artistic style on the fly and transfers it to the provided live video stream in real time.
- Published
- 2020
9. Real-time multi-user spatial collaboration using ARCore
- Author
-
Dongxing Cao
- Subjects
Live video, Annotation, Computer science, Human–computer interaction, Augmented reality, Multi-user - Abstract
This paper proposes a collaboration application that allows multiple users to add extra content to live video streaming, based on real-time augmented reality annotation. Compared to previous work, we consider the integration of remote collaboration with a co-located collaborative mode to be one of the novel aspects of the proposed application. The AR-based collaborative system can render annotations directly on the environment, which helps local users easily recognize the intention the remote helper wants to convey. We also describe how the application works.
- Published
- 2020
10. A cloud-based end-to-end server-side dynamic ad insertion platform for live content
- Author
-
Tankut Akgul, Alihan Iplik, and Samet Ozcan
- Subjects
Live video, End-to-end principle, Computer science, Cloud computing, The Internet, Digital signal processing, Server-side, Personalization, Computer network - Abstract
In this paper, we present a cloud-based live video streaming and advertising platform solution that enables internet-based live broadcasts for TV channels. The platform supports server-side dynamic ad insertion with automated ad detection and personalized ad placement. A unique feature of our solution is an interactive, personalized single ad that can be inserted at desired locations in the live stream independent of the broadcaster's commercial break period, which increases ad viewability up to 95% and completion rate up to 97% on average. The platform also provides management interfaces both for the broadcaster and for advertising agencies, enabling fully automated programmatic TV ads.
- Published
- 2020
11. Be Part Of It: Spectator Experience in Gaming and Esports
- Author
-
Günter Wallner, Pejman Mirza-Babaei, Sven Charleer, Steven Schirra, Manfred Tscheligi, Kathrin Gerling, and Simone Kriglstein
- Subjects
Live video, Media studies, Sociology - Abstract
With rapid advances in streaming technology and the rise of esports, spectating other people playing video games has become a mass phenomenon. Today, both live video game streaming and esports are a booming business attracting millions of viewers. This offers an opportunity for Human-Computer Interaction (HCI) research to explore how to support spectator experiences. This workshop aims to foster discussion on how technology and HCI can help to transform the act of spectating games, and particularly esports, from a passive (watching) to a more active and engaging experience. Through this workshop we aim to explore opportunities for research, promote interdisciplinary exchange, increase awareness, and establish a community on the subject matter.
- Published
- 2020
12. Revealing Donation Dynamics in Social Live Video Streaming
- Author
-
Hongru Li, Ran Tian, Siqi Shen, Adele Lu Jia, and Yuanxing Rao
- Subjects
Live video, Social phenomenon, Donation, Internet privacy - Abstract
Social live video streaming has become a global economic and social phenomenon with the rise of platforms like Facebook-Live, Youtube-Live, and Twitch. The phenomenon of user donation in these communities is rapidly emerging, yet we have a very limited understanding of it. In this preliminary work, we reveal the dynamics of user donations based on a publicly available (anonymized) dataset with detailed information on over 2 million users and donations worth in total over 200 million US dollars. Among other results, we find that (i) both the donations received and the donations made are highly skewed, (ii) user donation is strongly correlated with the atmosphere (the volume and the sentiment of real-time user chats) and, in the long run, with the churn of broadcasters, and (iii) donors are loyal and very generous to their favorite broadcasters while at the same time supporting others moderately. Our findings represent a first step towards understanding user donations, which will shed light on the donor retention problem and the design of social live video streaming services.
- Published
- 2020
13. Live Video Streaming Optimization Based on Deep Reinforcement Learning
- Author
-
Yuxiang Hu, Xueshuai Zhang, and Ziyong Li
- Subjects
Live video, Computer science, Real-time computing, User experience design, Reinforcement learning, Quality of experience - Abstract
Video players employ adaptive bitrate (ABR) algorithms in video-on-demand (VoD) scenarios to improve user-perceived quality of experience (QoE), but their performance declines markedly in live video streaming scenarios. To this end, we propose a novel deep reinforcement learning (DRL) based live video streaming optimization approach. First, we identify the optimization objectives by comparing the VoD scenario with the live video streaming scenario. Then, according to these optimization conditions, we establish a QoE optimization model in combination with a state-of-the-art DRL algorithm. We compare our algorithm with state-of-the-art ABR algorithms in a simulator with real-world video and network traces. Simulation results show that the proposed algorithm improves user experience quality by 5.6% on average compared with existing algorithms.
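QoE models of the kind such ABR work optimizes are typically linear combinations of achieved bitrate, rebuffering, latency, and smoothness. The function below is a generic sketch of that shape; the weights and the exact terms this paper uses are assumptions, not taken from the source.

```python
# Generic linear QoE model for live ABR (illustrative weights, not the paper's).
def qoe(bitrates_mbps, rebuffer_s, latency_s, w_rebuf=4.0, w_lat=1.0, w_smooth=1.0):
    quality = sum(bitrates_mbps)                       # reward delivered bitrate
    # penalize bitrate oscillation between consecutive segments
    smoothness_penalty = sum(abs(b - a) for a, b in zip(bitrates_mbps, bitrates_mbps[1:]))
    return (quality
            - w_rebuf * rebuffer_s                     # stalls hurt most
            - w_lat * latency_s                        # live streaming adds a latency term
            - w_smooth * smoothness_penalty)

print(qoe([3.0, 3.0, 1.5, 3.0], rebuffer_s=0.5, latency_s=2.0))
```

A DRL agent trained against such a model learns that dropping to 1.5 Mbps costs both quality and smoothness, so it is only worthwhile when it avoids a larger rebuffering or latency penalty.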
- Published
- 2020
14. Predicting Traffic Accidents with Event Recorder Data
- Author
-
Hiroyuki Toda, Maya Okawa, Yusuke Tanaka, Takimoto Yoshiaki, Takeshi Kurashima, and Shuhei Yamamoto
- Subjects
Hazard function, Location data, Live video, Recurrent neural network, Traffic accident, Computer science, Dashboard, Data mining, Occurrence time - Abstract
Large amounts of data on accidents are continually being collected by dashboard cameras (dashcams). In this paper, we address the problem of predicting the occurrence of accidents: our goal is to predict when accidents will occur based on stored dashcam data and analysis of live video streams. We propose a survival analysis model for predicting the event occurrence time. The occurrence of accidents involves changes in the situation of the driver's own car and its surroundings. Therefore, the hazard function of the proposed model is modeled by a convolutional recurrent neural network that can capture such changes from high-dimensional time-series information, i.e., video. Another characteristic of our model is its incorporation of location data, because how likely the events are to occur strongly depends on location. Our model can predict accidents by simultaneously considering video and location data. Experiments on real-world event recorder data show that our model can more accurately predict accident occurrences than baseline models.
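The survival-analysis framing above can be illustrated in discrete time: given per-step hazards h_t (the probability the accident occurs at step t given it has not yet occurred), the probability it occurs exactly at step t is h_t times the product of (1 - h_s) for s < t. In the paper the hazards come from a convolutional recurrent network over video and location; the numbers below are made up for illustration.

```python
# Convert a sequence of discrete-time hazards into the event-time probability
# mass function: P(T = t) = h_t * prod_{s < t} (1 - h_s). Hazards are invented.
def event_time_pmf(hazards):
    surv, pmf = 1.0, []          # surv = probability of surviving all prior steps
    for h in hazards:
        pmf.append(surv * h)
        surv *= (1.0 - h)
    return pmf

print(event_time_pmf([0.1, 0.2, 0.5]))
```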
- Published
- 2019
15. 4G-based Remote Manual Control for Unmanned Surface Vehicles
- Author
-
Kai Yan, Yong Yue, Supeng Kong, and Xiaohui Zhu
- Subjects
Live video, Unmanned surface vehicle, Computer science, Transmitter, Real-time computing, Remote control, Android app - Abstract
Remote manual control for USVs is essential when a USV fails to navigate or becomes trapped in a complex river environment. Conventional transmitters and receivers are widely used to provide remote manual control for USVs. However, constrained by the transmitter's power, the communication distance between the USV and the user is limited. In this paper, a 4G-based remote control system for USVs is proposed. With the help of live video from a camera deployed on the USV, users can remotely control the USV using an Android app and the 4G network. Compared to traditional remote control methods, our approach dramatically extends the control distance and improves the control flexibility of USVs. The approach is applied to a USV for water quality monitoring. Experimental results show that it has low communication delay and can remotely control the USV during navigation.
- Published
- 2019
16. Documenting Physical Objects with Live Video and Object Detection
- Author
-
Laurent Denoue, Daniel Avrahami, and Scott Carter
- Subjects
Live video, Documentation, Human–computer interaction, Computer science, Object recognition, Object detection - Abstract
Responding to requests for information from an application, a remote person, or an organization that involve documenting the presence and/or state of physical objects can lead to incomplete or inaccurate documentation. We propose a system that couples information requests with a live object recognition tool to semi-automatically catalog requested items and collect evidence of their current state.
- Published
- 2019
17. BitLat
- Author
-
Tongtong Feng, Chen Wang, Tengfei Cao, Neng Zhang, and Jianfeng Guan
- Subjects
Live video, Artificial neural network, Computer science, Quality of experience, Latency - Abstract
With the growing popularity and prosperity of live streaming applications, users naturally confront quality of experience (QoE) degradation, especially in dynamic environments arising from non-negligible factors such as high latency and intermittent bitrate. In this paper, we propose an efficient adaptive bitrate (ABR) algorithm called BitLat to achieve both bitrate control and latency control. BitLat is based on reinforcement learning to achieve strong adaptability for dealing with complex and changing network conditions. More specifically, in our work, we determine the specific value of the latency threshold with the help of a current advanced algorithm, and design the structure of the neural network used in reinforcement learning, the features used in the training process, and the corresponding reward function. Additionally, we use a Dynamic Reward Method to further enhance the performance. Comprehensive experiments demonstrate that BitLat outperforms state-of-the-art ABR algorithms, with improvements in average QoE of 20%-62%.
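A reward of the general shape BitLat optimizes (reward bitrate, penalize rebuffering and latency beyond a threshold) can be sketched as follows. The threshold and weights are assumptions for illustration, not the paper's values.

```python
# Illustrative RL reward combining bitrate control and latency control:
# reward bitrate, penalize stalls, and penalize only latency above a threshold.
def reward(bitrate_mbps, rebuffer_s, latency_s,
           lat_threshold_s=3.0, w_rebuf=2.0, w_lat=1.0):
    lat_excess = max(0.0, latency_s - lat_threshold_s)   # latency under threshold is free
    return bitrate_mbps - w_rebuf * rebuffer_s - w_lat * lat_excess

print(reward(3.0, 0.2, 4.0))
```

Thresholding the latency term means the agent is not pushed to shrink latency without bound, only to keep it under the target, which is the role the abstract assigns to the latency threshold.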
- Published
- 2019
18. HD3
- Author
-
Xiaolan Jiang and Yusheng Ji
- Subjects
Live video, Computer science, Real-time computing, Reinforcement learning, Video quality, Live streaming - Abstract
Live streaming applications have become increasingly popular recently, and they expose new technical challenges compared to regular video streaming. High video quality and low latency are the two main requirements in live streaming scenarios. A live streaming application needs to make bitrate and target buffer level decisions, as well as set a continuous latency-limit value to skip video frames. We formulate the live streaming task as a reinforcement learning problem with discrete-continuous hybrid action spaces, then propose a novel deep reinforcement learning (DRL) algorithm, HD3, which can take hybrid actions to solve it. We compare HD3 with several state-of-the-art DRL algorithms in various network environments, and the simulation results show that HD3 outperforms all the other comparison schemes. We emphasize that HD3 produces a single agent that performs well across different network conditions and video scenes.
- Published
- 2019
19. The ACM Multimedia 2019 Live Video Streaming Grand Challenge
- Author
-
Abdelhak Bentaleb, Wei Tsang Ooi, Kai Zheng, Yi Li, Gang Yi, Weihua Li, Jiangchuan Liu, Yong Cui, and Dan Yang
- Subjects
Live video, Multimedia, Computer science - Abstract
Live video streaming delivery over Dynamic Adaptive Video Streaming (DASH) is challenging as it requires low end-to-end latency, is more prone to stalls, and the receiver has to decide online which representation at which bitrate to download and whether to adjust the playback speed to control the latency. To encourage the research community to come together to address this challenge, we organize the Live Video Streaming Grand Challenge at ACM Multimedia 2019. This grand challenge provides a simulation platform onto which the participants can implement their adaptive bitrate (ABR) logic and latency control algorithm, and then benchmark against each other using a common set of video traces and network traces. The ABR algorithms are evaluated using a common Quality-of-Experience (QoE) model that accounts for playback bitrate, latency constraint, frame-skipping penalty, and rebuffering penalty.
- Published
- 2019
20. Continuous Bitrate & Latency Control with Deep Reinforcement Learning for Live Video Streaming
- Author
-
Jing Wang, Lei Zhang, Qiwei Shen, and Ruying Hong
- Subjects
Live video, Computer science, Real-time computing, Reinforcement learning, Throughput, Quality of experience, Latency - Abstract
In this paper, we introduce a continuous bitrate-control and latency-control model for the Live Video Streaming Challenge. Our model is based on Deep Deterministic Policy Gradient, which is popular for continuous control tasks. It can exert fine-grained control through continuous actions and does not need to discretize the continuous "latency limit", a buffer threshold that minimizes end-to-end delay by frame skipping. In all considered live video scenarios, our model provides a better quality of experience, with improvements in average QoE of 3.6% over a DQN baseline that discretizes the "latency limit". Additionally, the challenge results show the effectiveness and applicability of the proposed model, which achieved top performance in three different networks with high, low, and oscillating throughput, and ranked second in the network with medium throughput.
- Published
- 2019
21. S.Wing
- Author
-
Auður Anna Jónsdóttir, Linda Ng Boyle, Haena Kim, and Jai Shankar
- Subjects
Live video, Computer science, Certification, Modular design, Computer security, On demand, Seat belt, Distracted driving - Abstract
This video demonstrates the proposed Super Wing (S.Wing), a self-driving modular transportation system that provides children the opportunity to travel by themselves. The safety of children in vehicles is often threatened by distracted driving behaviors. With the on-demand S.Wing system, a child can travel alone based on the mutual needs of the parents and the child. S.Wing includes a supervision service: a certified attendant rides with the child on board. This ensures that the child's seat belt is safely fastened and that help can be provided immediately. A modular S.Wing pod is ordered by an adult through a mobile application, which also offers the adult the opportunity to monitor and interact with their child via live video streaming. The system has the potential to improve children's safety while they are on the road and to allow them to engage in social and school-related activities without relying on adults for transportation.
- Published
- 2019
22. L3VTP
- Author
-
Weihua Li, Mowei Wang, Dan Yang, Gang Yi, Yong Cui, and Yi Li
- Subjects
Live video, Transmission (telecommunications), Computer science, Real-time computing, Latency - Published
- 2019
23. A measurement study of YouTube 360° live video streaming
- Author
-
Jun Yi, Zhisheng Yan, and Shiqing Luo
- Subjects
Live video, Multimedia, Computer science, 1080p, Broadcasting, Virtual reality, Dynamic Adaptive Streaming over HTTP, Measurement study, Bandwidth - Abstract
360° live video streaming is becoming increasingly popular. While providing viewers with an enriched experience, 360° live video streaming is challenging to achieve since it requires significantly higher bandwidth and a powerful computation infrastructure. A deeper understanding of this emerging system would benefit both viewers and system designers. Although prior works have extensively studied regular video streaming and 360° video-on-demand streaming, we for the first time investigate the performance of 360° live video streaming. We conduct a systematic measurement of YouTube's 360° live video streaming using various metrics in multiple practical settings. Our key findings suggest that viewers should avoid live streaming 4K 360° video, even when dynamic adaptive streaming over HTTP (DASH) is enabled; 1080p 360° live video, in contrast, can be played smoothly. However, the extremely large one-way video delay makes it feasible only for delay-tolerant broadcasting applications rather than real-time interactive applications. More importantly, we conclude from our results that the primary design weaknesses of current systems lie in inefficient server processing, non-optimal rate adaptation, and conservative buffer management. Our research insights will help to build a clear understanding of today's 360° live video streaming and lay a foundation for future research on this emerging yet relatively unexplored area.
- Published
- 2019
24. Red5 network
- Author
-
Davide Lucchi and Chris Allen
- Subjects
Live video, Real-time video, Computer science, Video streaming, Latency, Encryption, WebRTC, Computer network - Abstract
Real-time video streaming solutions have seen exponential growth over the past years, with their main limit being the high costs associated with the amount of outgoing bandwidth required to serve high-quality content to millions of users. This demo paper presents Red5 Network, a decentralized live video streaming platform which lowers the cost of bandwidth while guaranteeing as close to real-time end-to-end latency as possible and the delivery of encrypted streams. Costs are lowered by using resources shared by users and managed through a blockchain. The Red5 Network demo demonstrates that it can create a peer-to-peer network between the shared resources to deliver live streams with sub-500 ms end-to-end latency while compensating the users proportionally to the resources that they provide.
- Published
- 2019
25. LIME
- Author
-
Bo Han, Matteo Varvello, Xing Liu, and Feng Qian
- Subjects
Live video ,Multimedia ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020206 networking & telecommunications ,020201 artificial intelligence & image processing ,02 engineering and technology ,Network conditions ,computer.software_genre ,computer ,Pipeline (software) ,Live streaming - Abstract
Personalized live video streaming is an increasingly popular technology that allows a broadcaster to share videos in real time with worldwide viewers. Compared to video-on-demand (VOD) streaming, experimenting with personalized live video streaming is harder due to its intrinsic live nature, the need for worldwide viewers, and a more complex data collection pipeline. In this paper, we make several contributions to both experimenting with and understanding today's commercial live video streaming services. First, we develop LIME (Live video MEasurement platform), a generic and holistic system allowing researchers to conduct crowd-sourced measurements on both commercial and experimental live streaming platforms. Second, we use LIME to perform, to the best of our knowledge, a first study of personalized 360° live video streaming on two commercial platforms, YouTube and Facebook. During a 7-day study, we have collected a dataset from 548 paid Amazon Mechanical Turk viewers from 35 countries who have watched more than 4,000 minutes of 360° live videos. Using this unique dataset, we characterize 360° live video streaming performance in the wild. Third, we conduct controlled experiments through LIME to shed light on how to make 360° live streaming (more) adaptive in the presence of challenging network conditions.
- Published
- 2019
26. Spotility
- Author
-
Jens Herder, Bektur Ryskeldiev, Yoichi Ochiai, Toshiharu Igarashi, Michael Cohen, and Junjian Zhang
- Subjects
Live video ,Collaborative software ,SIMPLE (military communications protocol) ,business.industry ,Computer science ,05 social sciences ,Novelty ,Mobile computing ,020207 software engineering ,02 engineering and technology ,Space (commercial competition) ,Mixed reality ,Feature (computer vision) ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,business ,050107 human factors - Abstract
Live video streaming is becoming increasingly popular as a form of interaction in social applications. One of its main advantages is the ability to immediately create and connect a community of remote users on the spot. In this paper we discuss how this feature can be used for crowdsourced completion of simple visual search tasks (such as finding specific objects in libraries and stores, or navigating around live events) and for social interactions through mobile mixed reality telepresence interfaces. We present a prototype application that allows users to create a mixed reality space with photospherical imagery as a background and interact with other connected users through viewpoint, audio, and video sharing, as well as real-time annotations in mixed reality space. Believing in the novelty of our system, we conducted a short series of interviews with industry professionals on its possible applications. We discuss proposed use cases for user evaluation, as well as outline future extensions of our system.
- Published
- 2018
27. Impact factors on live video streams for mobile devices
- Author
-
Daniel Cunha da Silva, Pedro B. Velloso, and Antonio A. de A. Rocha
- Subjects
Live video ,business.product_category ,Multimedia ,business.industry ,Computer science ,media_common.quotation_subject ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Live streaming ,Transmission (telecommunications) ,0202 electrical engineering, electronic engineering, information engineering ,Internet access ,020201 artificial intelligence & image processing ,The Internet ,Quality (business) ,Video streaming ,business ,computer ,Mobile device ,media_common - Abstract
Mobile live streaming is expected to increase significantly in the coming years. However, general changes in infrastructure, Internet access, and user devices may have created a gap between the current understanding of transmission and the real problems, considering that video streaming is one of the most popular Internet applications. Using the log files of a large CDN, this work evaluates which factors most influence the transmission quality experienced by mobile users of popular live video streams.
- Published
- 2018
28. Congestion Control for Future Mobile Networks
- Author
-
Weiguang Wang, Itzcak Pechtalt, Marco Dias Silva, Kevin Smith, Simone Mangiante, Brighten Godfrey, Amit Navon, and Michael Schapira
- Subjects
Live video ,Computer science ,business.industry ,020206 networking & telecommunications ,02 engineering and technology ,Field tests ,TCP congestion-avoidance algorithm ,Network congestion ,Software deployment ,Protocol design ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Cellular network ,Quality of experience ,business ,Computer network - Abstract
The complexity and volatility of emerging mobile networks, which are intended to support extremely demanding applications such as high-resolution live video and AR/VR, pose immense challenges for congestion control. We present MORC, a novel rate-control protocol for mobile networks. MORC builds on the PCC protocol design framework to strike a balance between low latency and high throughput. Lab trials and early field tests show that MORC outperforms traditional TCP congestion control and the recent BBR protocol, achieving faster file download times, higher resiliency to network changes, better bandwidth utilization, and improved quality of experience for video clients. We discuss deployment scenarios and future research.
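The PCC design framework MORC builds on can be sketched as online rate control: the sender probes slightly higher and lower rates, scores each with a utility function, and moves in the direction of higher utility. The utility form below is an assumed illustration, not MORC's actual function:

```python
# PCC-style online rate control sketch (utility form is an assumption,
# not MORC's actual objective).
def utility(rate_mbps, loss, rtt_inflation):
    # Reward goodput, penalize loss and latency inflation heavily.
    return rate_mbps * (1 - loss) - 10 * rate_mbps * loss - 5 * rate_mbps * rtt_inflation

def pcc_step(rate, probe, measure):
    """Probe rate +/- probe; measure() returns (loss, rtt_inflation) at a rate.
    Move toward whichever direction yields higher utility."""
    u_up = utility(rate + probe, *measure(rate + probe))
    u_down = utility(rate - probe, *measure(rate - probe))
    return rate + probe if u_up > u_down else rate - probe
```

The appeal on volatile mobile links is that the controller reacts to measured performance rather than to hard-coded loss heuristics as in traditional TCP congestion control.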
- Published
- 2018
29. The Shared Individual
- Author
-
Stefan Stanisic, Asreen Rostami, and Emma Bexell
- Subjects
Live video ,Point (typography) ,Computer science ,05 social sciences ,sync ,02 engineering and technology ,Human–computer interaction ,020204 information systems ,Synchronization (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Video streaming ,Performing arts ,050107 human factors - Abstract
The Shared Individual is a live collaborative Mixed-Reality Performance in which a group of audience members can observe themselves from an individual's point of view. In this performance, a performer shares her view with audience members by wearing a head-mounted camera and streaming live video. By wearing a head-mounted display, audience members can see themselves and follow the performer's instructions to 'occupy' her body and become her. This instruction, delivered in the form of a performance, is designed to help the audience sync with the performer in three stages: visual synchronization, physical synchronization, and emotional synchronization.
- Published
- 2018
30. Remote controlled human navigational assistance for the blind using intelligent computing
- Author
-
K. M. Anand kumar, Akhilesh Krishnan, Deepakraj G, and N Nishanth
- Subjects
Microcontroller ,Live video ,Walking stick ,Intelligent computing ,Human–computer interaction ,law ,Phone ,Computer science ,Mobile computing ,Remote control ,law.invention - Abstract
The smart walking stick helps visually challenged people reach their destination with the help of humans guiding them remotely. A lot of work and research is being done to find ways to improve life for visually challenged people. There are multiple walking sticks and devices which help the user move around indoor and outdoor locations, but none of them provide run-time autonomous navigation with direct human assistance. Blind Assistance through Remote Control (BARC) is a device which provides aid to the visually challenged, by humans, through a web platform, from anywhere. An image sensor mounted on the BARC transmits live video to the volunteer's phone, which helps the volunteer control the stick remotely and navigate the blind user to the destination. A web platform acts as an intermediary between the visually challenged and the volunteers. It is used by the blind to send requests and find volunteers willing to help them navigate to their destination. BARC is a passive intelligent stick which combines mobile computing with hardware support such as a microcontroller, an image sensor, and ultrasonic sensors to navigate the visually challenged to their destination.
- Published
- 2017
31. You as a Puppet
- Author
-
Amy Koike, Ippei Suzuki, Tatsuya Minagawa, Yoichi Ochiai, Keisuke Kawahara, and Mose Sakashita
- Subjects
Live video ,Multimedia ,Computer science ,05 social sciences ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,ComputingMilieux_PERSONALCOMPUTING ,Optical head-mounted display ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,GeneralLiterature_MISCELLANEOUS ,User studies ,Difficulty carrying ,Puppetry ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Animatronics ,User interface ,Performing arts ,computer ,050107 human factors ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We propose an immersive telepresence system for puppetry that transmits a human performer's body and facial movements into a puppet, with audiovisual feedback to the performer. Cameras carried in place of the puppet's eyes stream live video to the HMD worn by the performer, so that performers can see the images from the puppet's eyes with their own eyes and gain a visual understanding of the puppet's surroundings. In conventional methods of manipulating a puppet (a hand puppet, a string puppet, or a rod puppet), there is a need to practice manipulating puppets, and it is difficult to carry out interactions with the audience. Moreover, puppeteers must be positioned exactly where the puppet is. The proposed system addresses these issues by enabling a human performer to manipulate the puppet remotely using his or her body and facial movements. We conducted several user studies with both beginners and professional puppeteers. The results show that, unlike the conventional method, the proposed system facilitates the manipulation of puppets, especially for beginners. Moreover, this system allows performers to enjoy puppetry and fascinate audiences.
- Published
- 2017
32. FARM 2017 performances
- Author
-
Alex McLean
- Subjects
Live video ,Functional programming ,Multimedia ,media_common.quotation_subject ,Live music ,Art ,Live coding ,computer.software_genre ,Generative art ,computer ,GeneralLiterature_MISCELLANEOUS ,Visual arts ,media_common - Abstract
A concert of performances in the FARM workshop tradition, taking place 9th September 2017 in the Old Fire Station, Oxford. The performers will all use functional programming and related techniques, to create live music and visuals. We introduce this evening by describing the ideas and technologies behind the performances, together with biographies of the artists involved.
- Published
- 2017
33. Delivery of Live Watermarked Video in CDN
- Author
-
Patrick Maillé, Gwendal Simon, and Kun He
- Subjects
Live video ,business.industry ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,Scalable algorithms ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,020206 networking & telecommunications ,020201 artificial intelligence & image processing ,02 engineering and technology ,Leak detection ,business ,Digital watermarking ,Computer network - Abstract
To address the problem of illegal re-streaming of video streams, existing solutions watermark the legal video so as to trace the users who leak the stream to illegal platforms. However, these solutions neither aim at tracking leaks as fast as possible, nor adapt to the number of users. We present a CDN-based adaptive delivery architecture for watermarked streaming. We propose an algorithm to generate unique sequences of watermarks for the legal delivery. This algorithm is adaptive in the number of users and optimal in the time needed to detect the leak. It meets the demand of live video providers who do not know in advance the number of clients for a stream. Our algorithm copes with thousands of new clients per second and enables leak detection in less than five minutes with only five watermarks for live video streams watched by one billion clients.
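The core idea of unique watermark sequences can be sketched as a base-w codeword per client: each segment carries one of w watermark variants, and the per-segment choice spells out the client's identity. This is a simplified illustration, not the paper's exact construction; note that with five variants, thirteen segments already distinguish over a billion clients (5^13 ≈ 1.2 × 10^9), consistent with the scale quoted above:

```python
# Sketch: per-client watermark sequences as base-w codewords (simplified
# illustration of the idea, not the paper's actual algorithm).
def watermark_sequence(client_id, num_variants, num_segments):
    """Encode client_id as base-`num_variants` digits, one per segment.
    With w variants and s segments, w**s clients can be told apart."""
    assert client_id < num_variants ** num_segments
    seq = []
    for _ in range(num_segments):
        seq.append(client_id % num_variants)  # least-significant digit first
        client_id //= num_variants
    return seq

def identify_leaker(observed_seq, num_variants):
    """Recover the client id from a leaked sequence of watermark variants."""
    cid = 0
    for digit in reversed(observed_seq):
        cid = cid * num_variants + digit
    return cid
```

Detection then reduces to reading the variant embedded in each leaked segment and decoding the digits back into a client identity.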
- Published
- 2017
34. Demo
- Author
-
Peter Bodik, Lenin Ravindranath, Matthai Philipose, and Paramvir Bahl
- Subjects
Live video ,Multimedia ,Point (typography) ,Computer science ,business.industry ,Volume (computing) ,Usability ,computer.software_genre ,World Wide Web ,Transmission (telecommunications) ,Surveillance camera ,business ,computer ,Home security ,Communication channel - Abstract
Live streaming is an increasingly popular way to broadcast videos ranging from formal news channels to kitten cams to home security camera feeds. Live streaming marries the rich detail of video with the timeliness of live transmission and the ease of use of consumer cameras, thus promising to vastly increase the amount of detailed, up-to-the-minute information available about the real world. The volume of potentially interesting footage brings up the question of how end-users can avoid being glued to one (or worse, many) streams of videos waiting for events of interest. In this demo, we present Lookout, a system that allows users to register standing queries, called triggers, over live video streams. Lookout then notifies the user when events of interest to them occur in their streams of interest. For example, a user could point to a cat cam and write a trigger that sends a notification when the cat wakes up and starts moving. Users can also write triggers to look for certain news being covered in a live news channel, a gamer reaching a certain level in a Twitch stream, a stranger showing up in an outdoor surveillance camera, etc.
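The standing-query pattern described above amounts to evaluating registered predicates against each incoming frame and firing a callback on a match. The class and method names below are hypothetical, a minimal sketch of the idea rather than Lookout's actual API:

```python
# Minimal standing-query ("trigger") engine sketch; names are hypothetical,
# not Lookout's real API.
class TriggerEngine:
    def __init__(self):
        self.triggers = []  # list of (predicate over a frame, callback)

    def register(self, predicate, callback):
        """Register a standing query: fire callback whenever predicate holds."""
        self.triggers.append((predicate, callback))

    def on_frame(self, frame):
        # Evaluate every standing query against the incoming frame.
        for predicate, callback in self.triggers:
            if predicate(frame):
                callback(frame)
```

In a real system the predicate would run a vision model over decoded frames; here a frame is just a dict of detected attributes.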
- Published
- 2017
35. Live Room Merger
- Author
-
Ching-Chi Lin, Shih-Kai Lin, Liang-Chi Tseng, Chien-Min Wang, Hsuan-Chi Kuo, Yu-Ju Tsai, Chu-I Chao, and Da-Fang Chang
- Subjects
Live video ,Panorama ,Computer science ,business.industry ,Headset ,02 engineering and technology ,Observer (special relativity) ,3D modeling ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Augmented reality ,Structured model ,business - Abstract
A real-time augmented reality system is built to replace the background of the user's room (the 'observer room' or 'local room') with a 360-degree live video of another room (the 'remote room'). The user can see the merged room captured by an RGBD camera mounted on the VR headset. A 360-degree image of the remote room is converted into a simple box-like room structure model in real time. The model is loaded into Unity and replaces the background of the observer room, with the result then displayed in a VR head-mounted device.
- Published
- 2017
36. Video Streaming Over Publish/Subscribe
- Author
-
Lincoln David Nery e Silva and João Martins de Oliveira Neto
- Subjects
Live video ,business.industry ,Computer science ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Middleware (distributed applications) ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Mobile technology ,Video streaming ,business ,computer ,Publication ,Computer network - Abstract
Mobile technologies have created many challenges for distributed systems over the last decade. Intermittent connections and weak network signals can induce unexpected behaviour in some applications. One kind of application that suffers from these difficulties is video streaming. This paper investigates the use of SDDL, a publish/subscribe middleware based on DDS, for live video streaming. Since SDDL is designed for scalable communication in a dynamic environment, we believe the proposed solution is fit for use in mobile networks.
- Published
- 2016
37. Multimodal feedback for finger-based interaction in mobile augmented reality
- Author
-
Hürst, W.O., Vriens, Kevin, Sub Multimedia, and Multimedia
- Subjects
Live video ,Handheld augmented reality ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,business.industry ,Computer science ,05 social sciences ,020207 software engineering ,02 engineering and technology ,multimodal feedback ,Phone ,AR interaction ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Augmented reality ,Computer vision ,Artificial intelligence ,Graphics ,business ,050107 human factors ,High potential ,Handheld AR ,Gesture ,Haptic technology - Abstract
Mobile or handheld augmented reality uses a smartphone's live video stream and enriches it with superimposed graphics. In such scenarios, tracking one's fingers in front of the camera and interpreting these traces as gestures offers interesting perspectives for interaction. Yet, the lack of haptic feedback provides challenges that need to be overcome. We present a pilot study where three types of feedback (audio, visual, haptic) and combinations thereof are used to support basic finger-based gestures (grab, release). A comparative study with 26 subjects shows an advantage in providing combined, multimodal feedback. In addition, it suggests high potential of haptic feedback via phone vibration, which is surprising given the fact that it is held with the other, non-interacting hand.
- Published
- 2016
38. Motion Sickness Prevention System (MSPS)
- Author
-
Martin Steiner, Michael Miksch, Alexander Meschtscherjakov, and Markus Miksch
- Subjects
030506 rehabilitation ,Live video ,Computer science ,media_common.quotation_subject ,05 social sciences ,Transparent display ,medicine.disease ,03 medical and health sciences ,Motion sickness ,Staring ,Mobile phone ,Reading (process) ,Trajectory ,medicine ,0501 psychology and cognitive sciences ,Movement (clockwork) ,0305 other medical science ,050107 human factors ,Simulation ,media_common ,Cognitive psychology - Abstract
Travel sickness, which is a special kind of motion sickness or kinetosis, is unpleasant for many people while traveling. It is often caused by a mismatch between the movement perceived by the eyes and the movement sensed by the vestibular system. In a car, this condition is often caused by reading or staring at a mobile phone for a longer period of time. Looking out the window at the trajectory of the vehicle helps most of those affected by kinetosis. But what if you want to read a book while traveling? In this paper, we present the Motion Sickness Prevention System (MSPS), an approach that allows reading while driving while easing the symptoms of kinetosis. It simply uses the live video stream of the road ahead as a background for reading. A preliminary study (N=12) shows that the MSPS, while not able to eliminate the symptoms of kinetosis, can still significantly decrease them.
- Published
- 2016
39. Impact of Access Line Capacity on Adaptive Video Streaming Quality - A Passive Perspective
- Author
-
Trevisan, Martino, Drago, Idilio, Mellia, Marco, Trevisan, Martino, Drago, Idilio, and Mellia, Marco
- Subjects
Live Video ,QoE-Metrics ,Access Line Capacity ,QoE-Metric - Abstract
Adaptive streaming over HTTP is largely used to deliver live and on-demand video. It works by adjusting video quality according to network conditions. While QoE for different streaming services has been studied, it is still unclear how access line capacity impacts the QoE of broadband users in video sessions. We make a first step to answer this question by characterizing parameters influencing QoE, such as the frequency of video adaptations. We take a passive point of view, and analyze a dataset summarizing video sessions of a large population for one year. We first split customers based on their estimated access line capacity. Then, we quantify how the latter affects QoE metrics by parsing HTTP requests of Microsoft Smooth Streaming (MSS) services. For selected services, we observe that at least 3 Mbps of downstream capacity is needed to let the player select the best bitrate, while at least 6 Mbps are required to minimize delays in retrieving initial fragments. Surprisingly, customers with faster access lines obtain limited benefits, hinting at restrictions in the design of the services.
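The passive methodology hinges on the fact that MSS fragment URLs carry the selected bitrate in a `QualityLevels(<bps>)` path element, so the player's choices can be read directly from HTTP logs. A minimal extraction sketch (the example URL is illustrative):

```python
import re

# Extract the requested bitrate from a Microsoft Smooth Streaming fragment URL;
# the QualityLevels(<bps>) path element carries the bitrate the player chose.
MSS_QUALITY = re.compile(r"QualityLevels\((\d+)\)")

def requested_bitrate_bps(url):
    """Return the requested bitrate in bits per second, or None for
    non-fragment requests such as the manifest."""
    match = MSS_QUALITY.search(url)
    return int(match.group(1)) if match else None
```

Aggregating these per-session bitrates by estimated access line capacity yields exactly the kind of comparison the abstract describes.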
- Published
- 2016
40. LiveTraj
- Author
-
Richard T. B. Ma, Tom Z. J. Fu, Zhenjie Zhang, Yong Pei, Jianbing Ding, Marianne Winslett, Bingbing Ni, and Yin Yang
- Subjects
Live video ,business.product_category ,Computer science ,business.industry ,Video tracking ,Internet access ,Computer vision ,Cloud computing ,Tracking system ,Artificial intelligence ,business - Abstract
We present LiveTraj, a novel system for tracking trajectories in a live video stream in real time, backed by a cloud platform. Although trajectory tracking is a well-studied topic in computer vision, so far most attention has been devoted to improving the accuracy of trajectory tracking rather than its efficiency. To our knowledge, LiveTraj is the first system to achieve real-time efficiency in trajectory tracking, which can be a key enabler in many important applications such as video surveillance, action recognition, and robotics. LiveTraj is based on a state-of-the-art approach to (offline) trajectory tracking; its main innovation is to adapt this base solution to run on an elastic cloud platform to achieve real-time tracking speed at an affordable cost. The video demo shows the offline base solution and LiveTraj side by side, both running on a video stream containing human actions. Besides demonstrating the real-time efficiency of LiveTraj, our video demo also exhibits important system parameters to the audience, such as latency and cloud resource usage for different components of the system. Further, if the conference venue provides a sufficiently fast Internet connection to our cloud platform, we also plan to demonstrate LiveTraj on-site, during which we will show LiveTraj identifying and tracking trajectories from a live video stream captured by a camera.
- Published
- 2015
41. An AR Interface to Enable Real-time Preview Design Variations in Actual Environment
- Author
-
Shin Takahashi, Jiro Tanaka, and Akira Iwaya
- Subjects
Live video ,Point (typography) ,Computer science ,Human–computer interaction ,Metaphor ,Interface (computing) ,Computer graphics (images) ,media_common.quotation_subject ,Augmented reality ,Mobile device ,Impression ,media_common - Abstract
In this paper, we propose a way to let users preview a modified version of real-world objects on a mobile device's screen, using augmented reality techniques with live video. We applied the methodology to develop a prototype system and an interface that enables users to modify the fonts of a poster design placed in an actual environment and preview it, to reduce the problem referred to as "impression inconsistency." From another point of view, this system uses an "interaction through video" metaphor. Tani et al. devised a technique to remotely operate machines by manipulating a live video image on a computer screen. Boring et al. applied it to distant large displays and mobile devices. Our system provides interaction with static, unintelligent targets such as posters and signs through live video.
- Published
- 2015
42. Practical, Real-time Centralized Control for CDN-based Live Video Delivery
- Author
-
Dongsu Han, Srinivasan Seshan, Hui Zhang, Matthew K. Mukerjee, Junchen Jiang, and David Naylor
- Subjects
Live video ,Average bitrate ,User experience design ,Computer Networks and Communications ,business.industry ,Computer science ,The Internet ,Routing control plane ,business ,Software ,Computer network - Abstract
Live video delivery is expected to reach a peak of 50 Tbps this year. This surging popularity is fundamentally changing the Internet video delivery landscape. CDNs must meet users' demands for fast join times, high bitrates, and low buffering ratios, while minimizing their own cost of delivery and responding to issues in real-time. Wide-area latency, loss, and failures, as well as varied workloads ("mega-events" to long-tail), make meeting these demands challenging. An analysis of video sessions concluded that a centralized controller could improve user experience, but CDN systems have shied away from such designs due to the difficulty of quickly handling failures, a requirement of both operators and users. We introduce VDN, a practical approach to a video delivery network that uses a centralized algorithm for live video optimization. VDN provides CDN operators with real-time, fine-grained control. It does this in spite of challenges resulting from the wide-area (e.g., state inconsistency, partitions, failures) by using a hybrid centralized+distributed control plane, increasing average bitrate by 1.7x and decreasing cost by 2x in different scenarios.
- Published
- 2015
43. AnnoScape
- Author
-
Austin S. Lee, Sheng Kai Tang, Hiroshi Ishii, Hiroshi Chigira, and Kojo Acquah
- Subjects
Thesaurus (information retrieval) ,Live video ,Multimedia ,Computer science ,Human–computer interaction ,Video overlay ,Data space ,Overlay ,computer.software_genre ,computer ,Virtual workspace ,Gesture - Abstract
We introduce AnnoScape, a remote collaboration system that allows users to overlay live video of the physical desktop image on a shared 3D virtual workspace to support individual and collaborative review of 2D and 3D content using hand gestures and real ink. The AnnoScape system enables distributed users to visually navigate the shared 3D virtual workspace individually or jointly by moving tangible handles; simultaneously snap into a shared viewpoint and generate a live video overlay of freehand annotations from the desktop surface onto the system's virtual viewports which can be placed spatially in the 3D data space. Finally, we present results of our preliminary user study and discuss design issues and AnnoScape's potential to facilitate effective communication during remote 3D data reviews.
- Published
- 2014
44. LiveNature
- Author
-
Mudassar Ahmad Mughal and Jinyi Wang
- Subjects
Live video ,Projection screen ,Multimedia ,Weather condition ,Camera angle ,Computer science ,computer.software_genre ,computer ,Visualization - Abstract
LiveNature is an interactive system that intends to connect people with their remote cherished places. This connection is realized by streaming live videos and collecting weather sensor data from the users' cherished places, and presenting the video mixed with weather visualization in their homes in a decorative and aesthetic manner. The user chooses one of the live videos displayed in picture frames to activate it on a projection screen and control its visual effects, influenced by real-time weather sensor data. These effects change constantly in response to external factors, such as camera angle, sunlight direction, and weather conditions. This system can enrich the sense of a cherished place and encourage ludic experiences by enabling the user to improvise the visualization of their cherished places in real time.
- Published
- 2014
45. Sky writer
- Author
-
Robin R. Murphy, Brittany A. Duncan, Zachary Henkel, and Jesus Suarez
- Subjects
Live video ,Web browser ,Multimedia ,Computer science ,Interface (computing) ,Design process ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Cognitive effort ,computer.software_genre ,computer ,Motion (physics) ,Sketch ,Cognitive load - Abstract
Sky Writer is a collaborative communication medium that augments the traditional display of a UAV pilot and allows other stakeholders to communicate their needs and intentions to the pilot. UAV pilots engaging in time-critical missions, such as urban disaster responses, often must allocate most of their cognitive capacity towards flight tasks, making communication and collaboration with other stakeholders difficult or dangerous. Sky Writer addresses the needs of stakeholders while requiring minimal cognitive effort from the UAV pilot. The application presents stakeholders with an interface that provides contextual flight information and a live video stream of the flight. Stakeholders are able to sketch directly on the video stream or use a spotlight indicator that is mirrored across all displays in the system, including the pilot's display. The application can be used in any modern web browser and works with traditional and touch devices. Concept experimentation performed at Disaster City with two pilots indicated that the spotlight feature was particularly useful while the UAV was in motion, and the sketching features were most useful while the UAV was stationary. The system will be tested with professional responders soon to determine its efficacy in a simulated response, and to inform the ongoing design process.
- Published
- 2014
46. Scheduling of access points for multiple live video streams
- Author
-
I-Hong Hou and Rahul Singh
- Subjects
Live video ,Network packet ,Computer science ,Wireless network ,business.industry ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Real-time computing ,Wireless ,Video streaming ,STREAMS ,business ,Scheduling (computing) ,Computer network - Abstract
This paper studies the problem of serving multiple live video streams to several different clients from a single access point over unreliable wireless links, which is expected to be a major consumer of future wireless capacity. This problem involves two characteristics. On the streaming side, different video streams may generate variable-bit-rate traffic with different traffic patterns. On the network side, the wireless transmissions are unreliable, and the link qualities differ from client to client. In order to alleviate the above stochastic aspects of both video streams and link unreliability, each client typically buffers incoming packets before playing the video. The quality of the video playback experienced by each flow depends, among other factors, on both the delay of packets and their throughput. In this paper we address how to schedule packets at the access point to satisfy the joint per-packet delay-throughput performance measure. We test the designed policy on the traces of three movies. In our tests, it outperforms other policies by a large margin.
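One common shape for such joint delay-throughput policies is debt-based scheduling: each client accrues "delivery debt" when its packets miss deadlines, and the access point serves the client whose debt, weighted by link reliability, is largest. The sketch below illustrates that general idea under stated assumptions, not the paper's exact policy:

```python
# Debt-based scheduling sketch (general idea, not the paper's exact policy).
# debts[i]: accumulated deficit of client i against its required throughput;
# link_reliability[i]: probability a transmission to client i succeeds.
def pick_client(debts, link_reliability):
    """Return the index of the client maximizing debt * success probability,
    i.e., the transmission with the largest expected debt reduction."""
    scores = [debt * p for debt, p in zip(debts, link_reliability)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Serving the largest weighted debt first keeps every client's long-run timely throughput near its target despite unreliable links.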
- Published
- 2013
47. OpenCL - OpenGL ES interop
- Author
-
Adrian Bucur
- Subjects
Live video ,Computer science ,business.industry ,Embedded system ,Operating system ,Process (computing) ,Opengl es ,computer.software_genre ,business ,computer ,Mobile device - Abstract
Smartphones and tablets have become high-performance mobile devices that package a great deal of computing power. In today's devices, GPUs are part of the integrated system-on-a-chip hardware (application processor), which offers tight integration and increased communication between all the system's hardware components (CPUs, DSPs, etc.). The purpose of this presentation is to highlight the advantages of using the GPU as a computational device and to explain the process of effectively using and connecting the OpenCL and OpenGL ES APIs to do high-performance visual data processing on the GPU.
- Published
- 2013
48. Clearing the virtual window
- Author
-
Jonna Häkkilä, Ashley Colley, Maaret Posti, Olli Koskenranta, and Leena Ventä-Olkkonen
- Subjects
User studies ,Live video ,Multimedia ,Salient ,Human–computer interaction ,Computer science ,Clearing ,Window (computing) ,Public displays ,computer.software_genre ,computer ,Field (computer science) ,Gesture - Abstract
Public displays offer the possibility of opening a virtual window to another place by showing a live video feed from a remote location. In this paper, we describe our research on connecting two spaces with pervasive displays, where the ability to see through the virtual window was user controlled. The set-up was designed to resemble a frozen window whose surface the user could melt using gesture input. We organized a four-day field study with four alternating designs to evaluate our system, and collected feedback from 14 users through online surveys and focus groups. Our salient findings reveal that the Ice Window was perceived as fun and interesting, and that it has potential to facilitate awareness and informal collaboration, not only between the two locations but also on one side of the display. People were most comfortable with a design that implemented two-sided melting of the ice, which was perceived as best able to indicate communication attempts between the two locations while respecting privacy.
- Published
- 2013
49. HomeProxy
- Author
-
Gina Venolia, Aaron Hoff, Patrick Therien, Robert Xiao, John C. Tang, and Asta Roseway
- Subjects
Live video ,User experience design ,Home environment ,Multimedia ,business.industry ,Computer science ,Video chat ,business ,computer.software_genre ,computer ,Proxy (climate) - Abstract
HomeProxy is a research prototype that uses a physical proxy to support video messaging at home among distributed family members. A physical artifact dedicated to remote family members makes it easier to chat with them over video. HomeProxy combines a form factor designed for the home environment with a "no-touch" user experience and an interface that quickly transitions between recorded and live video communication. We designed and implemented a prototype and our early experiences with it indicate the promise of offering quick video messaging at home and the challenges of a no-touch interface.
- Published
- 2013
50. Foreground segmentation for interactive displays
- Author
-
Michael McGee and Richard Green
- Subjects
Live video ,business.industry ,Computer science ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Fidelity ,Mixture model ,Interactive displays ,Computer Science::Computer Vision and Pattern Recognition ,Sandbox (computer security) ,Segmentation ,Computer vision ,Artificial intelligence ,business ,media_common - Abstract
We describe a method for segmenting the foreground of a live video stream. We use two Gaussian mixture models, combining cues from color and depth images, to produce a high-fidelity foreground estimate. This estimate then powers a sandbox game on an interactive public display. Our method achieves 90% accuracy across a variety of conditions.
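As a rough illustration of fusing color and depth cues, here is a sketch that marks a pixel as foreground when it deviates from either background model; it uses a single Gaussian per pixel rather than the paper's two full mixture models, and all names and parameters are hypothetical:

```python
import numpy as np

def foreground_mask(color, depth, bg_color, bg_depth, k=2.5):
    """Fuse per-pixel background models for color and depth frames.
    bg_color and bg_depth are (mean, std) array pairs; a pixel is
    foreground if it lies more than k standard deviations from the
    background mean in either cue."""
    def deviates(frame, model):
        mean, std = model
        return np.abs(frame.astype(float) - mean) > k * np.maximum(std, 1e-6)
    fg_color = deviates(color, bg_color).any(axis=-1)  # any color channel
    fg_depth = deviates(depth, bg_depth)               # single depth channel
    return fg_color | fg_depth                         # union of both cues
```

In practice the per-pixel means and standard deviations would be learned online (as a Gaussian mixture background model does), and the depth cue helps where foreground and background colors are similar.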
- Published
- 2012