466 results for "Cesar, Pablo"
Search Results
452. Exploring the Effects of Interactivity in Television Drama
- Author
-
Hand, Stacey, Varan, Duane, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Cesar, Pablo, editor, Chorianopoulos, Konstantinos, editor, and Jensen, Jens F., editor
- Published
- 2007
- Full Text
- View/download PDF
453. Acceptable System Response Times for TV and DVR
- Author
-
Darnell, Michael J., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Cesar, Pablo, editor, Chorianopoulos, Konstantinos, editor, and Jensen, Jens F., editor
- Published
- 2007
- Full Text
- View/download PDF
454. Model-Driven Creation of Staged Participatory Multimedia Events on TV
- Author
-
Van den Bergh, Jan, Bruynooghe, Bert, Moons, Jan, Huypens, Steven, Handekyn, Koen, Coninx, Karin, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Cesar, Pablo, editor, Chorianopoulos, Konstantinos, editor, and Jensen, Jens F., editor
- Published
- 2007
- Full Text
- View/download PDF
455. EPG-Board a Social Application for the OmegaBox Media Center
- Author
-
Iatrino, Arianna, Modeo, Sonia, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Cesar, Pablo, editor, Chorianopoulos, Konstantinos, editor, and Jensen, Jens F., editor
- Published
- 2007
- Full Text
- View/download PDF
456. Human-Centered Design of Interactive TV Games with SMS Backchannel
- Author
-
Reßin, Malte, Haffner, Christoph, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Cesar, Pablo, editor, Chorianopoulos, Konstantinos, editor, and Jensen, Jens F., editor
- Published
- 2007
- Full Text
- View/download PDF
457. Awareness and Conversational Context Sharing to Enrich TV Based Communication
- Author
-
Hemmeryckx-Deleersnijder, Bart, Thorne, Jeremy M., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Cesar, Pablo, editor, Chorianopoulos, Konstantinos, editor, and Jensen, Jens F., editor
- Published
- 2007
- Full Text
- View/download PDF
458. On Fine-grained Temporal Emotion Recognition in Video
- Author
-
Zhang, T., Cesar, Pablo, Hanjalic, A., El Ali, Abdallah, and Delft University of Technology
- Subjects
Machine Learning, Physiological Signals, Emotion Recognition, Video Watching
- Abstract
Fine-grained emotion recognition is the process of automatically identifying the emotions of users at a fine granularity level, typically over time intervals of 0.5 s to 4 s, matching the expected duration of emotions. Previous work mainly focused on developing algorithms that recognize only one emotion per video, based on user feedback collected after watching; these methods are known as post-stimuli emotion recognition. Compared to post-stimuli emotion recognition, fine-grained emotion recognition provides segment-by-segment predictions, making it possible to capture the temporal dynamics of users' emotions while they watch videos. Its results can be aligned with the video content, revealing which specific content evokes which emotions. Most previous work on fine-grained emotion recognition requires fine-grained emotion labels to train the recognition algorithm, but the experiments needed to collect such labels are usually costly and time-consuming. This thesis therefore investigates whether we can accurately predict users' emotions at a fine granularity level with only a limited amount of ground-truth emotion labels for training. We begin our technical contribution in Chapter 3 by building baseline methods trained on fine-grained emotion labels, which shows how accurate recognition can be when such labels are available. We propose a correlation-based emotion recognition algorithm (CorrNet) that recognizes the valence and arousal (V-A) of each instance (a fine-grained segment of signals) from physiological signals. CorrNet extracts features both within each instance and between different instances for the same video stimulus (correlation-based features).
We found that, compared to sequential learning, correlation-based instance learning offers higher recognition accuracy, less overfitting, and lower computational complexity. Compared to collecting fine-grained emotion labels, it is easier to collect a single emotion label after the user has watched a stimulus (i.e., a post-stimuli label). In the second technical chapter (Chapter 4), we therefore investigate whether emotions can be recognized at a fine granularity level by training with only post-stimuli emotion labels (i.e., labels users annotated after watching videos), and propose an emotion recognition algorithm based on Deep Multiple Instance Learning (EDMIL). EDMIL recognizes fine-grained valence and arousal (V-A) labels by identifying which instances represent the post-stimuli V-A annotated by users after watching the videos. Instead of fully-supervised training, the instances are weakly supervised by the post-stimuli labels during training. Our experiments show that weakly supervised learning can reduce the overfitting caused by the temporal mismatch between fine-grained annotations and input signals. Although the weakly-supervised algorithm developed in Chapter 4 obtains accurate results with only a few annotations, it can only distinguish the annotated (post-stimuli) emotion from the baseline emotion (e.g., neutral), because only post-stimuli labels are used for training; all non-annotated emotions are categorized as part of the baseline. To overcome this, in Chapter 5 we propose an emotion recognition algorithm based on Deep Siamese Networks (EmoDSN), which recognizes fine-grained V-A labels by maximizing the distance metric between signal segments with different V-A labels.
In the experiments reported in that chapter, EmoDSN achieves promising results using only 5 shots (5 samples per emotion category) of training data. Reflecting on the achievements reported in this thesis, we conclude that the fully-supervised algorithm (Chapter 3) yields the most accurate fine-grained recognition when sufficient annotations are available. The weakly-supervised method (Chapter 4) yields better recognition at the instance level than fully-supervised methods, and performs best when users annotate either their most salient but short emotions or their overall, longer-duration (i.e., persisting) emotions. The few-shot learning method (Chapter 5) covers more emotion categories than the weakly-supervised approach while using fewer training samples than the fully-supervised approach; its limitation, however, is that accurate recognition can only be achieved with a subject-dependent model.
- Published
- 2022
- Full Text
- View/download PDF
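The thesis abstract above defines an "instance" as a fine-grained signal segment, typically 0.5 s to 4 s long. As an illustration only (not code from the thesis), the windowing step that produces such instances can be sketched as follows; the sampling rate, window length, and function name are assumed values for demonstration.

```python
# Illustrative sketch: slicing a physiological recording into fine-grained
# "instances" (non-overlapping fixed-length segments), as described in the
# abstract. A 2 s window at 128 Hz is an assumed configuration.
import numpy as np

def to_instances(signal: np.ndarray, fs: int, win_s: float = 2.0) -> np.ndarray:
    """Split a 1-D signal into non-overlapping windows of win_s seconds."""
    win = int(fs * win_s)
    n = len(signal) // win            # drop the incomplete tail
    return signal[: n * win].reshape(n, win)

# 60 s of a synthetic signal sampled at 128 Hz
fs = 128
sig = np.sin(np.linspace(0, 60 * 2 * np.pi, 60 * fs))
instances = to_instances(sig, fs)     # 30 two-second instances of 256 samples
print(instances.shape)
```

Each row would then be the unit that receives a fine-grained (or weakly supervised) V-A label.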
459. The Implications Of Program Genres For The Design Of Social Television Systems
- Author
-
David Geerts, Dick C. A. Bulterman, and Pablo Cesar
- Subjects
Multimedia, Asynchronous communication, Computer science, Interactive television, Social television
- Abstract
In this paper, we look at how television genres can play a role in the use of social interactive television systems (social iTV). Based on a user study of a system for sending and receiving enriched video fragments to and from a range of devices, we discuss which genres are preferred for talking while watching, for talking about after watching, and for sending to users with different devices. The results show that news, soap, quiz, and sport are the genres during which our participants talk most while watching, and are thus suitable for synchronous social iTV systems. For asynchronous social iTV systems, film, news, documentaries, and music programs are potentially popular genres. The plot structure of certain genres influences whether people are inclined to talk while watching, and to which device they would send a video fragment. We also discuss how this impacts the design and evaluation of social iTV systems. Published in: Proceedings of the 1st International Conference on Designing Interactive User Experiences for TV and Video (uxTV 2008), ACM International Conference Proceeding Series, vol. 291, pp. 71-80, Mountain View, California, 22-24 October 2008.
- Published
- 2008
460. The Impact of Bilingualism on Early Literacy and Meta-Cognitive Style
- Author
-
Jensen, Jens F., Cesar, Pablo, Chorianopoulos, Konstantinos, Lugmayr, Artur, and Golebiowski, Piotr
- Published
- 2007
461. Interactive TV: A Shared Experience: 5th European Conference, EuroITV 2007, Amsterdam, The Netherlands, May 24-25, 2007, Proceedings
- Author
-
Chorianopoulos, Konstantinos, Cesar, Pablo, and Jensen, Jens F.
- Published
- 2007
462. Presence and mediated interaction: A means to an end?
- Author
-
Lizzy Bleumers, Tim Van Lier, An Jacobs, Veronica Donoso, David Geerts, Pablo Cesar, and Dirk De Grooff
- Subjects
Mediated interaction, Virtual worlds, Case studies, Presence
- Abstract
Promoting a sense of presence is often identified as a prerequisite for mediated interaction. To do so, however, we need a thorough understanding of what presence encompasses and how it can be influenced. The goal of this paper is to elaborate on the different aspects of the sense of presence as identified in the literature, while illustrating whether and how these aspects are promoted in three virtual world cases. We hope to evoke reflection on the link between promoting presence and supporting mediated interaction.
463. Designing and Evaluating a VR Lobby for a Socially Enriching Remote Opera Watching Experience.
- Author
-
Lee S, Viola I, Rossi S, Guo Z, Reimat I, Lawicka K, Striner A, and Cesar P
- Abstract
The latest social VR technologies have enabled users to attend traditional media and arts performances together while being geographically removed, making such experiences accessible despite budget, distance, and other restrictions. In this work, we aim to improve the way remote performances are shared by designing and evaluating a VR theatre lobby which serves as a space for users to gather, interact, and relive the common experience of watching a virtual opera. We conducted an initial test with experts (N = 10; designers and opera enthusiasts) in pairs using our VR lobby prototype, developed from the theoretical lobby design concept. A unique aspect of our experience is its highly realistic representation of users in the virtual space. The test results guided refinements to the VR lobby structure and implementation, aiming to improve the user experience and align it more closely with the social VR lobby's intended purpose. With the enhanced prototype, we ran a between-subjects controlled study (N = 40) to compare the user experience in the social VR lobby between individual and paired participants. To do so, we designed and validated a questionnaire to measure the user experience in the VR lobby. Results of our mixed-methods analysis, including interviews, questionnaire results, and user behavior, reveal the strengths of our social VR lobby for connecting with other users, engaging with the opera more deeply, and exploring new possibilities beyond what is common in real life. All supplemental materials are available at https://github.com/cwi-dis/IEEEVR2024-VRLobby.
- Published
- 2024
- Full Text
- View/download PDF
464. A real-world dataset of group emotion experiences based on physiological data.
- Author
-
Bota P, Brito J, Fred A, Cesar P, and Silva H
- Subjects
- Humans, Recognition, Psychology, Retrospective Studies, Emotions, Photoplethysmography
- Abstract
Affective computing has experienced substantial advancements in recognizing emotions through image and facial expression analysis. However, the incorporation of physiological data remains constrained. Emotion recognition with physiological data shows promising results in controlled experiments but lacks generalization to real-world settings. To address this, we present G-REx, a dataset for real-world affective computing. We collected physiological data (photoplethysmography and electrodermal activity) using a wrist-worn device during long-duration movie sessions. Emotion annotations were retrospectively performed on segments with elevated physiological responses. The dataset includes over 31 movie sessions, totaling more than 380 hours of data from over 190 subjects. The data were collected in a group setting, which can give further context to emotion recognition systems. Our setup aims to be easily replicable in any real-life scenario, facilitating the collection of large datasets for novel affective computing systems.
- Published
- 2024
- Full Text
- View/download PDF
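The G-REx abstract notes that annotations were performed retrospectively on segments with elevated physiological responses. A minimal sketch of how such segments might be flagged in an electrodermal activity (EDA) trace is shown below; the windowing, z-score threshold, and function name are illustrative assumptions, not the dataset's actual annotation procedure.

```python
# Hedged sketch: flag windows of an EDA trace whose mean level lies well
# above the session average, in the spirit of "segments with elevated
# physiological responses". Threshold z = 1.5 is an assumed value.
import numpy as np

def elevated_segments(eda: np.ndarray, fs: int, win_s: float, z: float = 1.5):
    """Return indices of fixed-length windows with elevated mean EDA."""
    win = int(fs * win_s)
    n = len(eda) // win
    segs = eda[: n * win].reshape(n, win)   # one row per window
    means = segs.mean(axis=1)
    thresh = means.mean() + z * means.std() # session-relative threshold
    return np.where(means > thresh)[0]
```

Only the flagged windows would then be presented to participants for retrospective annotation.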
465. Is that My Heartbeat? Measuring and Understanding Modality-Dependent Cardiac Interoception in Virtual Reality.
- Author
-
El Ali A, Ney R, van Berlo ZMC, and Cesar P
- Subjects
- Humans, Heart Rate physiology, Awareness, Computer Graphics, Interoception physiology, Illusions physiology
- Abstract
Measuring interoception ('perceiving internal bodily states') has diagnostic and wellbeing implications. Since heartbeats are distinct and frequent, various methods aim at measuring cardiac interoceptive accuracy (CIAcc). However, the role of exteroceptive modalities in representing heart rate (HR) across screen-based and Virtual Reality (VR) environments remains unclear. Using a Polar H10 HR monitor, we develop a modality-dependent cardiac recognition task that modifies the displayed HR. In a mixed-factorial design (N = 50), we investigate how task environment (Screen, VR), modality (Audio, Visual, Audio-Visual), and real-time HR modifications (±15%, ±30%, None) influence CIAcc, interoceptive awareness, mind-body measures, VR presence, and post-experience responses. Findings showed that participants confused modified heart rates with their own for underestimates of up to 30%; environment did not affect CIAcc but influenced mind-related measures; modality did not influence CIAcc, although including audio increased interoceptive awareness; and VR presence inversely correlated with CIAcc. We contribute a lightweight and extensible cardiac interoception measurement method, and discuss implications for biofeedback displays.
- Published
- 2023
- Full Text
- View/download PDF
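The study above presents heart rates modified in real time by ±15% or ±30% relative to the true HR. Purely as an illustration of that manipulation (the function name and rounding to whole beats per minute are assumptions, not the paper's implementation):

```python
# Illustrative arithmetic for the HR-modification conditions described in
# the abstract: the HR a display would show for a given true heart rate
# under a relative modification of -30%, -15%, 0%, +15%, or +30%.
def displayed_hr(true_hr: float, modification: float) -> int:
    """Return the displayed heart rate after a relative modification."""
    return round(true_hr * (1.0 + modification))

for mod in (-0.30, -0.15, 0.0, 0.15, 0.30):
    print(f"{mod:+.0%} -> {displayed_hr(70, mod)} bpm")
```

A participant's task is then to judge whether such a displayed (or sonified) rate matches their own heartbeat.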
466. Towards socialVR: evaluating a novel technology for watching videos together.
- Author
-
Montagud M, Li J, Cernigliaro G, El Ali A, Fernández S, and Cesar P
- Abstract
Social VR enables people to interact over distance with others in real-time. It allows remote people, typically represented as avatars, to communicate and perform activities together in a shared virtual environment, extending the capabilities of traditional social platforms like Facebook and Netflix. This paper explores the benefits and drawbacks of a lightweight and low-cost Social VR platform (SocialVR), in which users are captured by several cameras and reconstructed in real-time. In particular, the paper contributes (1) the design and evaluation of an experimental protocol for Social VR experiences; (2) a report on a production workflow for this new type of media experience; and (3) the results of experiments with both end-users (N = 15 pairs) and professionals (N = 22 companies) to evaluate the potential of the SocialVR platform. Results from the questionnaires and semi-structured interviews show that end-users rated the experiences provided by the SocialVR platform positively; it enabled them to sense emotions and communicate effortlessly. End-users perceived the photo-realistic experience of SocialVR as similar to face-to-face scenarios and appreciated this new creative medium. From a commercial perspective, professionals confirmed the potential of this communication medium and encouraged further research toward adoption of the platform in the commercial landscape. Supplementary material is available at 10.1007/s10055-022-00651-5.
- Published
- 2022
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library