18 results for "Rafael Pagés"
Search Results
2. Samuel Beckett in Virtual Reality: Exploring Narrative Using Free Viewpoint Video
- Author
- Jan Ondřej, Rafael Pagés, Konstantinos Amplianitis, Néill O’Dwyer, David S. Monaghan, Nicholas Johnson, Aljosa Smolic, and Enda Bates
- Subjects
Visual Arts and Performing Arts, Art, Virtual reality, Computer Science Applications, Visual arts, Presentation, Interactive narrative, Narrative, Engineering (miscellaneous), Music
- Abstract
This article describes an investigation of interactive narrative in virtual reality (VR) through Samuel Beckett's theatrical text Play. Actors are captured in a green-screen environment using free-viewpoint video (FVV). Built in a game engine, the scene is complete with binaural spatial audio and six degrees of freedom of movement. The project explores how ludic qualities in the original text elicit the conversational and interactive specificities of the digital medium. The work affirms the potential for interactive narrative in VR, opens new experiences of the text, and highlights the reorganization of the author-audience dynamic.
- Published
- 2021
3. A Self-regulating Spatio-Temporal Filter for Volumetric Video Point Clouds
- Author
- Matthew Moynihan, Aljosa Smolic, and Rafael Pagés
- Subjects
Upsampling, Noise, Hausdorff distance, Computer science, Noise reduction, Point cloud, Coherence (signal processing), Computer vision, Filter (signal processing), Artificial intelligence, Projection
- Abstract
This work presents a self-regulating filter capable of accurately upsampling dynamic point cloud sequences captured with wide-baseline multi-view camera setups. This is achieved through two-way temporal projection of edge-aware upsampled point clouds, while imposing coherence and noise removal via a windowed, self-regulating noise filter. We use a state-of-the-art spatio-temporal edge-aware scene flow estimation method to accurately model the motion of points across a sequence and then, leveraging the spatio-temporal inconsistency of unstructured noise, apply a weighted Hausdorff-distance-based noise filter over a given window. Our results demonstrate that this approach produces temporally coherent, upsampled point clouds while mitigating both additive and unstructured noise. In addition to filtering noise, the algorithm greatly reduces intermittent loss of pertinent geometry. The system performs well in dynamic real-world scenarios with both stationary and non-stationary cameras, as well as in synthetically rendered environments used for a baseline study.
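The temporal consistency test at the heart of such a filter can be illustrated with a small sketch. This is our own simplification, not the authors' implementation: the function names, the brute-force nearest-neighbour search, and the fixed threshold are illustrative assumptions.

```python
import numpy as np

def nn_distances(src, ref):
    # Distance from each point in src to its nearest neighbour in ref
    # (brute force; a k-d tree would be used at realistic scales).
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1)

def temporal_noise_filter(prev_pts, curr_pts, next_pts, threshold):
    # Keep a point only if it lies near the surface observed in BOTH
    # neighbouring frames: unstructured noise is temporally
    # inconsistent, so it fails this two-way test.
    d_prev = nn_distances(curr_pts, prev_pts)
    d_next = nn_distances(curr_pts, next_pts)
    keep = np.maximum(d_prev, d_next) < threshold
    return curr_pts[keep]

# A frame with two surface points and one isolated noise point.
frame = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
neighbours = np.array([[0.05, 0.0, 0.0], [1.02, 0.0, 0.0]])
filtered = temporal_noise_filter(neighbours, frame, neighbours, 0.5)
# The isolated point at (5, 5, 5) is rejected; the surface points remain.
```

The paper's filter additionally weights these distances and self-regulates the window, which this sketch omits.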
- Published
- 2020
4. Augmenting Hand-Drawn Art with Global Illumination Effects through Surface Inflation
- Author
- Sebastian Lutz, Aljosa Smolic, Matis Hudon, and Rafael Pagés
- Subjects
Artificial neural network, Creatures, Global illumination, Computer science, Line drawings, Animation, Single view, Normal mapping, Look and feel, Computer vision, Artificial intelligence
- Abstract
We present a method for augmenting hand-drawn characters and creatures with global illumination effects. Given only a single-view drawing, we use a novel CNN to predict a high-quality normal map at the same resolution. The predicted normals are then used as a guide to inflate a surface into a 3D proxy mesh that is visually consistent with the input and suitable for augmenting the 2D art with convincing global illumination effects while keeping the hand-drawn look and feel. Along with this paper, we share a new high-resolution dataset of line drawings with corresponding ground-truth normal and depth maps. We validate our CNN by comparing our neural predictions qualitatively and quantitatively with the recent state of the art, show results for various hand-drawn images and animations, and compare with alternative modeling approaches.
- Published
- 2019
5. Affordable content creation for free-viewpoint video and VR/AR applications
- Author
- Rafael Pagés, Konstantinos Amplianitis, Aljosa Smolic, Jan Ondřej, and David S. Monaghan
- Subjects
Computer science, Content creation, Virtual reality, Pipeline (software), View synthesis, Visualization, Computer graphics, Signal Processing, Media Technology, Augmented reality, Computer Vision and Pattern Recognition, Electrical and Electronic Engineering, Pose, Mobile device
- Abstract
We present a scalable pipeline for Free-Viewpoint Video (FVV) content creation that also considers visualisation in Augmented Reality (AR) and Virtual Reality (VR). We support a range of scenarios with a limited number of handheld consumer cameras, but also demonstrate how our method can be applied in professional multi-camera setups. Our novel pipeline extends many state-of-the-art techniques (such as structure-from-motion, shape-from-silhouette and multi-view stereo) and incorporates biomechanical constraints through 3D skeletal information as well as efficient camera pose estimation algorithms. We introduce multi-source shape-from-silhouette (MS-SfS) combined with fusion of different geometry data as crucial components for accurate reconstruction in sparse camera settings. Our approach is highly flexible, and our results indicate suitability both for affordable VR/AR content creation and for interactive FVV visualisation, where a user can choose an arbitrary viewpoint or sweep between known views using view synthesis.
- Published
- 2018
6. Spatio-temporal Upsampling for Free Viewpoint Video Point Clouds
- Author
- Rafael Pagés, Aljosa Smolic, and Matthew Moynihan
- Subjects
Upsampling, Computer science, Point cloud, Computer vision, Artificial intelligence
- Published
- 2019
7. 2D shading for cel animation
- Author
- Mairéad Grogan, Matis Hudon, Aljosa Smolic, Jan Ondřej, and Rafael Pagés
- Subjects
Computer science, Interface (computing), 3D reconstruction, Self-shadowing, Usability, Image-based modeling and rendering, Cel, Simplicity, Computer graphics, Sensory cue
- Abstract
We present a semi-automatic method for creating shades and self-shadows in cel animation. Besides producing attractive images, shades and shadows provide important visual cues about the depth, shapes, movement and lighting of the scene. In conventional cel animation, shades and shadows are drawn by hand. Unlike previous approaches, our method does not rely on a complex 3D reconstruction of the scene: its key advantages are simplicity and ease of use. The tool was designed to stay as close as possible to the natural 2D creative environment and therefore provides an intuitive and user-friendly interface. Our system creates shading based on hand-drawn objects or characters, given very limited guidance from the user. The method employs simple yet very efficient algorithms to create shading directly from drawn strokes. We evaluate our system through a subjective user study and provide a qualitative comparison of our method against existing professional tools and the state of the art.
- Published
- 2018
8. Beckett in VR
- Author
- David S. Monaghan, Aljosa Smolic, Rafael Pagés, Konstantinos Amplianitis, Nicholas Johnson, Jan Ondřej, Néill O’Dwyer, and Enda Bates
- Subjects
Reinterpretation, Reflection on practice, Game engine, Movement, Virtual reality, Interactive narrative, Human-computer interaction, Six degrees of freedom, Narrative, Sociology
- Abstract
This poster describes a reinterpretation of Samuel Beckett's theatrical text Play for virtual reality (VR). It is an aesthetic reflection on practice that follows up on a technical project description submitted to ISMAR 2017 [O'Dwyer et al. 2017]. Actors are captured in a green-screen environment using free-viewpoint video (FVV) techniques, and the scene is built in a game engine, complete with binaural spatial audio and six degrees of freedom of movement. The project explores how ludic qualities in the original text help elicit the conversational and interactive specificities of the digital medium. The work affirms the potential for interactive narrative in VR, opens new experiences of the text, and highlights the reorganisation of the author-audience dynamic.
- Published
- 2018
9. Jonathan Swift: Augmented Reality Application for Trinity Library’s Long Room
- Author
- Konstantinos Amplianitis, Rafael Pagés, Aljosa Smolic, Jan Ondřej, and Néill O’Dwyer
- Subjects
Swift, Process (engineering), Computer science, Human-computer interaction, Narrative, Augmented reality, Affordance, Mobile device
- Abstract
This demo paper describes a project that engages cutting-edge free viewpoint video (FVV) techniques for developing content for an augmented reality prototype. The article traces the evolutionary process from concept, through narrative development, to completed AR prototypes for the HoloLens and handheld mobile devices. It concludes with some reflections on the affordances of the various hardware formats and posits future directions for the research.
- Published
- 2018
10. Virtual Play in Free-Viewpoint Video: Reinterpreting Samuel Beckett for Virtual Reality
- Author
- Konstantinos Amplianitis, David S. Monaghan, Néill O’Dwyer, Nicholas Johnson, Aljosa Smolic, Rafael Pagés, Enda Bates, and Jan Ondřej
- Subjects
Multimedia, Computer science, Interface, Virtual reality, Digital media, Digital art, Aesthetics, Augmented reality, Performing arts, Bespoke, Storytelling
- Abstract
Since the early years of the twenty-first century, the performing arts have been party to an increasing number of digital media projects that bring renewed attention to questions about, on the one hand, new working processes involving capture and distribution techniques and, on the other, how particular works, with bespoke hardware and software, can exert an efficacy over how work is created by the artist/producer or received by the audience. The evolution of author/audience criteria demands that digital arts practice modify its aesthetic and storytelling strategies towards types more appropriate to communicating ideas over interactive digital networks, in which AR/VR technologies are rapidly becoming the dominant interface. This project explores these redefined criteria through a reimagining of Samuel Beckett's Play (1963) for digital culture. This paper offers an account of the working processes and the aesthetic and technical considerations that guide artistic decisions, and of how we attempt to situate the overall work within the state of the art.
- Published
- 2017
11. Simulation framework for a 3-D high-resolution imaging radar at 300 GHz with a scattering model based on rendering techniques
- Author
- Rafael Pagés, Federico Garcia-Rial, Luis Ubeda-Medina, Guillermo Ortiz-Jimenez, Jesus Grajal, and Narciso Garcia
- Subjects
Telecommunications, Radiation, Computer science, Rendering (computer graphics), Inverse synthetic aperture radar, Radar engineering, Automatic target recognition, Radar imaging, Ray tracing (graphics), Bidirectional reflectance distribution function, Electrical and Electronic Engineering, Radar, Remote sensing
- Abstract
We present a simulation framework for a 3-D high-resolution imaging radar at 300 GHz with mechanical scanning. This tool allows us to reproduce the imaging capabilities of the radar in different setups and with different targets. The simulations are based on a ray-tracing approximation combined with a bidirectional reflectance distribution function (BRDF) model for the scattering of rough surfaces. Moreover, we present a novel approach to estimate the scattering parameters of the BRDF model for different types of targets by combining the radar data with information obtained from an infrared structured light sensor. This new framework will serve as a baseline for the design of future multistatic radar configurations and for generating synthetic data to train automatic target recognition algorithms.
- Published
- 2017
12. Seamless, Static Multi-Texturing of 3D Meshes
- Author
- Narciso Garcia, Rafael Pagés, Daniel Berjón, and Francisco Morán
- Subjects
Texture atlas, Computer science, 3D reconstruction, Context, Computer Graphics and Computer-Aided Design, Silhouette, Visual hull, Computer graphics, Segmentation, Polygon mesh, Shading, Texture mapping
- Abstract
In the context of 3D reconstruction, we present a static multi-texturing system that yields a seamless texture atlas calculated by combining the colour information from several photos of the same subject covering most of its surface. These pictures can be provided by shooting a single camera several times, when reconstructing a static object, or by a set of synchronized cameras, when dealing with a human or any other moving object. We suppress the colour seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by colour blending techniques. Our system is robust enough to compensate for the almost inevitable inaccuracies of 3D meshes obtained with visual hull-based techniques: errors in silhouette segmentation, inherently bad handling of concavities, etc.
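A common baseline that this line of work improves on is per-texel colour blending across cameras. A minimal sketch of such view-dependent weighting follows; it is a generic heuristic for illustration, not the seam-suppression method described in the abstract, and the function name and cosine weighting are our own assumptions.

```python
import numpy as np

def blend_colours(colours, view_cosines):
    # Blend one texel's colour samples from several cameras, weighting
    # each sample by how frontally its camera sees the surface
    # (cosine between surface normal and view direction).
    w = np.clip(view_cosines, 0.0, None)
    w = w / w.sum()
    return (w[:, None] * colours).sum(axis=0)

# Two cameras observe the same texel: one head-on, one at 60 degrees.
colours = np.array([[200.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
blended = blend_colours(colours, np.array([1.0, 0.5]))
```

Blending like this removes hard seams but introduces exactly the blurring that the paper's approach is designed to minimize.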
- Published
- 2014
13. Automatic system for virtual human reconstruction with 3D mesh multi-texturing and facial enhancement
- Author
- Rafael Pagés, Daniel Berjón, and Francisco Morán
- Subjects
Computer science, Computer graphics, Signal Processing, Polygon mesh, Computer Vision and Pattern Recognition, Electrical and Electronic Engineering, Projection, Facial region, Software, Structured light, Virtual actor
- Abstract
This paper presents a fully automatic, low-cost system for generating animatable and statically multi-textured avatars of real people captured with several standard cameras. Our system features a novel technique for generating view-independent texture atlases computed from the original images, and two proposals for improving the quality of the facial region of the 3D mesh: a purely passive one implying no additional cost, and another based on active techniques such as structured light projection.
- Published
- 2013
14. Textured splat-based point clouds for rendering in handheld devices
- Author
- Francisco Morán, Sergio García, Daniel Berjón, and Rafael Pagés
- Subjects
Texture atlas, Computer science, Point cloud, OpenGL ES, Ellipse, Rendering (computer graphics), Computer graphics, Shader, Mobile device
- Abstract
We propose a novel technique for modeling and rendering a 3D point cloud obtained from a set of photographs of a real 3D scene as a set of textured elliptical splats. We first obtain the base splat model by calculating, for each point of the cloud, an ellipse approximating locally the underlying surface. We then refine the base model by removing redundant splats to minimize overlaps, and merging splats covering flat regions of the point cloud into larger ellipses. We later apply a multi-texturing process to generate a single texture atlas from the set of photographs, by blending information from multiple cameras for every splat. Finally, we render this multi-textured, splat-based 3D model with an efficient implementation of OpenGL ES 2.0 vertex and fragment shaders which guarantees its fluid display on handheld devices.
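The per-point ellipse fit can be sketched with a local principal component analysis; this is our own illustrative reconstruction, and the neighbourhood selection and the exact radius scaling are assumptions rather than the paper's method.

```python
import numpy as np

def fit_splat(neighbourhood):
    # Fit a planar elliptical splat to a local point neighbourhood:
    # the eigenvector of the smallest covariance eigenvalue is the
    # splat normal; the other two eigenvectors span the ellipse.
    centre = neighbourhood.mean(axis=0)
    centred = neighbourhood - centre
    cov = centred.T @ centred / len(neighbourhood)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    normal = eigvecs[:, 0]
    axes = eigvecs[:, 1:]                    # in-plane ellipse axes
    radii = 2.0 * np.sqrt(eigvals[1:])       # ~2 sigma along each axis
    return centre, normal, axes, radii

# Points on the z = 0 plane, elongated along x.
xs = np.linspace(-1.0, 1.0, 21)
pts = np.stack([xs, 0.1 * np.sin(3 * xs), np.zeros_like(xs)], axis=1)
centre, normal, axes, radii = fit_splat(pts)
# normal is (anti)parallel to z; the major radius follows the x spread.
```

The refinement steps in the abstract (removing redundant splats and merging coplanar ones) would then operate on these fitted ellipses.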
- Published
- 2015
15. SPLASH
- Author
- Sergio García, Rafael Pagés, Daniel Berjón, and Francisco Morán
- Subjects
Telecommunications, Splash, Computer science, Point cloud, 3D modeling, Rendering (computer graphics), Robustness (computer science), Computer graphics, Triangle mesh, Structure from motion, Polygon mesh
- Abstract
We propose a hybrid 3D modeling and rendering approach called SPLASH, which combines the modeling flexibility and robustness of SPLAts with the rendering simplicity and maturity of meSHes. Together with this novel SPLASH concept, we also propose a system that turns a 3D point cloud, obtained for example through an SfM (Structure from Motion) approach, into a multi-textured hybrid 3D model whose shape is described by a triangle mesh plus a collection of elliptical splats.
- Published
- 2015
16. 3D facial merging for virtual human reconstruction
- Author
- Rafael Pagés and Francisco Morán
- Subjects
Telecommunications, Computer science, 3D reconstruction, Robotics and industrial informatics, Iterative reconstruction, Facial recognition system, Visual hull, Projector, Computer graphics, Polygon mesh, Structured light, Virtual actor
- Abstract
There is an increasing need for easy and affordable technologies to automatically generate virtual 3D models from their real counterparts. In particular, 3D human reconstruction has driven the creation of many clever techniques, most of them based on the visual hull (VH) concept. Such techniques do not require expensive hardware; however, they tend to yield 3D humanoids with realistic bodies but mediocre faces, since VH cannot handle concavities. On the other hand, structured light projectors make it possible to capture very accurate depth data, and thus to reconstruct realistic faces, but they are too expensive to deploy in numbers. We have developed a technique to merge a VH-based 3D mesh of a reconstructed humanoid with the depth data of its face, captured by a single structured light projector. By combining the advantages of both systems in a simple setting, we are able to reconstruct realistic 3D human models with believable faces.
- Published
- 2012
17. Face Lift Surgery for Reconstructed Virtual Humans
- Author
- Sergio Arnaldo, Francisco Morán, and Rafael Pagés
- Subjects
Human head, Computer science, Solid modeling, Iterative reconstruction, Computer graphics, Polygon mesh, Computer facial animation, Computer animation
- Abstract
We introduce an innovative, semi-automatic method to transform low-resolution facial meshes into high-definition ones, based on tailoring a generic, neutral human head model, designed by an artist, to fit the facial features of a specific person. To determine these facial features, we select a set of "control points" (corners of the eyes, lips, etc.) in at least two photographs of the subject's face. The neutral head mesh is then automatically reshaped according to the relation between the control points in the original subject's mesh through a set of transformation pyramids. The last step consists of merging both meshes and filling the gaps that appear in the process. This algorithm avoids the use of expensive and complicated technologies for obtaining depth maps, which would also need to be meshed later.
- Published
- 2011
18. ITEM: Inter-Texture Error Measurement for 3D Meshes
- Author
- Rafael Pagés, David Fuentes, and Francisco Morán
- Subjects
Texture atlas, Telecommunications, Texture compression, Mean squared error, Computer science, Rendering (computer graphics), Texture filtering, Polygon mesh, Texel, Texture mapping
- Abstract
We introduce a simple and innovative method to compare any two texture maps, regardless of their sizes, aspect ratios, or even masks, as long as they are both meant to be mapped onto the same 3D mesh. Our system is based on a zero-distortion 3D mesh unwrapping technique that compares two new adapted texture atlases with the same mask but different texel colours, in which every texel covers the same area in 3D. Once these adapted atlases are created, we measure their difference with ITEM-RMSE, a slightly modified version of the standard RMSE defined for images. ITEM-RMSE is more meaningful and reliable than RMSE because it only takes into account the texels inside the mask, since they are the only ones that will actually be used during rendering. Our method is not only very useful for comparing the space efficiency of different texture atlas generation algorithms, but also for quantifying texture loss in compression schemes for multi-resolution textured 3D meshes.
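The masked error measure can be illustrated as follows. This is a simplified sketch of the core idea (plain RMSE restricted to texels inside the shared mask); the adapted-atlas construction is the paper's actual contribution and is not reproduced here.

```python
import numpy as np

def masked_rmse(atlas_a, atlas_b, mask):
    # RMSE over texels inside the mask only: texels outside the mask
    # never reach the renderer, so they are excluded from the error.
    a = atlas_a[mask].astype(np.float64)
    b = atlas_b[mask].astype(np.float64)
    return np.sqrt(np.mean((a - b) ** 2))

a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 10, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True            # only 4 texels are actually rendered
err = masked_rmse(a, b, mask)  # -> 10.0
```

A plain RMSE over the full atlases would dilute the error with unused texels, which is exactly the bias the measure avoids.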
- Published
- 2011