2,856 results for "real-time rendering"
Search Results
2. Street Gaussians: Modeling Dynamic Urban Scenes with Gaussian Splatting
- Author
-
Yan, Yunzhi, Lin, Haotong, Zhou, Chenxu, Wang, Weijie, Sun, Haiyang, Zhan, Kun, Lang, Xianpeng, Zhou, Xiaowei, and Peng, Sida
- Published
- 2025
- Full Text
- View/download PDF
3. REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices
- Author
-
Ji, Chaojie, Li, Yufeng, and Liao, Yiyi
- Published
- 2025
- Full Text
- View/download PDF
4. City-on-Web: Real-Time Neural Rendering of Large-Scale Scenes on the Web
- Author
-
Song, Kaiwen, Zeng, Xiaoyi, Ren, Chenqu, and Zhang, Juyong
- Published
- 2025
- Full Text
- View/download PDF
5. On the Error Analysis of 3D Gaussian Splatting and an Optimal Projection Strategy
- Author
-
Huang, Letian, Bai, Jiayang, Guo, Jie, Li, Yuanqi, and Guo, Yanwen
- Published
- 2025
- Full Text
- View/download PDF
6. TCLC-GS: Tightly Coupled LiDAR-Camera Gaussian Splatting for Autonomous Driving: Supplementary Materials
- Author
-
Zhao, Cheng, Sun, Su, Wang, Ruoyu, Guo, Yuliang, Wan, Jun-Jun, Huang, Zhou, Huang, Xinyu, Chen, Yingjie Victor, and Ren, Liu
- Published
- 2025
- Full Text
- View/download PDF
7. Space-View Decoupled 3D Gaussians for Novel-View Synthesis of Mirror Reflections
- Author
-
Wang, Zhenwu, Li, Zhuopeng, Tang, Zhenhua, Hao, Yanbin, and He, Huasen
- Published
- 2025
- Full Text
- View/download PDF
8. Deblurring 3D Gaussian Splatting
- Author
-
Lee, Byeonghyeon, Lee, Howoong, Sun, Xiangyu, Ali, Usman, and Park, Eunbyung
- Published
- 2025
- Full Text
- View/download PDF
9. NGP-RT: Fusing Multi-level Hash Features with Lightweight Attention for Real-Time Novel View Synthesis
- Author
-
Hu, Yubin, Guo, Xiaoyang, Xiao, Yang, Huang, Jingwei, and Liu, Yong-Jin
- Published
- 2025
- Full Text
- View/download PDF
10. Frequency-importance gaussian splatting for real-time lightweight radiance field rendering.
- Author
-
Chen, Lizhe, Hu, Yan, Zhang, Yu, Ge, Yuyao, Zhang, Haoyu, and Cai, Xingquan
- Subjects
ADAPTIVE control systems, RADIANCE, DATA modeling, MEMORY, STORAGE - Abstract
Recently, there have been significant developments in the realm of novel view synthesis relying on radiance fields. By incorporating the splatting technique, a new approach named Gaussian Splatting has achieved superior rendering quality and real-time performance. However, the training process of the approach incurs significant performance overhead, and the model obtained from training is very large. To address these challenges, we improve Gaussian Splatting and propose Frequency-Importance Gaussian Splatting. Our method reduces the performance overhead by extracting the frequency features of the scene. First, we analyze the advantages and limitations of the spatial sampling strategy of the Gaussian Splatting method from the perspective of sampling theory. Second, we design the Enhanced Gaussian to express high-frequency information more effectively while reducing the performance overhead. Third, we construct a frequency-sensitive loss function to enhance the network's ability to perceive the frequency domain and optimize the spatial structure of the scene. Finally, we propose a Dynamically Adaptive Density Control Strategy based on the degree of reconstruction of the scene background, which dynamically adapts the spatial sample point generation strategy according to the training results and prevents the generation of redundant data in the model. We conducted experiments on several commonly used datasets, and the results show that our method has significant advantages over the original method in terms of memory overhead and storage usage while maintaining its image quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Dynamic Voxel‐Based Global Illumination.
- Author
-
Cosin Ayerbe, Alejandro, Poulin, Pierre, and Patow, Gustavo
- Subjects
LIGHT sources, RAY tracing, POLYGONS, LIGHTING, SCALABILITY - Abstract
Global illumination computation in real time has been an objective for Computer Graphics since its inception. Unfortunately, its implementation has until now challenged the most advanced hardware and software solutions. We propose a real-time voxel-based global illumination solution for a single light bounce that handles static and dynamic objects with diffuse materials under a dynamic light source. The combination of ray tracing and voxelization on the GPU offers scalability and performance. Our divide-and-win approach, which ray traces static and dynamic objects separately, reduces the re-computation load when any number of dynamic objects is updated. Our results demonstrate the effectiveness of our approach, allowing the real-time display of global illumination effects, including colour bleeding and indirect shadows, for complex scenes containing millions of polygons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Scene reconstruction techniques for autonomous driving: a review of 3D Gaussian splatting.
- Author
-
Zhu, Huixin, Zhang, Zhili, Zhao, Junyang, Duan, Hui, Ding, Yao, Xiao, Xiongwu, and Yuan, Junsong
- Abstract
As the latest research result of explicit radiance field technology, 3D Gaussian Splatting (3D GS) replaces the implicit representation exemplified by Neural Radiance Fields (NeRF) and has become the hottest research direction in 3D scene reconstruction. Given the innovative work and vigorous development of 3D GS in autonomous driving, this paper comprehensively reviews and summarizes the existing related research to showcase the evolution of 3D GS technology and possible future development directions. First, the overall research background of 3D GS is introduced from two aspects: 3D scene reconstruction methods and 3D GS research progress. Second, the relevant knowledge points of 3D GS and the core formulas that clarify its mathematical mechanism are presented. Third, the primary applications of 3D GS-based scene reconstruction in autonomous driving are presented through novel view synthesis, scene understanding, and simultaneous localization and mapping (SLAM). Finally, the research frontier directions of 3D GS in autonomous driving are described, including structure optimization, 4D scene reconstruction, and cross-domain research. This paper may provide an effective and convenient pathway for researchers to understand, explore, and apply this novel research method, and promote the development and application of 3D GS in autonomous driving. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
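The review above refers to the core formulas of 3D GS without reproducing them. As an illustrative sketch grounded only in the widely published 3D GS formulation (not this paper's own material): each splat is an anisotropic Gaussian kernel, and depth-sorted splats are alpha-composited front to back.

```python
import numpy as np

def gaussian_weight(x, mean, cov):
    # Unnormalised anisotropic Gaussian kernel of one splat evaluated at x
    d = x - mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

def composite_splats(colors, alphas):
    # Front-to-back alpha compositing of depth-sorted splats:
    #   C = sum_i c_i * a_i * prod_{j<i} (1 - a_j)
    c = np.zeros(3)
    transmittance = 1.0
    for col, a in zip(colors, alphas):
        c += transmittance * a * np.asarray(col, dtype=float)
        transmittance *= 1.0 - a
    return c
```

In the full pipeline the 3D covariance is additionally projected to screen space before the kernel is evaluated per pixel; the sketch omits that projection step.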
13. RenderKernel: High-level programming for real-time rendering systems
- Author
-
Jinyuan Yang, Soumyabrata Dev, and Abraham G. Campbell
- Subjects
Heterogeneous programming, High-level programming, Real-time rendering, Rendering systems, Information technology, T58.5-58.64 - Abstract
Real-time rendering applications leverage heterogeneous computing to optimize performance. However, software development across multiple devices presents challenges, including data layout inconsistencies, synchronization issues, resource management complexities, and architectural disparities. Additionally, the creation of such systems requires verbose and unsafe programming models. Recent developments in domain-specific and unified shading languages aim to mitigate these issues. Yet, current programming models primarily address data layout consistency, neglecting other persistent challenges. In this paper, we introduce RenderKernel, a programming model designed to simplify the development of real-time rendering systems. Recognizing the need for a high-level approach, RenderKernel addresses the specific challenges of real-time rendering, enabling development on heterogeneous systems as if they were homogeneous. This model allows for early detection and prevention of errors due to system heterogeneity at compile time. Furthermore, RenderKernel enables the use of common programming patterns from homogeneous environments, freeing developers from the complexities of the underlying heterogeneous systems. Developers can focus on coding unique application features, thereby enhancing productivity and reducing the cognitive load associated with real-time rendering system development.
- Published
- 2024
- Full Text
- View/download PDF
14. Mix‐Max: A Content‐Aware Operator for Real‐Time Texture Transitions.
- Author
-
Fournier, Romain and Sauvage, Basile
- Subjects
DISTRIBUTION (Probability theory), VIDEO processing, ALGORITHMS - Abstract
Mixing textures is a basic and ubiquitous operation in data-driven algorithms for real-time texture generation and rendering. It is usually performed either by linear blending, or by cutting. We propose a new mixing operator which encompasses and extends both, creating more complex transitions that adapt to the texture's contents. Our mixing operator takes as input two or more textures along with two or more priority maps, which encode how the texture patterns should interact. The resulting mixed texture is defined pixel-wise by selecting the maximum of both priorities. We show that it integrates smoothly into two widespread applications: transition between two different textures, and texture synthesis that mixes pieces of the same texture. We provide constant-time and parallel evaluation of the resulting mix over square footprints of MIP-maps, making our operator suitable for real-time rendering. We also develop a micro-priority model, inspired by micro-geometry models in rendering, which represents sub-pixel priorities by a statistical distribution, and which allows for tuning between sharp cuts and smooth blends. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
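The pixel-wise rule described in the Mix-Max abstract (each output pixel comes from the texture whose priority map is largest there) reduces to a few lines. The function name and array layout below are hypothetical, a sketch of the stated rule rather than the authors' code:

```python
import numpy as np

def mix_max(textures, priorities):
    """Blend n textures by per-pixel priority maximum.

    textures, priorities: lists of (H, W) arrays; the output takes each
    pixel from the texture whose priority map is largest at that pixel.
    """
    tex = np.stack(textures)        # (n, H, W)
    pri = np.stack(priorities)      # (n, H, W)
    idx = np.argmax(pri, axis=0)    # winning texture index per pixel
    return np.take_along_axis(tex, idx[None, ...], axis=0)[0]
```

The paper's micro-priority model additionally softens this hard argmax with a sub-pixel statistical distribution, which is what allows tuning between sharp cuts and smooth blends.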
15. Deep and Fast Approximate Order Independent Transparency.
- Author
-
Tsopouridis, Grigoris, Vasilakis, Andreas A., and Fudos, Ioannis
- Subjects
DEEP learning, MACHINE learning, SOURCE code, TRIANGLES, PIXELS - Abstract
We present a machine learning approach for efficiently computing order independent transparency (OIT) by deploying a lightweight neural network implemented fully on shaders. Our method is fast, requires a small constant amount of memory (dependent only on the screen resolution, not on the number of triangles or transparent layers), is more accurate than previous approximate methods, works for every scene without setup, and is portable to all platforms, running even on commodity GPUs. Our method requires a rendering pass to extract all features that are subsequently used to predict the overall OIT pixel colour with a pre-trained neural network. We provide a comparative experimental evaluation and shader source code of all methods for reproduction of the experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
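For context on the "previous approximate methods" this OIT paper compares against: a classic non-neural baseline is weighted blended OIT (McGuire and Bavoil), which avoids sorting fragments by accumulating a weighted average. The sketch below uses a trivial unit depth weight as a stated simplification; it is an illustration of that baseline, not this paper's method:

```python
def weighted_blended_oit(fragments, background, weight=lambda depth: 1.0):
    """Order-independent approximation: average fragment colours with
    alpha-scaled weights, then blend with the background using total
    transmittance. fragments: list of ((r, g, b), alpha, depth)."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    total_trans = 1.0
    for (r, g, b), a, depth in fragments:
        w = a * weight(depth)
        num[0] += r * w
        num[1] += g * w
        num[2] += b * w
        den += w
        total_trans *= 1.0 - a      # order-independent product
    if den == 0.0:
        return list(background)
    alpha = 1.0 - total_trans
    return [num[i] / den * alpha + background[i] * total_trans for i in range(3)]
```

Because both the weighted sums and the transmittance product are order-independent, fragments can arrive in any order, which is exactly the property exact "over" compositing lacks.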
16. AI-IMAGE REPRESENTATION AND LINEAR REPRENDER RENDERING.
- Author
-
Harsha, B. K., Rao, B. Srinivasa, Tamijeselvan, S., Ganesha, M., and Behera, Nihar Ranjan
- Subjects
CONVOLUTIONAL neural networks, COMPUTER graphics, IMAGE representation, RAY tracing, SIGNAL-to-noise ratio, DEEP learning - Abstract
Image representation and rendering have become critical in numerous applications such as virtual reality, medical imaging, and computer graphics. Traditional rendering techniques often face challenges in efficiently handling complex scenes and achieving photorealistic results while maintaining low computational costs. The problem lies in the high-dimensional nature of image data, leading to slow processing times and reduced scalability. This research presents an AI-enhanced technique called Linear RepRender, which leverages deep learning to transform high-dimensional image representations into simplified linear forms for faster rendering. The proposed method employs a combination of convolutional neural networks (CNNs) and linear regression models to reduce image complexity. Specifically, the CNN extracts low-level and high-level features from the image, while the linear regression step approximates the scene's core visual elements. This hybrid approach significantly improves rendering speed without sacrificing image quality. Furthermore, the method incorporates a loss function optimized for minimizing discrepancies between the rendered and ground truth images. Experimental results demonstrate that Linear RepRender outperforms traditional rendering algorithms, such as ray tracing and rasterization, in terms of computational efficiency and visual accuracy. On a dataset of complex 3D scenes, the proposed method achieved a 35% reduction in rendering time and a 22% improvement in peak signal-to-noise ratio (PSNR) compared to state-of-the-art methods. Additionally, Linear RepRender was able to handle up to 1.5 million polygons per scene with minimal visual artifacts, making it suitable for real-time applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. FPO++: efficient encoding and rendering of dynamic neural radiance fields by analyzing and enhancing Fourier PlenOctrees.
- Author
-
Rabich, Saskia, Stotko, Patrick, and Klein, Reinhard
- Subjects
RADIANCE, ARCHAEOLOGY methodology, ENCODING, TRANSFER functions, CHARACTERISTIC functions - Abstract
Fourier PlenOctrees have been shown to be an efficient representation for real-time rendering of dynamic neural radiance fields (NeRF). Despite its many advantages, the method suffers from artifacts introduced by the involved compression when combined with recent state-of-the-art techniques for training the static per-frame NeRF models. In this paper, we perform an in-depth analysis of these artifacts and leverage the resulting insights to propose an improved representation. In particular, we present a novel density encoding that adapts the Fourier-based compression to the characteristics of the transfer function used by the underlying volume rendering procedure and leads to a substantial reduction of artifacts in the dynamic model. We demonstrate the effectiveness of our enhanced Fourier PlenOctrees in quantitative and qualitative evaluations on synthetic and real-world scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. A Tiny Example Based Procedural Model for Real-Time Glinty Appearance Rendering.
- Author
-
Xing, You-Xin, Tan, Hao-Wen, Xu, Yan-Ning, and Wang, Lu
- Subjects
GAUSSIAN distribution, DISTRIBUTION (Probability theory), EVALUATION methodology, MICROSTRUCTURE, REFLECTANCE - Abstract
The glinty details from complex microstructures significantly enhance rendering realism. However, previous methods use high-resolution normal maps to define each micro-geometry, which requires huge memory overhead. This paper observes that many self-similar materials have independent structural characteristics, which we define as tiny example microstructures. We propose a procedural model to represent microstructures implicitly by performing spatial transformations and spatial distribution on tiny examples. Furthermore, we precompute normal distribution functions (NDFs) with 4D Gaussians for tiny examples and store them in multi-scale NDF maps. Combined with a tiny example based NDF evaluation method, complex glinty surfaces can be rendered simply by texture sampling. The experimental results show that our tiny example based microstructure rendering method is GPU-friendly, successfully reproducing high-frequency reflection features of different microstructures in real time with low memory and computational overhead. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. EndoGSLAM: Real-Time Dense Reconstruction and Tracking in Endoscopic Surgeries Using Gaussian Splatting
- Author
-
Wang, Kailing, Yang, Chen, Wang, Yuehao, Li, Sikuang, Wang, Yan, Dou, Qi, Yang, Xiaokang, and Shen, Wei
- Published
- 2024
- Full Text
- View/download PDF
20. Innovating Architectural Preservation and Adaptive Reuse Design Education in Hunan: The Digital Turn from hBIM Modelling to Digital Replicas
- Author
-
Rigamonti, Matteo and Augelli, Francesco
- Published
- 2024
- Full Text
- View/download PDF
21. Dynamic Real-Time Spatio-Temporal Acquisition and Rendering in Adverse Environments
- Author
-
Dutta, Somnath, Ganovelli, Fabio, and Cignoni, Paolo
- Published
- 2024
- Full Text
- View/download PDF
22. FASSET: Frame Supersampling and Extrapolation Using Implicit Neural Representations of Rendering Contents
- Author
-
Qin, Haoyu, Zhang, Haonan, Guo, Jie, Yang, Ming, Bai, Wenyang, and Guo, Yanwen
- Published
- 2024
- Full Text
- View/download PDF
23. Photorealistic Aquatic Plants Rendering with Cellular Structure
- Author
-
Su, Jianping, Xie, Ning, and Lou, Xin
- Published
- 2024
- Full Text
- View/download PDF
24. A ReSTIR GI Method Using the Sample-Space Filtering
- Author
-
Jiang, Jie, Xu, Xiang, and Wang, Beibei
- Published
- 2024
- Full Text
- View/download PDF
25. Real-Time Motion Blur Using Multi-Layer Motion Vectors.
- Author
-
Lee, Donghyun, Kwon, Hyeoksu, and Oh, Kyoungsu
- Subjects
IMAGE processing, MOTION - Abstract
Traditional methods for motion blur, often relying on a single layer, deviate from the correct colors. We propose a multilayer rendering method that closely approximates the motion blur effect. Our approach stores motion vectors for each pixel, divides these vectors into multiple sample points, and performs a backward search from the current pixel. The color at a sample point is sampled if it shares the same motion vector as its origin. This procedure repeats across layers, with only the nearest color values sampled for depth testing. The average of the colors sampled at each point becomes the motion-blurred color. Our experimental results indicate that our method significantly reduces the color deviation commonly found in traditional approaches, achieving structural similarity index measure (SSIM) values of 0.8 and 0.92, substantial improvements over the accumulation method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
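The backward search this abstract describes can be sketched for a single layer (the paper repeats it across multiple layers with depth testing). Function name and array layout are hypothetical; `motion` holds a per-pixel 2D motion vector:

```python
import numpy as np

def motion_blur(image, motion, samples=8):
    """Single-layer sketch: divide each pixel's motion vector into sample
    points, walk backward along it, sample a colour only where the sampled
    pixel carries the same motion vector, and average the samples."""
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            acc = np.zeros(image.shape[2])
            count = 0
            for s in range(samples):
                t = (s + 0.5) / samples
                sx = int(round(x - motion[y, x, 0] * t))
                sy = int(round(y - motion[y, x, 1] * t))
                if (0 <= sx < w and 0 <= sy < h
                        and np.array_equal(motion[sy, sx], motion[y, x])):
                    acc += image[sy, sx]       # sample shares the origin's motion
                    count += 1
            out[y, x] = acc / count if count else image[y, x]
    return out
```

With zero motion everywhere, every sample lands on the source pixel and the image is unchanged; a real renderer would run this per pixel on the GPU rather than in Python loops.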
26. Real-time Neural Appearance Models.
- Author
-
ZELTNER, TIZIAN, ROUSSELLE, FABRICE, WEIDLICH, ANDREA, CLARBERG, PETRIK, NOVÁK, JAN, BITTERLI, BENEDIKT, EVANS, ALEX, DAVIDOVIČ, TOMÁŠ, KALLWEIT, SIMON, and LEFOHN, AARON
- Subjects
RAY tracing, RAY tracing algorithms, SCALABILITY, REFLECTANCE - Abstract
We present a complete system for real-time rendering of scenes with complex appearance previously reserved for offline use. This is achieved with a combination of algorithmic and system level innovations. Our appearance model utilizes learned hierarchical textures that are interpreted using neural decoders, which produce reflectance values and importance-sampled directions. To best utilize the modeling capacity of the decoders, we equip the decoders with two graphics priors. The first prior—transformation of directions into learned shading frames—facilitates accurate reconstruction of mesoscale effects. The second prior—a microfacet sampling distribution—allows the neural decoder to perform importance sampling efficiently. The resulting appearance model supports anisotropic sampling and level-of-detail rendering, and allows baking deeply layered material graphs into a compact unified neural representation. By exposing hardware accelerated tensor operations to ray tracing shaders, we show that it is possible to inline and execute the neural decoders efficiently inside a real-time path tracer. We analyze scalability with increasing number of neural materials and propose to improve performance using code optimized for coherent and divergent execution. Our neural material shaders can be over an order of magnitude faster than non-neural layered materials. This opens up the door for using film-quality visuals in real-time applications such as games and live previews. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Real-Time Wave Simulation of Large-Scale Open Sea Based on Self-Adaptive Filtering and Screen Space Level of Detail.
- Author
-
Duan, Xi, Liu, Jian, and Wang, Xinjie
- Subjects
CAMERA movement, OCEAN engineering, ADAPTIVE filters, FILTERS & filtration, SEA level, DEGREES of freedom, COMPUTER graphics - Abstract
The real-time simulation technology of large-scale open sea surfaces has been of great importance in fields such as computer graphics, ocean engineering, and national security. However, existing technologies typically have performance requirements or platform limitations, and the two are often impossible to balance. Based on the camera-view-based screen space level of detail strategy and virtual camera pose adaptive filtering strategy proposed in this article, we have developed a fast and comprehensive solution for rendering large-scale open sea surfaces. This solution is designed to work without the need for special hardware extensions, making it easy to deploy across various platforms. Additionally, it enhances the degrees of freedom of virtual camera movement. After conducting performance tests under various camera poses, our filtering strategy was found to be effective. Notably, the time cost of simulation using 60 waves at the height of 6 m above sea level was only 0.184 ms. In addition, we conducted comparative experiments with four state-of-the-art algorithms currently in use, and our solution outperformed the others with the best performance and suboptimal visual effects. These results demonstrate the superiority of our approach in terms of both efficiency and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
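The abstract's "60 waves" suggests a sum-of-waves formulation; the sketch below is a generic sum-of-sines height field of the kind commonly used for open-sea surfaces, not the authors' filtered, LOD-adapted method, and all names and tuple layouts are assumptions:

```python
import math

def ocean_height(x, z, t, waves):
    """Generic sum-of-sines ocean height at position (x, z) and time t.

    waves: list of (amplitude, wavelength, speed, (dir_x, dir_z)) tuples,
    with (dir_x, dir_z) a unit direction in the horizontal plane.
    """
    h = 0.0
    for amplitude, wavelength, speed, (dx, dz) in waves:
        k = 2.0 * math.pi / wavelength              # wavenumber
        phase = k * (dx * x + dz * z) - k * speed * t
        h += amplitude * math.sin(phase)
    return h
```

The paper's contribution sits on top of such an evaluation: its screen-space LOD and camera-pose filtering decide how many of the waves to evaluate where, which is what keeps 60 waves at 0.184 ms.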
28. A Visual Comparison Study of Ocean Wave Effect Using Real-Time Engine.
- Author
-
Jiani Zhou
- Subjects
OCEAN waves, PROCESS capability, COMPUTER performance, TECHNOLOGICAL innovations, COMPUTER graphics - Abstract
With the enhancement of Computer Graphics (CG) processing capabilities, the richness of digital content creation has been significantly improved. In particular, the improvement of computer performance has considerably promoted technological innovation in the visual effects field. The practical significance of visual effects (VFX) for digital content has gradually transformed from auxiliary embellishment at the beginning into an essential form of expression. This means that the VFX team needs to spend more time and manpower making changes in response to feedback, which leads to delays in the project schedule. In this paper, we propose a workflow to reduce the rendering time of ocean wave effects at certain camera distances. Taking two non-dynamically driven ocean effects, we compare the image similarity of camera views at different distances from the sea surface in the two software packages and record the rendering times. We found image similarity in the 80%+ range while dramatically reducing rendering time, helping the VFX industry address the long-standing problem of excessive rendering time consumption. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Supporting Unified Shader Specialization by Co-opting C++ Features
- Author
-
Seitz, Kerry A, Foley, Theresa, Porumbescu, Serban D, and Owens, John D
- Subjects
Shaders, Shading Languages, Real-Time Rendering, Heterogeneous Programming, Unified Programming, cs.GR, cs.PL, I.3.6, D.3.2, D.3.4 - Abstract
Modern unified programming models (such as CUDA and SYCL) that combine host (CPU) code and GPU code into the same programming language, same file, and same lexical scope lack adequate support for GPU code specialization, which is a key optimization in real-time graphics. Furthermore, current methods used to implement specialization do not translate to a unified environment. In this paper, we create a unified shader programming environment in C++ that provides first-class support for specialization by co-opting C++'s attribute and virtual function features and reimplementing them with alternate semantics to express the services required. By co-opting existing features, we enable programmers to use familiar C++ programming techniques to write host and GPU code together, while still achieving efficient generated C++ and HLSL code via our source-to-source translator.
- Published
- 2022
30. Polynomial for real-time rendering of neural radiance fields
- Author
-
Zhu, Liping, Zhou, Haibo, Wu, Silin, Cheng, Tianrong, and Sun, Hongjun
- Published
- 2024
- Full Text
- View/download PDF
31. State of the Art in Efficient Translucent Material Rendering with BSSRDF.
- Author
-
Liang, Shiyu, Gao, Yang, Hu, Chonghao, Zhou, Peng, Hao, Aimin, Wang, Lili, and Qin, Hong
- Subjects
REFLECTANCE - Abstract
Sub-surface scattering is always an important feature in translucent material rendering. When light travels through optically thick media, its transport within the medium can be approximated using diffusion theory, and is appropriately described by the bidirectional scattering-surface reflectance distribution function (BSSRDF). BSSRDF methods rely on assumptions about object geometry and light distribution in the medium, which limits their applicability to general participating media problems. However, because path tracing carries a high computational cost, BSSRDF methods are often favoured for their suitability for real-time applications. We review these methods and discuss the most recent breakthroughs in this field. We begin by summarizing various BSSRDF models and then implement most of them in a 2D searchlight problem to demonstrate their differences. We focus on acceleration methods using BSSRDF, which we categorize into two primary groups: pre-computation and texture methods. We then go through some related topics, including applications and advanced areas where BSSRDF is used, as well as problems that are sometimes important yet ignored in sub-surface scattering estimation. At the end of this survey, we point out remaining constraints and challenges, which may motivate future work to facilitate sub-surface scattering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Real‐time Terrain Enhancement with Controlled Procedural Patterns.
- Author
-
Grenier, C., Guérin, É., Galin, É., and Sauvage, B.
- Subjects
ARTIST-model relationships, RELIEF models, LANDFORMS, GEOMETRIC modeling, EROSION - Abstract
Assisting the authoring of virtual terrains is a perennial challenge in the creation of convincing synthetic landscapes. Particularly, there is a need for augmenting artist‐controlled low‐resolution models with consistent relief details. We present a structured noise that procedurally enhances terrains in real time by adding spatially varying erosion patterns. The patterns can be cascaded, i.e. narrow ones are nested into large ones. Our model builds upon the Phasor noise, which we adapt to the specific characteristics of terrains (water flow, slope orientation). Relief details correspond to the underlying terrain characteristics and align with the slope to preserve the coherence of generated landforms. Moreover, our model allows for artist control, providing a palette of control maps, and can be efficiently implemented in graphics hardware, thus allowing for real‐time synthesis and rendering, therefore permitting effective and intuitive authoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Implementing Many-Lights Rendering with IES-Based Lights.
- Author
-
Gadia, Davide, Lombardo, Vincenzo, Maggiorini, Dario, and Natilla, Antonio
- Subjects
POINT processes ,ELECTRONIC data processing ,LIGHTING ,LIGHT sources - Abstract
In recent years, many research projects on real-time rendering have focused on the introduction of global illumination effects in order to improve the realism of a virtual scene. The main goal of these works is to find a compromise between the achievable quality of the resulting rendering and the intrinsic high computational cost of global illumination. An established approach is based on the use of Virtual Point Lights, i.e., "fictitious" light sources that are placed on surfaces in the scene. These lights simulate the contribution of light rays emitted by the light sources and bouncing on different objects. Techniques using Virtual Point Lights are often called Many-Lights techniques. In this paper, we propose an extension of a real-time Many-Lights rendering technique characterized by the integration of photometric data in the process of Virtual Point Lights distribution. We base the definition of light sources and the creation of Virtual Point Lights on the description provided in the IES standard format, created by the Illuminating Engineering Society (IES). [ABSTRACT FROM AUTHOR]
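The core of any Many-Lights technique is a gather loop over Virtual Point Lights with an inverse-square falloff, usually clamped to tame the near-field singularity. A minimal diffuse-only sketch (the structure and names are ours, not the paper's IES-driven implementation):

```python
import math

def gather_vpls(point, normal, vpls, clamp_dist=0.1):
    """Accumulate the diffuse contribution of Virtual Point Lights at a shading point.

    Each VPL is (position, normal, flux). The distance in the falloff term is
    clamped to avoid the singularity of the inverse-square law near the VPL.
    Visibility (shadow) tests are omitted for brevity.
    """
    total = 0.0
    for vp, vn, flux in vpls:
        d = [vp[i] - point[i] for i in range(3)]
        dist = math.sqrt(sum(c * c for c in d))
        if dist == 0.0:
            continue
        w = [c / dist for c in d]                                   # unit direction
        cos_p = max(0.0, sum(w[i] * normal[i] for i in range(3)))   # at receiver
        cos_l = max(0.0, -sum(w[i] * vn[i] for i in range(3)))      # at VPL
        dist = max(dist, clamp_dist)                                # clamped falloff
        total += flux * cos_p * cos_l / (dist * dist)
    return total
```

The paper's contribution plugs into the VPL generation step: instead of distributing VPL flux uniformly, it is shaped by the photometric distribution in an IES profile.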
- Published
- 2024
- Full Text
- View/download PDF
34. Decorrelating ReSTIR Samplers via MCMC Mutations.
- Author
-
Sawhney, Rohan, Lin, Daqi, Kettunen, Markus, Bitterli, Benedikt, Ramamoorthi, Ravi, Wyman, Chris, and Pharr, Matt
- Subjects
GIBBS sampling ,MARKOV chain Monte Carlo ,PIXELS ,LIGHTING - Abstract
Monte Carlo rendering algorithms often utilize correlations between pixels to improve efficiency and enhance image quality. For real-time applications in particular, repeated reservoir resampling offers a powerful framework to reuse samples both spatially in an image and temporally across multiple frames. While such techniques achieve equal-error up to 100X faster for real-time direct lighting [Bitterli et al. 2020] and global illumination [Ouyang et al. 2021; Lin et al. 2021], they are still far from optimal. For instance, spatiotemporal resampling often introduces noticeable correlation artifacts, while reservoirs holding more than one sample suffer from impoverishment in the form of duplicate samples. We demonstrate how interleaving Markov Chain Monte Carlo (MCMC) mutations with reservoir resampling helps alleviate these issues, especially in scenes with glossy materials and difficult-to-sample lighting. Moreover, our approach does not introduce any bias, and in practice, we find considerable improvement in image quality with just a single mutation per reservoir sample in each frame. [ABSTRACT FROM AUTHOR]
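The reservoir resampling the abstract builds on can be sketched in a few lines. This single-sample weighted reservoir is a generic illustration of the update rule (class and field names are ours), not the authors' ReSTIR implementation:

```python
import random

class Reservoir:
    """Single-sample weighted reservoir, as used in resampled importance sampling."""

    def __init__(self):
        self.sample = None
        self.w_sum = 0.0
        self.M = 0          # number of candidates seen

    def update(self, candidate, weight, rng=random):
        """Stream in one candidate; keep it with probability weight / w_sum."""
        self.w_sum += weight
        self.M += 1
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = candidate
```

After streaming candidates, the retained sample is distributed proportionally to its weight; spatiotemporal reuse merges such reservoirs across pixels and frames, which is where the correlations addressed by the MCMC mutations arise.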
- Published
- 2024
- Full Text
- View/download PDF
35. A survey of real-time rendering on Web3D application
- Author
-
Geng Yu, Chang Liu, Ting Fang, Jinyuan Jia, Enming Lin, Yiqiang He, Siyuan Fu, Long Wang, Lei Wei, and Qingyu Huang
- Subjects
Web3D ,Real-time rendering ,Virtual reality ,Cloud rendering ,Mobile Internet ,Computer engineering. Computer hardware ,TK7885-7895 - Abstract
Background: In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D applications, including Web3D online tourism, architecture, education, medical care, and shopping, leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications that use text, sound, images, video, and 2D animation as their main communication media, adopting 3D virtual scenes as the main interaction object and enabling a user experience with a strong sense of immersion. This paper approaches these emerging Web3D applications through "real-time rendering technology", the core technology of Web3D. It discusses the major 3D graphics APIs of Web3D and the well-known Web3D engines at home and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories. Results: This study then analyzes the specific demands that different fields pose on Web3D applications by referring to representative Web3D applications in each field. Conclusions: Our survey shows that Web3D applications based on real-time rendering have penetrated many sectors of society and even the home, a trend that influences every industry.
- Published
- 2023
- Full Text
- View/download PDF
36. Research on the Application of Digital Media Art in Film and Television Animation in Multimedia Perspective
- Author
-
Shen Danni and Dong Yu
- Subjects
digital media art ,film and television animation ,real-time rendering ,motion capture ,65y04 ,Mathematics ,QA1-939 - Abstract
The fusion of digital media art with film and television animation production ushers in a new era of artistic and technological synergy. This form of art, merging visual, sound, and digital innovations, faces unparalleled growth opportunities and significant challenges. This paper investigates the impact of digital media art on animation, emphasizing its influence on creative expression and production processes. Through case-study analysis, we demonstrate the efficiency gains and aesthetic enhancements afforded by digital media, including a 70% reduction in rendering time and a 40% increase in character animation realism. Incorporating Virtual Reality (VR) and Augmented Reality (AR) technologies also opens up novel avenues for immersive viewer experiences. Despite obstacles in technology adaptation and financial investment, the application of digital media art in animation represents a crucial driver of industry evolution.
- Published
- 2024
- Full Text
- View/download PDF
37. Screen space indirect lighting with visibility bitmask.
- Author
-
Therrien, Olivier, Levesque, Yannick, and Gilet, Guillaume
- Subjects
- *
LIGHTING , *ANGLES , *NOISE - Abstract
Horizon-based indirect illumination efficiently estimates a diffuse light bounce in screen space by analytically integrating the horizon-angle difference between samples along a given direction. Like other horizon-based methods, this technique cannot properly simulate light passing behind thin surfaces. We propose the concept of a visibility bitmask that replaces the two horizon angles with a bit field representing the binary state (occluded/unoccluded) of N sectors uniformly distributed around the hemisphere slice. It allows light to pass behind surfaces of constant thickness while keeping the efficiency of horizon-based methods. It can also compute more accurate ambient lighting than bent normals by sampling more than one visibility cone. This technique improves the visual quality of ambient occlusion, indirect diffuse lighting, and ambient light compared to previous screen-space methods while minimizing noise and keeping a low performance overhead. [ABSTRACT FROM AUTHOR]
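The bitmask idea is simple to demonstrate: each occluder marks the angular sectors it covers, and occlusion is the fraction of set bits. A toy version over one hemisphere slice (function names and the sector layout are our illustration, not the paper's shader code):

```python
import math

def sector_bitmask(occluders, n_sectors=32):
    """Mark the sectors of a hemisphere slice [0, pi) covered by occluders.

    occluders: list of (start, end) angle intervals in radians. Each interval
    sets the bits of the sectors it overlaps, so a thin surface only covers
    bits matching its angular thickness (letting light pass behind it).
    """
    mask = 0
    width = math.pi / n_sectors
    for start, end in occluders:
        lo = max(0, int(start / width))
        hi = min(n_sectors - 1, int(end / width))
        for s in range(lo, hi + 1):
            mask |= 1 << s
    return mask

def occlusion(mask, n_sectors=32):
    """Ambient-occlusion estimate: fraction of occluded sectors."""
    return bin(mask).count("1") / n_sectors
```

Two classical horizon angles can only represent one contiguous occluded arc per side; the bit field represents any union of arcs, which is what enables light to leak correctly behind thin geometry.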
- Published
- 2023
- Full Text
- View/download PDF
38. An Importance-Driven Real-Time Indirect Glossy Reflection Algorithm.
- Author
-
陈静, 陈主昕, 郭帆, and 张严辞
- Abstract
Copyright of Journal of Computer-Aided Design & Computer Graphics / Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao is the property of Gai Kan Bian Wei Hui and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
39. Real-Time Motion Blur Using Multi-Layer Motion Vectors
- Author
-
Donghyun Lee, Hyeoksu Kwon, and Kyoungsu Oh
- Subjects
real-time rendering ,motion blur ,image processing ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Traditional methods for motion blur, often relying on a single layer, deviate from the correct colors. We propose a multilayer rendering method that closely approximates the motion blur effect. Our approach stores motion vectors for each pixel, divides these vectors into multiple sample points, and performs a backward search from the current pixel. The color at a sample point is sampled if it shares the same motion vector as its origin. This procedure repeats across layers, with only the nearest color values sampled for depth testing. The average of the colors sampled at each point becomes the motion-blurred color. Our experimental results indicate that our method significantly reduces the color deviation commonly found in traditional approaches, achieving structural similarity index measures (SSIM) of 0.8 and 0.92, which represent substantial improvements over the accumulation method.
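The backward-search idea can be shown on a 1D strip of pixels: step backwards along the motion vector and accept a sample only when the pixel found there would actually have moved over the current pixel. This is a single-layer toy (the multilayer and depth-test parts of the method are omitted, and all names are ours):

```python
def motion_blur_1d(colors, motion, n_samples=8):
    """Toy 1D backward-search motion blur.

    colors: per-pixel color values; motion: per-pixel signed displacement in
    pixels. For each pixel, sample points are distributed along its motion
    vector, and a source pixel contributes only if it shares the same motion
    vector (the same-vector check described in the abstract).
    """
    n = len(colors)
    out = []
    for x in range(n):
        acc, cnt = 0.0, 0
        for s in range(n_samples):
            t = (s + 0.5) / n_samples
            src = int(round(x - motion[x] * t))      # backward step
            if 0 <= src < n and motion[src] == motion[x]:
                acc += colors[src]
                cnt += 1
        out.append(acc / cnt if cnt else colors[x])
    return out
```

With zero motion the filter degenerates to a pass-through, which is the expected behavior for static pixels.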
- Published
- 2024
- Full Text
- View/download PDF
40. Jrender: An efficient differentiable rendering library based on Jittor
- Author
-
Hanggao Xin, Chenzhong Xiang, Wenyang Zhou, and Dun Liang
- Subjects
Differentiable rendering ,Real-time rendering ,Deep learning ,Science ,Technology (General) ,T1-995 - Abstract
Differentiable rendering has been proven as a powerful tool to bridge 2D images and 3D models. With the aid of differentiable rendering, tasks in computer vision and computer graphics could be solved more elegantly and accurately. To address challenges in the implementations of differentiable rendering methods, we present an efficient and modular differentiable rendering library named Jrender based on Jittor. Jrender supports surface rendering for 3D meshes and volume rendering for 3D volumes. Compared with previous differentiable renderers, Jrender exhibits a significant improvement in both performance and rendering quality. Due to the modular design, various rendering effects such as PBR materials shading, ambient occlusions, soft shadows, global illumination, and subsurface scattering could be easily supported in Jrender, which are not available in other differentiable rendering libraries. To validate our library, we integrate Jrender into applications such as 3D object reconstruction and NeRF, which show that our implementations could achieve the same quality with higher performance.
- Published
- 2023
- Full Text
- View/download PDF
41. LUX IN TENEBRIS: A Workflow for Digitizing and Visualizing Pictorial Artworks in Complex Museum Contexts.
- Author
-
Apollonio, Fabrizio Ivan, Fantini, Filippo, and Garagnani, Simone
- Subjects
- *
MUSEUMS , *MUSEUM studies , *ARTISTIC photography , *REAL-time rendering (Computer graphics) - Abstract
In museum contexts, it is common to meet logistical limitations that make it difficult to document artworks. The pictorial heritage introduces further challenges due to the nonmodifiability of the lighting setup, which is based on preservation and communication criteria that emphasize the material and execution characteristics of the artwork. This can result in situations that are unsuitable for conducting surveys based on photographic images, which are already complicated by the optical properties of areas covered in gold leaf. The data processing technique described here was developed for the documentation of the Annunciation (1430-32) by Fra Giovanni Angelico (Museo Basilica S. Maria delle Grazie, San Giovanni Valdarno, Arezzo) and it offers the elimination and mitigation of undesirable phenomena due to the specific conditions of the museum context. A strategy capable of removing shadows and chiaroscuro effect from the textures associated with the digital model of the painting and its frame is presented, ensuring its visual reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. PVLI: potentially visible layered image for real-time ray tracing.
- Author
-
Kravec, Jaroslav, Káčerik, Martin, and Bittner, Jiří
- Subjects
- *
STREAMING video & television , *RAY tracing , *SURFACE interactions , *VIRTUAL reality , *RAY tracing algorithms - Abstract
Novel view synthesis is frequently employed in video streaming, temporal upsampling, or virtual reality. We propose a new representation, potentially visible layered image (PVLI), that uses a combination of a potentially visible set of the scene geometry and layered color images. PVLI encodes the depth implicitly and enables cheap run-time reconstruction. Furthermore, PVLI can also be used to reconstruct pixel and layer connectivities, which is crucial for filtering and post-processing of the rendered images. We use PVLIs to achieve local and server-based real-time ray tracing. In the first case, PVLIs are used as a basis for temporal and spatial upsampling of ray-traced illumination. In the second case, PVLIs are compressed, streamed over the network, and then used by a thin client to perform temporal and spatial upsampling and to hide latency. To shade the view, we use path tracing, accounting for effects such as soft shadows, global illumination, and physically based refraction. Our method supports dynamic lighting, and up to a limited extent, it also handles view-dependent surface interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. 3D Gaussian Splatting for Real-Time Radiance Field Rendering.
- Author
-
Kerbl, Bernhard, Kopanas, Georgios, Leimkuehler, Thomas, and Drettakis, George
- Subjects
RADIANCE ,CAMERA calibration ,VIDEO compression - Abstract
Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (≥ 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows real-time rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets. [ABSTRACT FROM AUTHOR]
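The splatting step of such methods projects each 3D Gaussian's covariance to screen space with the EWA Jacobian, cov2d = J W Σ Wᵀ Jᵀ. A minimal pure-Python sketch of that projection (helper and parameter names are ours; the real pipeline runs this per-Gaussian on the GPU):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def project_covariance(cov3d, view_rot, t, fx, fy):
    """EWA-style projection of a 3D Gaussian covariance to screen space.

    cov2d = J W cov3d W^T J^T, where W is the world-to-camera rotation and J
    is the Jacobian of the perspective projection, linearized at the Gaussian's
    camera-space mean t = (tx, ty, tz); fx, fy are focal lengths in pixels.
    """
    tx, ty, tz = t
    J = [[fx / tz, 0.0, -fx * tx / tz**2],
         [0.0, fy / tz, -fy * ty / tz**2]]
    JW = matmul(J, view_rot)
    return matmul(matmul(JW, cov3d), transpose(JW))
```

An isotropic Gaussian at unit depth on the optical axis projects to an isotropic 2D footprint, which is a convenient sanity check.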
- Published
- 2023
- Full Text
- View/download PDF
44. Effect-based Multi-viewer Caching for Cloud-native Rendering.
- Author
-
Weinrauch, Alexander, Tatzgern, Wolfgang, Stadlbauer, Pascal, Crickx, Alexis, Hladky, Jozef, Coomans, Arno, Winter, Martin, Mueller, Joerg H., and Steinberger, Markus
- Subjects
RENDERING (Computer graphics) ,UBIQUITOUS computing ,VIRTUAL reality ,CLOUD computing ,RAY tracing - Abstract
With cloud computing becoming ubiquitous, it appears as if virtually everything can be offered as-a-service. However, real-time rendering in the cloud forms a notable exception, where cloud adoption stops at running individual game instances in compute centers. In this paper, we explore whether a cloud-native rendering architecture is viable and scales to multi-client rendering scenarios. To this end, we propose world-space and on-surface caches to share rendering computations among viewers placed in the same virtual world. We discuss how caches can be utilized on an effect basis and demonstrate that a large amount of computation can be saved as the number of viewers in a scene increases. Caches can easily be set up for various effects, including ambient occlusion, direct illumination, and diffuse global illumination. Our results underline that the image quality of cached rendering is on par with screen-space rendering, and due to its simplicity and inherent coherence, cached rendering may even have advantages in single-viewer setups. Analyzing the runtime and communication costs, we show that cached rendering is already viable in multi-GPU systems. Building on top of our research, cloud-native rendering may be just around the corner. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes.
- Author
-
Reiser, Christian, Szeliski, Rick, Verbin, Dor, Srinivasan, Pratul, Mildenhall, Ben, Geiger, Andreas, Barron, Jon, and Hedman, Peter
- Subjects
RADIANCE - Abstract
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field. [ABSTRACT FROM AUTHOR]
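The contraction function described in the abstract can be stated compactly: it is the identity inside the unit cube and a piecewise-projective squash outside, keeping every point inside [-2, 2]³ so rays remain piecewise linear. A sketch following the published formula (the function name is ours):

```python
def merf_contract(x):
    """Piecewise-projective contraction mapping R^3 into [-2, 2]^3.

    Inside the L-infinity unit ball the mapping is the identity, so rays stay
    straight there; outside, the dominating coordinate is squashed towards
    +/-2 and the others are scaled down, preserving efficient ray-box tests.
    """
    n = max(abs(c) for c in x)              # L-infinity norm
    if n <= 1.0:
        return list(x)
    out = []
    for c in x:
        if abs(c) == n:                     # the dominating coordinate
            out.append((2.0 - 1.0 / abs(c)) * (1.0 if c > 0 else -1.0))
        else:
            out.append(c / n)
    return out
```

Points arbitrarily far away land just below the ±2 faces of the contracted volume, which is what lets an unbounded scene fit into fixed-size feature grids.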
- Published
- 2023
- Full Text
- View/download PDF
46. ETER: Elastic Tessellation for Real-Time Pixel-Accurate Rendering of Large-Scale NURBS Models.
- Author
-
Xiong, Ruicheng, Lu, Yang, Chen, Cong, Zhu, Jiaming, Zeng, Yajun, and Liu, Ligang
- Subjects
RENDERING (Computer graphics) ,EVALUATION methodology ,SCALABILITY ,TRIANGLES - Abstract
We present ETER, an elastic tessellation framework for rendering large-scale NURBS models with pixel-accurate and crack-free quality at real-time frame rates. We propose a highly parallel adaptive tessellation algorithm to achieve pixel accuracy, measured by the screen space error between the exact surface and its triangulation. To resolve a bottleneck in NURBS rendering, we present a novel evaluation method based on uniform sampling grids and accelerated by GPU Tensor Cores. Compared to evaluation based on hardware tessellation, our method has achieved a significant speedup of 2.9 to 16.2 times depending on the degrees of the patches. We develop an efficient crack-filling algorithm based on conservative rasterization and visibility buffer to fill the tessellation-induced cracks while greatly reducing the jagged effect introduced by conservative rasterization. We integrate all our novel algorithms, implemented in CUDA, into a GPU NURBS rendering pipeline based on Mesh Shaders and hybrid software/hardware rasterization. Our performance data on a commodity GPU show that the rendering pipeline based on ETER is capable of rendering up to 3.7 million patches (0.25 billion tessellated triangles) in real-time (30FPS). With its advantages in performance, scalability, and visual quality in rendering large-scale NURBS models, a real-time tessellation solution based on ETER can be a powerful alternative or even a potential replacement for the existing pre-tessellation solution in CAD systems. [ABSTRACT FROM AUTHOR]
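The driving quantity in such adaptive tessellation is the screen-space error of a patch's triangulation. A hedged sketch of how a tessellation level might be chosen from that error (the falloff heuristic, thresholds, and names here are our simplification, not ETER's actual criterion):

```python
import math

def tessellation_level(patch_size, distance, fov_y, screen_height, max_error_px=0.5):
    """Pick a subdivision level keeping screen-space deviation under max_error_px.

    Assumes the geometric deviation of a patch shrinks roughly by 4x per
    subdivision level (quadratic convergence of a smooth surface to its
    triangulation), and converts world-space size to pixels via the projection.
    """
    # pixels covered per world-space unit at this view distance
    px_per_unit = screen_height / (2.0 * distance * math.tan(fov_y / 2.0))
    error_px = patch_size * px_per_unit
    level = 0
    while error_px > max_error_px and level < 10:
        error_px /= 4.0                     # error drops ~quadratically per split
        level += 1
    return level
```

Nearby patches receive deep subdivision while distant ones collapse to a handful of triangles, which is the elasticity the framework's name refers to.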
- Published
- 2023
- Full Text
- View/download PDF
47. AvatarReX: Real-time Expressive Full-body Avatars.
- Author
-
Zheng, Zerong, Zhao, Xiaochen, Zhang, Hongwen, Liu, Boning, and Liu, Yebin
- Subjects
AVATARS (Virtual reality) ,FACIAL expression - Abstract
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data. The learnt avatar not only provides expressive control of the body, hands and the face together, but also supports real-time animation and rendering. To this end, we propose a compositional avatar representation, where the body, hands and the face are separately modeled in a way that the structural prior from parametric mesh templates is properly utilized without compromising representation flexibility. Furthermore, we disentangle the geometry and appearance for each part. With these technical designs, we propose a dedicated deferred rendering pipeline, which can be executed at a real-time framerate to synthesize high-quality free-view images. The disentanglement of geometry and appearance also allows us to design a two-pass training strategy that combines volume rendering and surface rendering for network training. In this way, patch-level supervision can be applied to force the network to learn sharp appearance details on the basis of geometry estimation. Overall, our method enables automatic construction of expressive full-body avatars with real-time rendering capability, and can generate photo-realistic images with dynamic details for novel body motions and facial expressions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Foveated rendering: A state-of-the-art survey
- Author
-
Lili Wang, Xuehuai Shi, and Yi Liu
- Subjects
foveated rendering ,virtual reality (VR) ,real-time rendering ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Recently, virtual reality (VR) technology has been widely used in medical, military, manufacturing, entertainment, and other fields. These applications must simulate different complex material surfaces, various dynamic objects, and complex physical phenomena, increasing the complexity of VR scenes. Current computing devices cannot efficiently render these complex scenes in real time, and delayed rendering makes the content observed by the user inconsistent with the user’s interaction, causing discomfort. Foveated rendering is a promising technique that can accelerate rendering. It takes advantage of human eyes’ inherent features and renders different regions with different qualities without sacrificing perceived visual quality. Foveated rendering research has a history of 31 years and is mainly focused on solving the following three problems. The first is to apply perceptual models of the human visual system into foveated rendering. The second is to render the image with different qualities according to foveation principles. The third is to integrate foveated rendering into existing rendering paradigms to improve rendering performance. In this survey, we review foveated rendering research from 1990 to 2021. We first revisit the visual perceptual models related to foveated rendering. Subsequently, we propose a new foveated rendering taxonomy and then classify and review the research on this basis. Finally, we discuss potential opportunities and open questions in the foveated rendering field. We anticipate that this survey will provide new researchers with a high-level overview of the state-of-the-art in this field, furnish experts with up-to-date information, and offer ideas alongside a framework to VR display software and hardware designers and engineers.
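A common perceptual model underlying the approaches surveyed above is a linear acuity falloff: the minimum angle of resolution grows roughly linearly with eccentricity outside the fovea. A toy mapping from eccentricity to a coarsened shading rate (the fovea radius, slope, and rate cap are illustrative values we chose, not figures from the survey):

```python
import math

def shading_rate(ecc_deg, fovea_deg=5.0, slope=0.35):
    """Map retinal eccentricity (degrees from gaze) to a shading-rate factor.

    Returns 1 (full rate) inside the fovea; outside, the relative minimum
    angle of resolution grows linearly, and the allowed coarsening is rounded
    down to a power of two (1x1, 2x2, or 4x4 shading blocks).
    """
    if ecc_deg <= fovea_deg:
        return 1                      # full rate at the fovea
    mar = 1.0 + slope * (ecc_deg - fovea_deg)   # relative min. angle of resolution
    return 2 ** min(2, int(math.log2(mar)))
```

Hardware variable-rate shading exposes exactly such per-tile rate factors, which is one of the integration paths into existing rendering pipelines that the survey discusses.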
- Published
- 2023
- Full Text
- View/download PDF
49. Towards Rendering the Style of 20th Century Cartoon Line Art in 3D Real-Time
- Author
-
Xu, Peisen, Benvenuti, Davide, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Magnenat-Thalmann, Nadia, editor, Zhang, Jian, editor, Kim, Jinman, editor, Papagiannakis, George, editor, Sheng, Bin, editor, Thalmann, Daniel, editor, and Gavrilova, Marina, editor
- Published
- 2022
- Full Text
- View/download PDF
50. Digging into Radiance Grid for Real-Time View Synthesis with Detail Preservation
- Author
-
Zhang, Jian, Huang, Jinchi, Cai, Bowen, Fu, Huan, Gong, Mingming, Wang, Chaohui, Wang, Jiaming, Luo, Hongchen, Jia, Rongfei, Zhao, Binqiang, Tang, Xing, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF