28 results for "Pixel shaders"
Search Results
2. Comparison of Volume Rendering Methods Using GPU and Specialized Volumetric Accelerator
- Author
-
Roman Chekhmestruk, Olexandr Romanyuk, Oksana V. Romanyuk, Sergey O. Romanyuk, and Sergey Vyatkin
- Subjects
Pixel shaders ,Computational complexity theory ,Computer science ,Volume visualization ,Hardware acceleration ,Volume rendering ,Analysis method ,Computational science - Abstract
The chapter identifies and analyzes methods that can be used to solve the problem of volume visualization using the capabilities of a standard GPU. The average computational complexity of volume visualization algorithms is estimated. The computational complexity of volume rendering is analyzed. An overview of alternative implementations of the volume visualization problem is given, and the proposed solution is compared with a system based on a highly specialized hardware accelerator for volume visualization.
- Published
- 2021
3. Genetic Programming for Shader Simplification.
- Author
-
Sitthi-amorn, Pitchaya, Modly, Nicholas, Weimer, Westley, and Lawrence, Jason
- Subjects
GENETIC programming ,COMPUTER graphics ,GENETIC algorithms ,COMPUTER algorithms ,COMPUTER programming - Abstract
We present a framework based on Genetic Programming (GP) for automatically simplifying procedural shaders. Our approach computes a series of increasingly simplified shaders that expose the inherent trade-off between speed and accuracy. Compared to existing automatic methods for pixel shader simplification [Olano et al. 2003; Pellacini 2005], our approach considers a wider space of code transformations and produces faster and more faithful results. We further demonstrate how our cost function can be rapidly evaluated using graphics hardware, which allows tens of thousands of shader variants to be considered during the optimization process. Our approach is also applicable to multi-pass shaders and perceptual-based error metrics.
- Published
- 2011
- Full Text
- View/download PDF
4. A 2.05 GVertices/s 151 mW Lighting Accelerator for 3D Graphics Vertex and Pixel Shading in 32 nm CMOS
- Author
-
Sanu Mathew, Shekhar Borkar, Steven K. Hsu, Farhana Sheikh, Himanshu Kaul, Ram Krishnamurthy, Amit Agarwal, and Mark A. Anders
- Subjects
Vertex (computer graphics) ,Pixel shaders ,Pixel ,Computer science ,business.industry ,Rendering (computer graphics) ,Computational science ,CMOS ,Hardware acceleration ,Shading ,Specular reflection ,Electrical and Electronic Engineering ,business ,Shader ,3D computer graphics ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Advanced lighting computation is the key ingredient for rendering realistic images in high-throughput 3D graphics pipelines. It is the most performance- and power-critical operation in programmable vertex and pixel shaders due to the large number of complex floating-point (FP) multiplications and exponentiations [1]. Performance and energy-efficiency of geometry rendering can be significantly improved by hardware acceleration of lighting computations, which is leveraged by vertex/pixel shader programs residing in the memory of a programmable 3D graphics engine [2] (Fig. 10.4.1). A single-cycle throughput lighting accelerator targeted for on-die acceleration of 3D graphics vertex and pixel shading in high-performance processors and mobile SoCs is fabricated in 32nm high-k metal-gate CMOS [3] (Fig. 10.4.1). Ambient, diffuse, and specular components of the Phong Illumination (PI) equation [4] are computed in parallel in the log domain with 4-cycle latency and 560mV-to-1.2V operation. A high-accuracy 5-segment piecewise linear (PWL) approximation-based log circuit (FPWL-L) with low Hamming weight coefficients, a 32×32b signed truncated specular multiplier, and a high-precision 4-segment PWL approximation-based anti-log circuit (FPWL-AL) enable accurate fixed-point log-domain computation of PI lighting. Five FP multiplications and one FP exponentiation are transformed to five fixed-point additions and one fixed-point multiplication, respectively, resulting in single-cycle lighting throughput of 2.05GVertices/s (measured at 1.05V, 25°C) in a compact area of 0.064mm² (Fig. 10.4.7) while achieving: (i) 47% reduction in critical path logic stages, (ii) 0.56% mean vertex lighting error compared to a single-precision FP computation, (iii) 354μW active leakage power measured at 1.05V, 25°C, (iv) scalable performance up to 2.22GHz, 232mW measured at 1.2V, and (v) peak energy efficiency of 56GVertices/s/W, measured at 560mV, 25°C.
- Published
- 2013
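The log-domain technique in the entry above rests on the identities log(a·b) = log a + log b and log(x^n) = n·log x, which turn FP multiplications into additions and the Phong exponentiation into a single multiplication. A minimal floating-point sketch of the specular term, assuming a Phong-style (N·H)^shininess formulation; the function name and the plain float math stand in for the accelerator's fixed-point PWL log/antilog circuits:

```python
import math

def phong_specular_log_domain(n_dot_h: float, shininess: float) -> float:
    """Evaluate (N.H)^shininess as 2^(shininess * log2(N.H)):
    in the log domain, the exponentiation is a single multiplication."""
    if n_dot_h <= 0.0:
        return 0.0  # surfaces facing away contribute no specular highlight
    return 2.0 ** (shininess * math.log2(n_dot_h))
```

The hardware version performs the same three steps (log, multiply, antilog) in fixed point, which is where the paper's PWL approximation error figures come from.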
5. Adventures in ASCII Art
- Author
-
Ian Parberry
- Subjects
Pixel shaders ,Visual Arts and Performing Arts ,media_common.quotation_subject ,ASCII art ,Art ,Animation ,New variant ,Adventure ,Computer Science Applications ,Colored ,Computer graphics (images) ,visual_art ,visual_art.visual_art_medium ,Engineering (miscellaneous) ,Music ,media_common - Abstract
This is an account of the author's three-and-a-half decade obsession with ASCII art in which he progresses from a manual typewriter to computers, pixel shaders, real-time animation, and a new variant of color ASCII art that can be reproduced on a manual typewriter using only three or four colored ribbons.
- Published
- 2014
6. coaster
- Author
-
Robert R. Lewis
- Subjects
Vertex (computer graphics) ,Class (computer programming) ,Pixel shaders ,Computer science ,OpenGL ,Bézier curve ,Animation ,Computer graphics ,Computer graphics (images) ,ComputingMilieux_COMPUTERSANDEDUCATION ,Polygon mesh ,2D computer graphics ,Shader ,3D computer graphics ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
coaster is a project that teaches a semester-long introductory computer graphics class by means of ten programming assignments. The assignments are incremental - each one building on the previous ones - and ultimately require implementation of most of the course content in the final one: a first-person rollercoaster simulation. Briefly described, the assignments (and their course contents) are: circles ("warmup", 2D graphics, applying trigonometry), wire track (3D graphics, parametric curves), wire car (meshes), "hedgehog" car (face and vertex normals), shaded car (lighting models and vertex shaders), shaded track (extrusion, model transforms), surfaces (Bézier surfaces, height maps), first person (viewing transforms, animation, splines), dynamics (physics-based modeling), and textures (textures, pixel shaders). There is also an eleventh project of the student's own (approved) design. Students are provided with template code for the first ten programming assignments. The languages used are C++ on the CPU and GLSL on the GPU. Students are presumed to have access to OpenGL/GLSL 3.3/3.30 and the GLUT and GLEW libraries. Both undergraduate and graduate students take the class, and it has been presented twice at Washington State University, both times with about half of the students on a remote campus receiving it as a live telecourse. Student response has been very positive. The goal of this lightning talk is to elicit interest from the computer graphics teaching community in making coaster systematically available to other universities by providing source code and training to instructors.
- Published
- 2015
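Several of the assignments above (wire track, surfaces) revolve around parametric Bézier curves. The standard way to evaluate one is de Casteljau's algorithm: repeatedly lerp adjacent control points until one point remains. A minimal illustrative sketch, not the course's actual template code:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeatedly linearly
    interpolating the control polygon until a single point remains."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1.0 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

With four control points this evaluates a cubic, the degree typically used for track segments; the curve interpolates the first and last control points at t = 0 and t = 1.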
7. Sampling and Modeling of Shape and Reflectance of 3-D Objects
- Author
-
Ryo Furukawa, Yutaka Ohsawa, Hiroshi Kawasaki, and Yasuaki Nakamura
- Subjects
Pixel shaders ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Spherical harmonics ,Reflectivity ,Computer Science Applications ,Rendering (computer graphics) ,Computer graphics (images) ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Reflectance properties ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In this paper, we developed a method for sampling and modeling reflectance of 3-D objects and performing real-time rendering of the modeled objects. Our method can be used to render modeled objects from an arbitrary viewing direction with illumination from point lights at any given place or with any illumination map. To achieve this goal, we developed a platform that can sample appearances of target objects over four degrees of freedom (4-DOF) of light and viewing directions. Light-dependent variations of sampled data were approximated using spherical harmonics functions as bases. To render a modeled object, we generated its appearance by synthesizing the effects of illumination using spherical harmonics functions and by interpolating between sampled viewing directions. The appearance was then texture-mapped to the shape model. Using our method, we could represent reflectance properties of surfaces of 3-D objects parametrized by 4-DOF light and viewing directions. We also successfully rendered the modeled objects using programmable vertex/pixel shaders with real-time performance.
- Published
- 2006
8. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware
- Author
-
Insung Ihm, Chandrajit L. Bajaj, Jinsang Oh, and Jungki Min
- Subjects
Vertex (computer graphics) ,Graphical processing unit ,Pixel shaders ,Computer science ,Graphics hardware ,OpenGL ,Parallel computing ,System of linear equations ,Computer Graphics and Computer-Aided Design ,Article ,Linear algebra ,SIMD ,Graphics ,Shader - Abstract
The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods.
- Published
- 2004
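The core of the technique above is packing the scalar multiply-adds of ȳ = Ax̄ + b̄ into four-wide SIMD instructions. A plain-Python sketch of the idea for a 4×4 matrix, traversing A by columns so each step is one vec4 multiply-add; this mimics the shader instruction pattern rather than reproducing the paper's code generator:

```python
def mad4(acc, col, s):
    """One 4-wide multiply-add: acc + col * s, the basic vec4 shader op."""
    return [acc[i] + col[i] * s for i in range(4)]

def linear_expr_4wide(A, x, b):
    """Evaluate y = A x + b for a 4x4 matrix as four column MADs,
    the way the arithmetic packs into four-wide SIMD instructions."""
    y = list(b)
    for j in range(4):
        y = mad4(y, [A[i][j] for i in range(4)], x[j])
    return y
```

Sixteen multiplies and twenty adds collapse into four vec4 MAD instructions; the paper's contribution is reordering larger, sparse expressions so that this packing stays dense.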
9. A geometry-based soft shadow volume algorithm using graphics hardware
- Author
-
Tomas Akenine-Möller and Ulf Assarsson
- Subjects
Pixel shaders ,business.industry ,Computer science ,Image quality ,Graphics hardware ,Shadow volume ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Geometry ,Frame rate ,Computer Graphics and Computer-Aided Design ,Aliasing ,Computer graphics (images) ,Shadow ,Computer vision ,Artificial intelligence ,Shadow mapping ,business ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Most previous soft shadow algorithms have suffered from aliasing, been too slow, or could only use a limited set of shadow casters and/or receivers. Therefore, we present a strengthened soft shadow volume algorithm that deals with these problems. Our critical improvements include robust penumbra wedge construction, geometry-based visibility computation, and also simplified computation through a four-dimensional texture lookup. This enables us to implement the algorithm using programmable graphics hardware, and it results in images that most often are indistinguishable from images created as the average of 1024 hard shadow images. Furthermore, our algorithm can use both arbitrary shadow casters and receivers. Also, one version of our algorithm completely avoids sampling artifacts, which is rare for soft shadow algorithms. As a bonus, the four-dimensional texture lookup allows for small textured light sources, and even video textures can be used as light sources. Our algorithm has been implemented in pure software, and also using the GeForce FX emulator with pixel shaders. Our software implementation renders soft shadows at 0.5--5 frames per second for the images in this paper. With actual hardware, we expect that our algorithm will render soft shadows in real time. An important performance measure is bandwidth usage. For the same image quality, an algorithm using the accumulated hard shadow images uses almost two orders of magnitude more bandwidth than our algorithm.
- Published
- 2003
10. Object Space EWA Surface Splatting: A Hardware Accelerated Approach to High Quality Point Rendering
- Author
-
Hanspeter Pfister, Liu Ren, and Matthias Zwicker
- Subjects
Pixel shaders ,Computer science ,business.industry ,Graphics hardware ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Rendering algorithms ,Computer Graphics and Computer-Aided Design ,Alpha compositing ,Rendering (computer graphics) ,Surfel ,Texture filtering ,Computer graphics (images) ,Polygon ,Computer vision ,Artificial intelligence ,Graphics ,business ,Texture mapping ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Elliptical weighted average (EWA) surface splatting is a technique for high quality rendering of point-sampled 3D objects. EWA surface splatting renders water-tight surfaces of complex point models with high quality, anisotropic texture filtering. In this paper we introduce a new multi-pass approach to perform EWA surface splatting on modern PC graphics hardware, called object space EWA splatting. We derive an object space formulation of the EWA filter, which is amenable for acceleration by conventional triangle-based graphics hardware. We describe how to implement the object space EWA filter using a two pass rendering algorithm. In the first rendering pass, visibility splatting is performed by shifting opaque surfel polygons backward along the viewing rays, while in the second rendering pass view-dependent EWA prefiltering is performed by deforming texture mapped surfel polygons. We use texture mapping and alpha blending to facilitate the splatting process. We implement our algorithm using programmable vertex and pixel shaders, fully exploiting the capabilities of today’s graphics processing units (GPUs). Our implementation renders up to 3 million points per second on recent PC graphics hardware, an order of magnitude more than a pure software implementation of screen space EWA surface splatting.
- Published
- 2002
11. Genetic programming for shader simplification
- Author
-
Jason Lawrence, Pitchaya Sitthi-Amorn, Nicholas Modly, and Westley Weimer
- Subjects
Pixel shaders ,Computer science ,Unified shader model ,Graphics hardware ,Computer graphics (images) ,Procedural texture ,Process (computing) ,Code (cryptography) ,Genetic programming ,Shader ,HLSL2GLSL ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present a framework based on Genetic Programming (GP) for automatically simplifying procedural shaders. Our approach computes a series of increasingly simplified shaders that expose the inherent trade-off between speed and accuracy. Compared to existing automatic methods for pixel shader simplification [Olano et al. 2003; Pellacini 2005], our approach considers a wider space of code transformations and produces faster and more faithful results. We further demonstrate how our cost function can be rapidly evaluated using graphics hardware, which allows tens of thousands of shader variants to be considered during the optimization process. Our approach is also applicable to multi-pass shaders and perceptual-based error metrics.
- Published
- 2011
12. Granular visibility queries on the GPU
- Author
-
Thomas Engelhardt and Carsten Dachsbacher
- Subjects
Pixel shaders ,Computer science ,business.industry ,OpenGL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Software rendering ,Real-time rendering ,3D rendering ,Rendering (computer graphics) ,Computer graphics (images) ,Computer vision ,Granularity ,Artificial intelligence ,business ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Efficient visibility queries are key in many interactive rendering techniques, such as occlusion culling, level of detail determination, and perceptual rendering. The occlusion query mechanism natively supported by GPUs is carried out for batches of rendered geometry. In this paper, we present two novel ways of determining visibility by intelligently querying summed area tables and computing a variant of item buffers. This enables visibility queries of finer granularity, e.g., for sub-regions of objects and for instances created within a single draw call. Our method determines the visibility of a large number of objects simultaneously which can be used in geometry shaders to cull triangles, or to control the level of detail in geometry and pixel shaders under certain rendering scenarios. We demonstrate the benefits of our method with two different real-time rendering techniques.
- Published
- 2009
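The summed-area-table query at the heart of the method above can be sketched on the CPU: precompute prefix sums of a coverage mask once, after which the coverage count of any rectangular sub-region costs four lookups. The names and the toy mask are illustrative:

```python
def build_sat(mask):
    """Summed-area table: sat[y][x] is the sum of mask over [0,x) x [0,y)."""
    h, w = len(mask), len(mask[0])
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            sat[y + 1][x + 1] = (mask[y][x] + sat[y][x + 1]
                                 + sat[y + 1][x] - sat[y][x])
    return sat

def covered_pixels(sat, x0, y0, x1, y1):
    """O(1) coverage count for the half-open rectangle [x0,x1) x [y0,y1)."""
    return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0]
```

This is what makes per-sub-region visibility cheap: one pass builds the table, and each object's screen rectangle is then answered in constant time instead of re-rasterizing it.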
13. Volumetric peeling
- Author
-
Vivek Vaidya, Ravikanth Malladi, Rakesh Mullick, and Navneeth Subramanian
- Subjects
Pixel shaders ,Relation (database) ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Context (language use) ,Volume rendering ,Visualization ,Feature (computer vision) ,Region of interest ,Segmentation ,Computer vision ,Artificial intelligence ,business ,Distance transform - Abstract
Surgeons often desire a volume visualization where a region of interest (vessels, tumor) is isolated and viewed with complete clarity in relation to neighboring landmarks. A method to fade the landmarks/surrounding structures in relation to the region of interest is desired, as it provides context and an appreciation of the surgical approach. Current methods for generating these visualizations are either cumbersome (clip planes) or require significant preprocessing (segmentation). Transfer functions, which can provide this visualization, are limited to datasets where a relation between voxel intensities and structures exists. We present a powerful, easy-to-use method for exploring volumetric data by controlling visibility according to membership volumes. These membership volumes facilitate a plethora of effects that cannot be achieved using conventional classification methods such as transfer functions. The membership volumes allow assignment of optical properties based on a combination of spatial and intensity criteria. Various membership functions, ranging from analytical to distance-map based, are shown. Through the use of Boolean operations in pixel shaders, we demonstrate several examples of interactive real-time visualization using our method.
- Published
- 2008
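The Boolean combination of spatial and intensity criteria described above can be sketched per voxel: the region of interest stays opaque while everything else is faded rather than clipped. The sphere membership, thresholds, and fade value below are illustrative assumptions, not the paper's actual functions:

```python
def membership_opacity(pos, intensity, center, radius, lo, hi, fade=0.2):
    """Opacity from a Boolean AND of a spatial membership (inside a sphere)
    and an intensity membership; voxels outside the ROI are faded, not cut."""
    dist = sum((p - c) ** 2 for p, c in zip(pos, center)) ** 0.5
    in_roi = dist <= radius and lo <= intensity <= hi
    return 1.0 if in_roi else fade
```

A transfer function alone cannot express this, because two voxels with identical intensity get different opacities depending on where they sit relative to the region of interest.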
14. Synthesis and rendering of bidirectional texture functions on arbitrary surfaces
- Author
-
Xin Tong, Xinguo Liu, Heung-Yeung Shum, Jingdan Zhang, Yaohua Hu, and Baining Guo
- Subjects
Computer science ,Graphics hardware ,Mesh parameterization ,OpenGL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Sensitivity and Specificity ,Rendering (computer graphics) ,User-Computer Interface ,Imaging, Three-Dimensional ,Image texture ,Computer graphics (images) ,Image Interpretation, Computer-Assisted ,Computer Graphics ,Computer vision ,Graphics ,ComputingMethodologies_COMPUTERGRAPHICS ,Pixel shaders ,business.industry ,Texton ,Reproducibility of Results ,Computational geometry ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Visualization ,Signal Processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Bidirectional texture function ,business ,Texture mapping ,Software ,Algorithms ,Texture synthesis - Abstract
The bidirectional texture function (BTF) is a 6D function that describes the appearance of a real-world surface as a function of lighting and viewing directions. The BTF can model the fine-scale shadows, occlusions, and specularities caused by surface mesostructures. In this paper, we present algorithms for efficient synthesis of BTFs on arbitrary surfaces and for hardware-accelerated rendering. For both synthesis and rendering, a main challenge is handling the large amount of data in a BTF sample. To address this challenge, we approximate the BTF sample by a small number of 4D point appearance functions (PAFs) multiplied by 2D geometry maps. The geometry maps and PAFs lead to efficient synthesis and fast rendering of BTFs on arbitrary surfaces. For synthesis, a surface BTF can be generated by applying a texton-based synthesis algorithm to a small set of 2D geometry maps while leaving the companion 4D PAFs untouched. As for rendering, a surface BTF synthesized using geometry maps is well-suited for leveraging the programmable vertex and pixel shaders on the graphics hardware. We present a real-time BTF rendering algorithm that runs at the speed of about 30 frames/second on a mid-level PC with an ATI Radeon 8500 graphics card. We demonstrate the effectiveness of our synthesis and rendering algorithms using both real and synthetic BTF samples.
- Published
- 2008
15. Real-time video watermarking on programmable graphics hardware
- Author
-
Jiying Zhao and Alan Brunton
- Subjects
Pixel shaders ,Pixel ,Computer science ,Fragment (computer graphics) ,business.industry ,Graphics hardware ,Digital video ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Video processing ,Graphics pipeline ,Real-time computer graphics ,Computer graphics (images) ,Geometric primitive ,Graphics ,business ,Digital watermarking ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In this paper, we propose a real-time video watermarking system on programmable graphics hardware. Real-time video watermarking is important to the use of digital video in legal proceedings, security surveillance, news reportage, and commercial video transactions. The watermarking scheme implemented here is based on Wong's scheme for image watermarking and is designed to detect and localize any change in the pixels of any frame of the incoming video stream. We implement this scheme for real-time operation on programmable graphics hardware. The graphics processing units (GPUs) found on many modern commodity-level graphics cards have the ability to execute application-defined sequences of instructions not only on geometric primitives, defined by vertices, but also on image or texture fragments mapped to rasterized geometric primitives. These fragment programs, also known as fragment or pixel shaders, execute in hardware and in parallel on the GPU for each fragment, or pixel, that is rendered, making the GPU well suited for image and video processing. We illustrate real-time performance, low perceptibility, and good bit-error rates and localization by way of a general testing framework that allows straightforward testing of any video watermarking system implemented on programmable graphics hardware.
- Published
- 2006
16. Subsurface Texture Mapping
- Author
-
Kadi Bouatouch, Guillaume François, Sumanta Pattanaik, and Gaspard Breton
- Subjects
Texture compression ,Surface Properties ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,GPU ,Variable thickness ,02 engineering and technology ,ACM: I.: Computing Methodologies/I.3: COMPUTER GRAPHICS ,Rendering (computer graphics) ,Subsurface scattering \\ Rendu temps réel ,Imaging, Three-Dimensional ,Image texture ,Relief mapping ,Texture filtering ,Computer graphics (images) ,Image Interpretation, Computer-Assisted ,Reel ,Computer Graphics ,0202 electrical engineering, electronic engineering, information engineering ,Computer Simulation ,Computer vision ,Graphics ,Shader ,ComputingMilieux_MISCELLANEOUS ,ComputingMethodologies_COMPUTERGRAPHICS ,Physics ,Projective texture mapping ,Pixel shaders ,business.industry ,Subsurface scattering ,020207 software engineering ,04 agricultural and veterinary sciences ,Realtime graphics ,Models, Theoretical ,Computer Graphics and Computer-Aided Design ,[INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] ,Diffusion sous-surfacique ,040102 fisheries ,0401 agriculture, forestry, and fisheries ,020201 artificial intelligence & image processing ,Shading ,Artificial intelligence ,business ,Texture mapping ,Software ,3D computer graphics - Abstract
Subsurface scattering within translucent objects is a complex phenomenon. Designing and rendering this kind of material requires a faithful description of their aspects as well as a realistic simulation of their interaction with light. This paper presents an efficient rendering technique of multilayered translucent objects. We present a new method for modeling and rendering such complex organic materials made up of multiple layers of variable thickness. Based on the relief texture mapping algorithm, our method calculates the single scattering contribution for this kind of material in real-time using commodity graphics hardware. Our approach needs the calculation of distances traversed by a light ray through a translucent object. This calculation is required for evaluating the attenuation of light within the material. We use a surface approximation algorithm to quickly evaluate these distances. Our whole algorithm is implemented using pixel shaders.
- Published
- 2006
17. Extensions for 3D Graphics Rendering Engine Used for Direct Tessellation of Spline Surfaces
- Author
-
Eric Roman, Adrian Sfarti, Brian A. Barsky, Alex Kozlowski, Egon Pasztor, Todd J. Kosloff, and Alex Perelman
- Subjects
Computer graphics ,Pixel shaders ,Tessellation ,Computer science ,Computer graphics (images) ,Software rendering ,Subdivision surface ,Non-uniform rational B-spline ,3D computer graphics ,ComputingMethodologies_COMPUTERGRAPHICS ,Rendering (computer graphics) ,Data compression - Abstract
In current 3D graphics architectures, the bus between the triangle server and the rendering engine GPU is clogged with triangle vertices and their many attributes (normal vectors, colors, texture coordinates). We have developed a new 3D graphics architecture using data compression to unclog the bus between the triangle server and the rendering engine. This new architecture has been described in [1]. In the present paper we describe further developments of the newly proposed architecture. The current paper shows several interesting extensions of our architecture, such as back-surface rejection, real-time NURBS tessellation, and a description of a surface-based API. We also show how the implementation of our architecture operates on top of the pixel shaders.
- Published
- 2006
18. GPU-driven recombination and transformation of YCoCg-R video samples
- Author
-
Van Rijsselbergen, Dieter, De Neve, Wesley, Van de Walle, Rik, and Silva Martinez, J
- Subjects
pixel shaders ,Technology and Engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,GPU ,YCoCg-R ,H.264/AVC ,performance ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Common programmable Graphics Processing Units (GPUs) are capable of more than just rendering real-time effects for games. They can also be used for image processing and the acceleration of video decoding. This paper describes an extended implementation of the H.264/AVC YCoCg-R to RGB color space transformation on the GPU. Both the color space transformation and recombination of the color samples from a nontrivial data layout are performed by the GPU. Using mid- to high-range GPUs, this extended implementation offers a significant gain in processing speed compared to an existing basic GPU version and an optimized CPU implementation. An ATI X1900 GPU was capable of processing more than 73 high-resolution 1080p YCoCg-R frames per second, which is over twice the speed of the CPU-only transformation using a Pentium D 820.
- Published
- 2006
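The YCoCg-R transform accelerated above is defined in H.264/AVC through integer lifting steps (adds, subtracts, and arithmetic shifts), which makes it exactly invertible. A scalar sketch of the forward and inverse pair; the GPU version applies the same arithmetic per pixel:

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward lossless RGB -> YCoCg-R using only adds, subtracts, shifts."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse transform; undoes the forward lifting steps in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Python's `>>` is an arithmetic (flooring) shift even for negative values, matching the behavior the lifting structure relies on, so the round trip is bit-exact.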
19. GPU-assisted Collision Detection
- Author
-
Christer Ericson
- Subjects
Pixel shaders ,Computer science ,Computation ,Parallel computing ,Collision ,Stencil ,Rendering (computer graphics) ,Computer Science::Performance ,Computer Science::Graphics ,Computer Science::Mathematical Software ,Collision detection ,Central processing unit ,Graphics ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Today, commodity graphics processing units (GPUs) have advanced to a state in which more raw processing power is inherent in GPUs than in main CPUs. This development is facilitated by GPUs working in a very restricted domain, allowing rendering tasks to be parallelized across many deeply pipelined, highly specialized computational units. This parallelism gives an overall speed advantage to GPUs, even though GPUs typically work at lower clock speeds than CPUs. Coupled with recent GPU improvements, such as increased programmability of vertex and pixel shaders and the introduction of floating-point textures, the computational power of GPUs has generated a lot of interest in mapping nonrendering-related, general-purpose computations — normally performed on the CPU — onto the GPU. For collision detection, there are two primary ways in which to utilize GPUs: for fast image-space-based intersection techniques or as a co-processor for accelerating mathematics or geometry calculations. Image-space-based intersection techniques rely on rasterizing the objects of a collision query into color, depth, or stencil buffers and from that determining whether the objects are in intersection.
- Published
- 2005
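The image-space interval test described above can be illustrated with a toy software rasterizer (a hypothetical sketch: axis-aligned boxes stand in for real objects, and per-pixel depth intervals replace the GPU's depth/stencil buffers):

```python
def rasterize_box(box, w, h):
    """Per-pixel (z_near, z_far) depth interval for an axis-aligned box,
    or None where the box does not cover the pixel.
    box = (x0, y0, z0, x1, y1, z1) in a [0,1]^3 viewing volume (illustrative setup)."""
    x0, y0, z0, x1, y1, z1 = box
    buf = [[None] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            x, y = (i + 0.5) / w, (j + 0.5) / h
            if x0 <= x <= x1 and y0 <= y <= y1:
                buf[j][i] = (z0, z1)
    return buf

def image_space_collision(a, b, w=32, h=32):
    """Report a collision if, in some pixel, the two depth intervals overlap."""
    ba, bb = rasterize_box(a, w, h), rasterize_box(b, w, h)
    for row_a, row_b in zip(ba, bb):
        for ia, ib in zip(row_a, row_b):
            if ia and ib and ia[0] <= ib[1] and ib[0] <= ia[1]:
                return True
    return False
```

As with the real technique, the answer is only as precise as the image resolution: objects thinner than a pixel can be missed, so such tests are usually treated as conservative broad-phase queries.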
20. Evolution of Vertex and Pixel Shaders
- Author
-
Jürgen Albert, Markus Reinhardt, and Marc Ebner
- Subjects
Vertex (computer graphics) ,Pixel shaders ,Multiple Render Targets ,Render Target ,Computer science ,Graphics hardware ,Software rendering ,Volume rendering ,Rendering (computer graphics) ,Computer graphics (images) ,Shadow ,Polygon ,Shading ,Shading language ,Graphics ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In real-time rendering, objects are represented using polygons or triangles. Triangles are easy to render, and graphics hardware is highly optimized for rendering them. Initially, shading computations were carried out by dedicated hardwired algorithms for each vertex and then interpolated by the rasterizer. Today's graphics hardware contains vertex and pixel shaders which can be reprogrammed by the user. Vertex and pixel shaders allow almost arbitrary computations per vertex and per pixel, respectively. We have developed a system to evolve such programs. The system runs on a variety of graphics hardware thanks to its use of NVIDIA's high-level Cg shader language. Fitness of the shaders is determined by user interaction. Both fixed-length and variable-length genomes are supported. The system is highly customizable. Each individual consists of a series of meta commands, and the resulting Cg program is translated into the low-level commands required for the particular graphics hardware.
- Published
- 2005
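The evolutionary loop can be conveyed with a toy genome of arithmetic "meta commands" applied per pixel. This sketch substitutes an automatic image-difference fitness for the paper's interactive user selection, and plain Python for Cg; all names are illustrative:

```python
import random

# A genome is a fixed-length list of (op, constant) "meta commands" that
# post-process a pixel intensity -- a stand-in for Cg code generation.
OPS = {
    "add": lambda v, c: v + c,
    "mul": lambda v, c: v * c,
    "sat": lambda v, c: min(max(v, 0.0), 1.0),  # clamp; ignores its constant
}

def evaluate(genome, pixel):
    v = pixel
    for op, c in genome:
        v = OPS[op](v, c)
    return v

def fitness(genome, pixels, target):
    # Automatic stand-in fitness: mean absolute error against a target image.
    return sum(abs(evaluate(genome, p) - t) for p, t in zip(pixels, target)) / len(pixels)

def mutate(genome, rng):
    g = list(genome)
    i = rng.randrange(len(g))
    g[i] = (rng.choice(list(OPS)), rng.uniform(-1.0, 1.0))
    return g

def evolve(pixels, target, length=4, pop=20, gens=30, seed=1):
    """Elitist (mu + mu) loop: keep the best half, mutate it to refill."""
    rng = random.Random(seed)
    popn = [[(rng.choice(list(OPS)), rng.uniform(-1.0, 1.0)) for _ in range(length)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda g: fitness(g, pixels, target))
        elite = popn[:pop // 2]
        popn = elite + [mutate(g, rng) for g in elite]
    popn.sort(key=lambda g: fitness(g, pixels, target))
    return popn[0]
```

In the paper's system the "fitness" line is replaced by the user picking preferred renderings, and the genome is compiled to Cg rather than interpreted.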
21. Real-time High-Quality View-Dependent Texture Mapping using Per-Pixel Visibility
- Author
-
Djamchid Ghazanfarpour, Damien Porquet, Jean-Michel Dischler, Vieceli, Yolande, DMI, XLIM (XLIM), and Université de Limoges (UNILIM)-Centre National de la Recherche Scientifique (CNRS)-Université de Limoges (UNILIM)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
Computer science ,Graphics hardware ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,3D rendering ,Rendering (computer graphics) ,Computer graphics (images) ,Computer vision ,Polygon mesh ,Tiled rendering ,ComputingMilieux_MISCELLANEOUS ,ComputingMethodologies_COMPUTERGRAPHICS ,Texture atlas ,Projective texture mapping ,Pixel shaders ,Parallel rendering ,business.industry ,Software rendering ,Volume rendering ,Terrain rendering ,Image-based modeling and rendering ,Real-time rendering ,Artificial intelligence ,business ,Alternate frame rendering ,Texture memory ,Texture mapping - Abstract
We present an extension of View-Dependent Texture Mapping (VDTM) allowing rendering of complex geometric meshes at high frame rates without the usual blurring or skinning artifacts. We combine a hybrid geometric and image-based representation of a given 3D object to speed up rendering at the cost of a slight loss of visual accuracy. During a precomputation step, we store an image-based version of the original mesh by simply and quickly computing textures from viewpoints positioned around it by the user. During the rendering step, we use these textures to map colors and geometric details on the fly onto the surface of a low-polygon-count version of the mesh. Real-time rendering is achieved while combining up to three viewpoints at a time, using pixel shaders. No parameterization of the mesh is needed, and occlusion effects are taken into account while computing on the fly the best viewpoints for a given pixel. Moreover, the integration of this method into common real-time rendering systems is straightforward and allows applying self-shadowing as well as other z-buffer effects.
- Published
- 2005
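The per-pixel combination of up to three stored viewpoints can be illustrated with a simple angle-based weighting scheme (a hypothetical sketch; the paper additionally selects viewpoints per pixel using visibility, which is omitted here):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def blend_viewpoints(view_dir, stored):
    """Blend colors from the stored viewpoints that best face the current view.

    stored: list of (direction, color) pairs captured during precomputation.
    Weights favor stored views aligned with the current view direction, and
    only the three best are combined, mirroring the 'up to three viewpoints'
    limit in the abstract.
    """
    view_dir = normalize(view_dir)
    scored = []
    for d, color in stored:
        w = max(0.0, sum(a * b for a, b in zip(view_dir, normalize(d))))
        scored.append((w, color))
    scored.sort(key=lambda wc: -wc[0])
    best = [wc for wc in scored[:3] if wc[0] > 0.0]
    total = sum(w for w, _ in best)
    if total == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(sum(w * c[i] for w, c in best) / total for i in range(3))
```

In a pixel shader the same weighting runs per fragment, with the stored views bound as textures and the weights computed from interpolated directions.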
22. Real-time painterly rendering for MR applications
- Author
-
Daniel Sperl and Michael J. Haller
- Subjects
Parallel rendering ,Pixel shaders ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Software rendering ,Animation ,Terrain rendering ,Mixed reality ,Real-time rendering ,3D rendering ,Non-photorealistic rendering ,Rendering (computer graphics) ,Computer graphics (images) ,Computer vision ,Tiled rendering ,Artificial intelligence ,business ,Alternate frame rendering ,Texture memory ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In this paper we describe a real-time system for rendering AR/MR applications in a painterly style. Impressionistic images are created using a large number of brush strokes, which are organized as 3D particles to achieve frame-to-frame coherence. Reference pictures are used to compute the properties of each stroke. The presented technique is based on B. J. Meier's "Painterly Rendering for Animation". We modified Meier's algorithm for real-time AR/MR environments by making extensive use of modern 3D hardware. Vertex and pixel shaders allow both the rendering of thousands of brush strokes per frame and the correct application of their properties. Direct rendering to textures allows rapid generation of reference pictures and assures great flexibility, since arbitrary rendering systems can be combined (e.g., painterly rendering of toon-shaded objects).
- Published
- 2004
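The core of Meier-style painterly rendering — strokes as depth-sorted particles colored from a reference picture — can be sketched as follows (a minimal CPU version with single-pixel "strokes"; the paper renders thousands of textured strokes via vertex and pixel shaders):

```python
def project(p, width, height):
    """Trivial orthographic projection of a 3D particle into pixel coordinates
    (a stand-in for the real camera transform)."""
    x, y, _ = p
    return int(x * (width - 1)), int(y * (height - 1))

def paint(particles, reference, width, height):
    """Render brush-stroke particles back-to-front, each colored by sampling
    the reference picture at its projected position (Meier-style)."""
    canvas = [[(1.0, 1.0, 1.0)] * width for _ in range(height)]
    # Sort far-to-near so closer strokes overwrite farther ones.
    for p in sorted(particles, key=lambda p: -p[2]):
        px, py = project(p, width, height)
        color = reference[py][px]
        # A 'brush stroke' here is a 1-pixel dab; real strokes are textured quads.
        canvas[py][px] = color
    return canvas
```

Because the particles live in 3D, the same strokes reappear in consistent places from frame to frame, which is what gives the method its temporal coherence.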
23. Generating subdivision curves with L-systems on a GPU
- Author
-
Przemyslaw Prusinkiewicz and Radomír Měch
- Subjects
Computer Science::Graphics ,Pixel shaders ,business.industry ,Computer science ,Computer graphics (images) ,Graphics processing unit ,Graphics ,General-purpose computing on graphics processing units ,business ,ComputingMethodologies_COMPUTERGRAPHICS ,Subdivision ,Rendering (computer graphics) ,Computational science - Abstract
The introduction of floating-point pixel shaders has initiated a trend of moving algorithms from CPUs to graphics cards. The first algorithms were in the rendering domain, but recently we have witnessed increased interest in modeling algorithms as well. In this paper we present techniques for generating subdivision curves on a modern Graphics Processing Unit (GPU). We use an existing method for generating subdivision curves with L-systems, we extend these L-systems to implement adaptive subdivision, and we show how these L-systems can be implemented on a GPU. We chose L-systems because they can express many modeling algorithms in a compact way and are parallel in nature, making them an attractive paradigm for programming a GPU.
- Published
- 2003
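The flavor of parallel rewriting for subdivision can be conveyed with Chaikin's classic corner-cutting rule, where every edge is rewritten into two new points in one pass — exactly the kind of rule L-systems express (a CPU sketch; the paper evaluates such rules in pixel shaders and also supports adaptive subdivision, omitted here):

```python
def chaikin_step(points, closed=False):
    """One parallel rewriting step: each edge (P, Q) produces the two points
    3/4 P + 1/4 Q and 1/4 P + 3/4 Q (corner cutting)."""
    edges = list(zip(points, points[1:] + (points[:1] if closed else [])))
    out = []
    for (px, py), (qx, qy) in edges:
        out.append((0.75 * px + 0.25 * qx, 0.75 * py + 0.25 * qy))
        out.append((0.25 * px + 0.75 * qx, 0.25 * py + 0.75 * qy))
    return out

def subdivide(points, steps, closed=False):
    """Iterate the rewriting step; the polyline converges to a smooth
    (quadratic B-spline) curve."""
    for _ in range(steps):
        points = chaikin_step(points, closed)
    return points
```

Each output point depends only on one local edge, so all points of a step can be produced independently — the data parallelism that makes the GPU mapping attractive.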
24. Rendering higher order finite element surfaces in hardware
- Author
-
David Thompson and Rahul Khardekar
- Subjects
Pixel shaders ,Computer science ,business.industry ,Graphics hardware ,Software rendering ,Volume rendering ,Graphics pipeline ,Rendering (computer graphics) ,Rendering equation ,Computer Science::Graphics ,Computer graphics (images) ,Tiled rendering ,Graphics ,Alternate frame rendering ,business ,Texture memory ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Graphics hardware is becoming flexible enough to move more than just the traditional rendering pipeline off the CPU and onto the graphics card. We demonstrate a technique for rendering nonlinear (in our case, quadratic) finite element boundaries, deflected by a vector field value and colored by a scalar field value. Simple performance measurements indicate that moving field value interpolation to the graphics card yields roughly a 50% speedup; however, current hardware limits the color precision of the output. Finally, we discuss approaches to circumvent the precision problems and other limitations using next-generation hardware.
- Published
- 2003
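The field-value interpolation being moved onto the graphics card amounts to evaluating quadratic shape functions per fragment. A minimal CPU reference for one element edge, using the standard 3-node Lagrange basis (function names are illustrative):

```python
def quadratic_shape(t):
    """Lagrange shape functions for a 3-node quadratic element on t in [0, 1],
    with nodes at t = 0, 0.5, 1. Each function is 1 at its own node and 0 at
    the other two."""
    return (2.0 * (t - 0.5) * (t - 1.0),
            -4.0 * t * (t - 1.0),
            2.0 * t * (t - 0.5))

def interpolate_edge(nodes, scalars, t):
    """Deflected position and scalar value along a quadratic element edge.
    nodes: three (x, y) control nodes; scalars: field values at those nodes."""
    n = quadratic_shape(t)
    x = sum(ni * xi for ni, (xi, _) in zip(n, nodes))
    y = sum(ni * yi for ni, (_, yi) in zip(n, nodes))
    s = sum(ni * si for ni, si in zip(n, scalars))
    return (x, y), s
```

The precision issue the abstract mentions arises when `s` must pass through an 8-bit color channel on its way to the colormap lookup.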
25. Real-time image-space outlining for non-photorealistic rendering
- Author
-
Drew Card, Jason L. Mitchell, and Chris Brennan
- Subjects
Pixel shaders ,Pixel ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Software rendering ,Image-based modeling and rendering ,GeneralLiterature_MISCELLANEOUS ,3D rendering ,Non-photorealistic rendering ,Rendering (computer graphics) ,Computer graphics (images) ,Shadow ,Computer vision ,Shading ,Artificial intelligence ,business ,Texture mapping ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In Non-Photorealistic Rendering (NPR), outlines at object silhouettes, shadow edges and texture boundaries are important visual cues which have previously been difficult to generate in real-time. We present an image-space technique which uses pixel shading hardware to generate these three classes of outlines in real time. In all three cases, we render alternate representations of the desired scene into texture maps which are subsequently processed by pixel shaders to find discontinuities corresponding to outlines in the scene. The outlines are then composited with the shaded scene.
- Published
- 2002
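The discontinuity pass can be sketched as a neighbor comparison over a per-pixel ID buffer (a minimal stand-in for the pixel-shader filter; the paper processes normal, depth, and texture-boundary discontinuities the same way):

```python
def outline_mask(id_buffer):
    """Mark a pixel as outline if any 4-neighbor stores a different object ID.

    Mirrors the two-pass structure: the scene is first rendered into a buffer
    of per-pixel IDs (or normals/depths), then a filter pass compares each
    texel against its neighbors to find discontinuities.
    """
    h, w = len(id_buffer), len(id_buffer[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and id_buffer[ny][nx] != id_buffer[y][x]:
                    mask[y][x] = True
                    break
    return mask
```

The resulting mask is then composited over the shaded scene to draw the outlines.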
26. Perlin noise pixel shaders
- Author
-
John Hart
- Subjects
Pixel shaders ,Deferred shading ,Pixel ,business.industry ,Computer science ,Graphics hardware ,OpenGL ,Volume rendering ,Graphics pipeline ,Computer graphics (images) ,Shading ,Shading language ,business ,Perlin noise ,Shader ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
While working on a method for supporting real-time procedural solid texturing, we developed a general purpose multipass pixel shader to generate the Perlin noise function. We implemented this algorithm on SGI workstations using accelerated OpenGL PixelMap and PixelTransfer operations, achieving a rate of 2.5 Hz for a 256x256 image. We also implemented the noise algorithm on the NVidia GeForce2 using register combiners. Our register combiner implementation required 375 passes, but ran at 1.3 Hz. This exercise illustrated a variety of abilities and shortcomings of current graphics hardware. The paper concludes with an exploration of directions for expanding pixel shading hardware to further support iterative multipass pixel-shader applications.
- Published
- 2001
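For reference, the function being generated above is Perlin's gradient noise. A compact 2D CPU version using the standard construction — a permutation table, lattice gradients, and a quintic fade — is below (the paper's multipass GPU formulation is not reproduced):

```python
import math, random

# Precompute a permutation table and gradient set, as in Perlin's construction.
_rng = random.Random(42)
_perm = list(range(256))
_rng.shuffle(_perm)
_perm += _perm  # doubled to avoid index wrapping
_grads = [(math.cos(a), math.sin(a))
          for a in [2.0 * math.pi * i / 8 for i in range(8)]]

def _fade(t):
    # Perlin's quintic smoothstep: 6t^5 - 15t^4 + 10t^3.
    return t * t * t * (t * (t * 6 - 15) + 10)

def _grad_dot(ix, iy, dx, dy):
    # Hash the lattice corner to a gradient, dot with the offset vector.
    g = _grads[_perm[_perm[ix & 255] + (iy & 255)] & 7]
    return g[0] * dx + g[1] * dy

def perlin2d(x, y):
    """2D gradient noise in roughly [-1, 1]; zero at integer lattice points."""
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    u, v = _fade(fx), _fade(fy)
    n00 = _grad_dot(x0, y0, fx, fy)
    n10 = _grad_dot(x0 + 1, y0, fx - 1, fy)
    n01 = _grad_dot(x0, y0 + 1, fx, fy - 1)
    n11 = _grad_dot(x0 + 1, y0 + 1, fx - 1, fy - 1)
    nx0 = n00 + u * (n10 - n00)
    nx1 = n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)
```

The many passes in the register-combiner implementation come from emulating exactly these steps — table lookups, dot products, and smooth interpolation — with fixed-function blending operations.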
27. Fast volumetric deformation on general purpose hardware
- Author
-
Christof Rezk-Salama, G. Greiner, Grzegorz Soza, and Michael Scheuering
- Subjects
Vertex (computer graphics) ,Pixel shaders ,business.industry ,Computer science ,OpenGL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volume (computing) ,Volume rendering ,Computer graphics ,Piecewise linear function ,business ,Shader ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS ,Interpolation - Abstract
High performance deformation of volumetric objects is a common problem in computer graphics that has not yet been handled sufficiently. As a supplement to 3D texture based volume rendering, a novel approach is presented, which adaptively subdivides the volume into piecewise linear patches. An appropriate mathematical model based on tri-linear interpolation and its approximations is proposed. New optimizations are introduced in this paper which are especially tailored to an efficient implementation using general purpose rasterization hardware, including new technologies, such as vertex programs and pixel shaders. Additionally, a high performance model for local illumination calculation is introduced, which meets the aesthetic requirements of visual arts and entertainment. The results demonstrate the significant performance benefit and allow for time-critical applications, such as computer assisted surgery.
- Published
- 2001
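The mathematical core of the patch model above is trilinear interpolation within each subdivided cell. A minimal reference implementation (the paper's hardware-tailored approximations of this formula are omitted):

```python
def lerp(a, b, t):
    return a + t * (b - a)

def trilinear(c, x, y, z):
    """Trilinearly interpolate corner values c[i][j][k] (i, j, k in {0, 1})
    at local coordinates (x, y, z) inside the unit cube: three nested lerps."""
    c00 = lerp(c[0][0][0], c[1][0][0], x)
    c10 = lerp(c[0][1][0], c[1][1][0], x)
    c01 = lerp(c[0][0][1], c[1][0][1], x)
    c11 = lerp(c[0][1][1], c[1][1][1], x)
    return lerp(lerp(c00, c10, y), lerp(c01, c11, y), z)
```

Piecewise linear patches approximate this function well when the subdivision is fine enough, which is what lets the deformation run on rasterization hardware.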
28. High-Quality Real-Time Visualization of Forests Using Alternative Representations
- Author
-
Senegas, Franck, Models, Algorithms and Geometry for Computer Generated Image Graphics (iMAGIS), Laboratoire d'informatique GRAphique, VIsion et Robotique de Grenoble (GRAVIR - IMAG), Université Joseph Fourier - Grenoble 1 (UJF)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National Polytechnique de Grenoble (INPG)-Centre National de la Recherche Scientifique (CNRS)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National Polytechnique de Grenoble (INPG)-Centre National de la Recherche Scientifique (CNRS)-Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria), Institut National Polytechnique Grenoble (INPG), and Fabrice Neyret
- Subjects
forest ,pixel shaders ,volume rendering ,register combiners ,per-pixel lighting ,trees ,texels ,[INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] - Abstract
National audience; Forest rendering is in general a difficult task, owing to the sheer amount of information to represent. Geometry-oriented approaches can prove expensive for a forest of respectable size (at least 100 trees in the view frustum). Alternative representations exist: texels. Originally introduced by Kajiya to represent fur, they have since been shown to be usable for rendering trees as well. Alexandre Meyer and Fabrice Neyret devised a method for visualizing them in real time. Moreover, the new generation of graphics hardware makes per-pixel computation possible. This document presents a method that uses texels to represent trees, exploiting pixel shading functionality to simulate shading on the texels.
- Published
- 2001