24 results for "S. Langer"
Search Results
2. Depth from Defocus on a Transmissive Diffraction Mask-based Sensor
- Author
-
Michael S. Langer, Ji-Ho Cho, and Neeth Kunnath
- Subjects
Diffraction, Pixel, Computer science, Ambiguity, Grating, Symmetric function, Cardinal point, Optics, Kernel (image processing), Depth map, Computer vision and pattern recognition, Artificial intelligence - Abstract
In traditional depth from defocus (DFD) models, the blur kernel is a symmetric function whose width is proportional to the absolute distance in diopters between the scene point and the focal plane. A symmetric blur kernel, however, implies a two-fold front-back ambiguity in the depth estimates. To resolve this ambiguity using only a single image of a scene, one typically introduces an asymmetry into the optics. Here we propose a fast and simple solution which uses a Transmissive Diffraction Mask (TDM), namely a transmissive grating placed directly in front of the sensor. The grating is vertically oriented with a period of two pixels, and yields two interleaved images which both have asymmetric blur kernels. The sum of the two kernels behaves like a traditional symmetric blur kernel, so we can apply a classical single-image edge-based DFD method to the sum of the two TDM images to estimate depth up to a sign ambiguity at each point. We then show how to use the difference of the two TDM images to resolve this sign ambiguity. The result is a sparse depth map which one can interpolate to a dense depth map using standard techniques. (A code sketch of this pipeline follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
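The pipeline above lends itself to a compact illustration. The following NumPy sketch assumes the two interleaved sub-images are separated by column parity, and uses a crude gradient-based blur proxy and a simple sign test in place of the paper's calibrated edge-based DFD estimator; it is a sketch of the idea, not the authors' implementation.

```python
# Minimal sketch of the TDM depth-from-defocus pipeline described above.
# Assumptions: sub-images separated by column parity, a gradient-based blur
# proxy instead of the paper's edge-based DFD estimator, simplified sign test.
import numpy as np

def split_tdm(raw):
    """Split a TDM sensor image into its two interleaved column images."""
    left = raw[:, 0::2].astype(float)    # even columns (one grating phase)
    right = raw[:, 1::2].astype(float)   # odd columns (other grating phase)
    return left, right

def blur_magnitude(img, eps=1e-6):
    """Crude per-pixel blur proxy: low gradient energy -> large blur."""
    gy, gx = np.gradient(img)
    return 1.0 / (np.hypot(gx, gy) + eps)

def tdm_dfd(raw):
    left, right = split_tdm(raw)
    total = left + right                 # behaves like a symmetric blur kernel
    magnitude = blur_magnitude(total)    # unsigned defocus, up to sign
    sign = np.sign(left - right)         # difference image resolves front/back
    return sign * magnitude              # signed defocus estimate per pixel
```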
3. What is a Good Model for Depth from Defocus?
- Author
-
Fahim Mannan and Michael S. Langer
- Subjects
Depth from defocus, Diffraction, Gaussian blur, Inverse, Gaussian approximation, Kernel (image processing), Computer vision, Coded aperture, Artificial intelligence, Deconvolution, Mathematics - Abstract
Different models for estimating depth from defocused images have been proposed over the years. Typically, these models use two differently defocused images. Many of them work on the principle of transforming one or both of the images so that the transformed images become equivalent. One of the most common models is to estimate the relative blur between a pair of defocused images and compute depth from it. Another model, known as the Blur Equalization Technique (BET), works by blurring both images by an appropriate pair of blur kernels. The inverse approach is to deblur both images by an appropriate pair of blur kernels. In this paper we compare the performance of these models to find under what conditions they work best. We show that the common approach of using the Gaussian approximation of the relative blur kernel performs worse than a more general approximation of the relative blur kernel. Furthermore, we show that despite the reduction in signal content in BET, it works well in most circumstances. Finally, the performance of deconvolution-based approaches depends in large part on the shape of the blur kernel and is more appropriate for the coded aperture setup. (A toy comparison of two of these models follows this entry.)
- Published
- 2016
- Full Text
- View/download PDF
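Two of the transformations compared above, relative blur and blur equalization, can be illustrated with a small experiment. The Gaussian kernels, the brute-force sigma search, and the synthetic texture below are simplifying assumptions, not the paper's estimators.

```python
# Toy comparison of two DFD models: relative blur (blur the sharper image
# until it matches the blurrier one) and blur equalization (cross-blur both).
import numpy as np
from scipy.ndimage import gaussian_filter

def relative_blur_sigma(img1, img2, sigmas=np.linspace(0.1, 5.0, 50)):
    """Find sigma_rel such that blurring img1 best matches img2."""
    errs = [np.mean((gaussian_filter(img1, s) - img2) ** 2) for s in sigmas]
    return sigmas[int(np.argmin(errs))]

def bet_pair(img1, img2, sigma1, sigma2):
    """Blur Equalization Technique: cross-blur so both sides carry the same
    total blur sqrt(sigma1**2 + sigma2**2)."""
    return gaussian_filter(img1, sigma2), gaussian_filter(img2, sigma1)

# Example: two synthetic defocused observations of the same random texture.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
obs1, obs2 = gaussian_filter(sharp, 1.0), gaussian_filter(sharp, 2.0)
print("estimated relative sigma:", relative_blur_sigma(obs1, obs2))
# Under the Gaussian model the true relative sigma is sqrt(2**2 - 1**2) ~ 1.73.
```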
4. Blur Calibration for Depth from Defocus
- Author
-
Michael S. Langer and Fahim Mannan
- Subjects
Depth from defocus, Pixel, Aperture, Computer science, Core component, Gaussian, Gaussian blur, Ranging, Kernel (image processing), Computer vision, Artificial intelligence - Abstract
Depth from defocus based methods rely on measuring the depth-dependent blur at each pixel of the image. A core component in the defocus blur estimation process is the depth-variant blur kernel. This blur kernel is often approximated as a Gaussian or pillbox kernel, which only works well for small amounts of blur. In general the blur kernel depends on the shape of the aperture and can vary considerably with depth. For more accurate blur estimation it is necessary to precisely model the blur kernel. In this paper we present a simple and accurate approach for performing blur kernel calibration for depth from defocus. We also show how to estimate the relative blur kernel from a pair of defocused blur kernels. Our proposed approach can estimate blurs ranging from small (a single pixel) to sufficiently large (e.g. 77 × 77 pixels in our experiments). We also experimentally demonstrate that our relative blur estimation method can recover blur kernels for complex asymmetric coded apertures, which has not been shown before. (A sketch of relative-kernel estimation follows this entry.)
- Published
- 2016
- Full Text
- View/download PDF
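The relative-kernel estimation step described above can be sketched as a regularized division in the frequency domain. The Wiener-style regularization and the assumption that both calibrated kernels are centered in equal-sized arrays are illustrative choices, not the paper's calibration procedure.

```python
# Sketch of estimating a relative blur kernel k_rel from a calibrated pair
# of defocused kernels, so that k_small convolved with k_rel ~ k_large.
import numpy as np

def relative_kernel(k_small, k_large, reg=1e-3):
    size = k_small.shape                                  # both kernels equal-sized
    K1 = np.fft.fft2(k_small, s=size)
    K2 = np.fft.fft2(k_large, s=size)
    K_rel = K2 * np.conj(K1) / (np.abs(K1) ** 2 + reg)    # regularized ratio
    k_rel = np.fft.fftshift(np.real(np.fft.ifft2(K_rel)))
    k_rel = np.clip(k_rel, 0.0, None)                     # enforce non-negativity
    return k_rel / k_rel.sum()
```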
5. Optimal Camera Parameters for Depth from Defocus
- Author
-
Michael S. Langer and Fahim Mannan
- Subjects
Depth from defocus, Kernel (image processing), Computer science, Error variance, Aperture, Focal length, Computer vision, Artificial intelligence, Aperture ratio, Finite aperture, Upper and lower bounds - Abstract
Pictures taken with finite-aperture lenses typically have out-of-focus regions. While such defocus blur is useful for creating photographic effects, it can also be used for depth estimation. In this paper, we look at different camera settings for Depth from Defocus (DFD), the conditions under which depth can be estimated unambiguously for those settings, and the optimality of different settings in terms of the lower bound on error variance. We present results for general camera settings, as well as for the two most widely used camera settings, namely variable aperture and variable focus. We show that for variable focus, the range of depth needs to be larger than twice the focal length to unambiguously estimate depth. We analytically derive the optimal aperture ratio, and also show that there is no single optimal parameter for variable focus. Furthermore, we show how to choose focus in order to minimize the error variance in a particular region of the scene. (A numerical sketch of blur versus depth for these settings follows this entry.)
- Published
- 2015
- Full Text
- View/download PDF
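A small numerical sketch can make the trade-offs above concrete: under a thin-lens model, the blur-circle diameter as a function of scene depth can be tabulated for a variable-aperture pair and a variable-focus pair of settings. The thin-lens formula and the specific focal length, apertures, and focus distances are illustrative assumptions, not the paper's derivation.

```python
# Defocus blur versus depth under a thin-lens model, for variable-aperture
# and variable-focus camera settings (50 mm lens; illustrative numbers).
import numpy as np

def blur_diameter(depth, focus_dist, focal_len, aperture_diam):
    """Blur-circle diameter on the sensor (all quantities in metres)."""
    return aperture_diam * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))

depths = np.linspace(0.5, 5.0, 10)
f = 0.05                                           # 50 mm focal length
c_f2 = blur_diameter(depths, 1.5, f, f / 2.0)      # wide aperture, focus 1.5 m
c_f8 = blur_diameter(depths, 1.5, f, f / 8.0)      # narrow aperture, focus 1.5 m
c_near = blur_diameter(depths, 1.0, f, f / 4.0)    # same aperture, focus 1 m
c_far = blur_diameter(depths, 2.0, f, f / 4.0)     # same aperture, focus 2 m
for d, a, b in zip(depths, c_near, c_far):
    print(f"depth {d:4.2f} m   blur(focus 1 m) {a*1e6:8.1f} um   blur(focus 2 m) {b*1e6:8.1f} um")
```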
6. Omnistereo Video Textures without Ghosting
- Author
-
Sébastien Roy, Michael S. Langer, and Vincent Couture
- Subjects
Panorama, Stereo cameras, Computer science, Image stitching, Image texture, Computer graphics (images), Computer vision, Artificial intelligence, Image sensor, Ghosting, Computer stereo vision, Stereo camera - Abstract
An omnistereo pair of images provides depth information from stereo up to 360 degrees around a central observer. A method for synthesizing omnistereo video textures, based on blending overlapping stereo videos that were filmed several seconds apart, was recently introduced. While it produced loopable omnistereo videos that can be displayed up to 360 degrees around a viewer, ghosting was visible within the blended overlaps. This paper presents a stereo stitching method to render these overlaps without any ghosting. The stitching method uses a graph-cut minimization. We show results for scenes with different types of motion, such as water flows, leaves in the wind, and moving people.
- Published
- 2013
- Full Text
- View/download PDF
7. Integrating immersive 3D worlds and real lab equipment for teaching mechatronics
- Author
-
D. Muller, S. Langer, and A. Chilliischi
- Subjects
Engineering, Virtual instrumentation, Interoperability, Virtual reality, Mechatronics, Mixed reality, Virtual machine, Human–computer interaction, Interfacing, Embedded system, The Internet - Abstract
In this paper, a learning environment for mechatronics training is proposed which combines immersive 3D worlds and real lab equipment. A key feature of the system is the tight coupling between virtual and real mechatronics, achieved by implementing a system that interfaces an immersive 3D world with physical sensors and actuators. A communication link between a microcontroller and a virtual mechatronics device is available. The virtual system is modeled in OpenSimulator, and users are able to control mechatronics hardware via a 3D virtual environment and vice versa. The proposed solution enables interoperability with both local and remote lab devices. This includes remote experimentation in which all lab equipment is real and accessible via the Internet, but also examples in which real devices interact with simulation models created in a 3D virtual world. The concept is open for further enhancements. (A sketch of such a communication link follows this entry.)
- Published
- 2012
- Full Text
- View/download PDF
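The kind of communication link described above can be sketched as a small relay process. The serial port name, baud rate, TCP port, and line-based message format below are assumptions for illustration; the actual system couples the hardware to OpenSimulator.

```python
# Sketch of a relay between a microcontroller (serial port) and a
# virtual-world client (TCP socket): sensor readings go out, actuator
# commands come back.  Port names and message format are assumed.
import socket
import serial  # pyserial

SERIAL_PORT, BAUD = "/dev/ttyUSB0", 115200   # assumed microcontroller link
TCP_HOST, TCP_PORT = "0.0.0.0", 5005         # assumed virtual-world side

def run_bridge():
    mcu = serial.Serial(SERIAL_PORT, BAUD, timeout=0.1)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((TCP_HOST, TCP_PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.settimeout(0.1)
    while True:
        reading = mcu.readline()             # e.g. b"SENSOR temp=23.5\n"
        if reading:
            conn.sendall(reading)            # sensor value -> virtual world
        try:
            command = conn.recv(1024)        # e.g. b"ACTUATOR motor=ON\n"
            if command:
                mcu.write(command)           # command -> real hardware
        except socket.timeout:
            pass

if __name__ == "__main__":
    run_bridge()
```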
8. Panoramic stereo video textures
- Author
-
Michael S. Langer, Vincent Couture, and Sébastien Roy
- Subjects
Computer science, Visualization, Image stitching, Lens (optics), Stereopsis, Circular motion, Image texture, Computer graphics (images), Computer vision, Artificial intelligence, Stereo camera, Smoothing - Abstract
A panoramic stereo (or omnistereo) pair of images provides depth information from stereo up to 360 degrees around a central observer. Because omnistereo lenses or mirrors do not yet exist, synthesizing omnistereo images requires multiple stereo camera positions and baseline orientations. Recent omnistereo methods stitch together many small field-of-view images, called slits, which are captured by one or two cameras following a circular motion. However, these methods produce omnistereo images for static scenes only. The situation is much more challenging for dynamic scenes, since stitching needs to occur over both space and time and should synchronize the motion between left and right views as much as possible. This paper presents the first method for synthesizing panoramic stereo video textures. The method uses full frames rather than slits and uses blending across seams rather than smoothing or matching based on graph cuts. The method produces loopable panoramic stereo videos that can be displayed up to 360 degrees around a viewer. (A sketch of the blending step follows this entry.)
- Published
- 2011
- Full Text
- View/download PDF
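The blending step mentioned above can be sketched for a single pair of overlapping frames: a linear ramp mixes the two frames over their shared strip. The strip width and the ramp shape are illustrative assumptions, not the paper's exact blending function.

```python
# Sketch of blending two frames over a shared vertical strip with a linear ramp.
import numpy as np

def blend_overlap(frame_a, frame_b, overlap):
    """Blend the last `overlap` columns of frame_a with the first `overlap`
    columns of frame_b and return the stitched frame."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :]      # 1 -> 0 ramp
    if frame_a.ndim == 3:                                 # colour frames
        alpha = alpha[..., None]
    blended = alpha * frame_a[:, -overlap:] + (1.0 - alpha) * frame_b[:, :overlap]
    return np.concatenate([frame_a[:, :-overlap], blended,
                           frame_b[:, overlap:]], axis=1)
```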
9. Performance of Stereo Methods in Cluttered Scenes
- Author
-
Michael S. Langer and Fahim Mannan
- Subjects
Monocular, Markov random field, Pixel, Computer science, Visibility (geometry), Binary number, Boundary (topology), Context (language use), Computer vision, Artificial intelligence, Iterative reconstruction - Abstract
This paper evaluates the performance of different stereo formulations in the context of cluttered scenes [1] with a large number of binocular-monocular boundaries (i.e. occlusion boundaries). Three stereo methods employing three different constraints are considered: basic [2], [3] (Basic), uniqueness [4] (KZ-uni), and visibility [5] (KZ-vis). Scenes for the experiments are synthetically generated, and some are shown to have significantly more occlusion boundaries than the Middlebury scenes. This allows evaluating the methods on different types of scenes to understand the efficacy of different constraints for cluttered scenes. The evaluation considers mislabeled pixels of different types (binocular/monocular) in different regions (on or away from occlusion boundaries). We have found that for sparse scenes (fewer occlusion boundaries) all three methods have similar performance. For dense scenes the performance is dominated by pixels on the boundary. For binocular pixels Basic always does better, but for monocular pixels KZ-vis has the lowest error. If binary occlusion labeling is considered, then the cross-checked version of the basic constraint (Basic-cc) performs best, followed by KZ-uni. (A sketch of this per-class error breakdown follows this entry.)
- Published
- 2011
- Full Text
- View/download PDF
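The per-class evaluation described above can be sketched as follows: disparity errors are reported separately for binocular and monocular (half-occluded) pixels, on and off occlusion boundaries. Defining the boundary band by dilating the occlusion mask, and the 1-pixel error threshold, are assumptions for illustration.

```python
# Per-class disparity error breakdown for a cluttered-scene evaluation.
import numpy as np
from scipy.ndimage import binary_dilation

def error_breakdown(est_disp, gt_disp, occluded, thresh=1.0, radius=2):
    """occluded: boolean mask of monocular (half-occluded) ground-truth pixels."""
    bad = np.abs(est_disp - gt_disp) > thresh
    boundary = binary_dilation(occluded, iterations=radius) & ~occluded
    classes = {
        "binocular, off boundary": ~occluded & ~boundary,
        "binocular, on boundary": boundary,
        "monocular": occluded,
    }
    return {name: float(bad[mask].mean()) if mask.any() else float("nan")
            for name, mask in classes.items()}
```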
10. Removing Partial Occlusion from Blurred Thin Occluders
- Author
-
Michael S. Langer, Scott McCloskey, and Kaleem Siddiqi
- Subjects
Pixel, Aperture, Computer science, Image processing, Lens (optics), Position (vector), Contrast (vision), Computer vision, Artificial intelligence, Partial occlusion, Image restoration - Abstract
We present a method to remove partial occlusion that arises from out-of-focus thin foreground occluders such as wires, branches, or a fence. Such partial occlusion causes the irradiance at a pixel to be a weighted sum of the radiance of a blurred foreground occluder and that of the background. The result is that the background component has lower contrast than it would have if seen without the occluder. In order to remove the contribution of the foreground in such regions, we characterize the position and size of the occluder in a narrow-aperture image. In subsequent images with wider apertures, we use this characterization to remove the contribution of the foreground, thereby restoring contrast in the background. We demonstrate our method on real camera images without assuming that the background is static. (A sketch of the underlying compositing model follows this entry.)
- Published
- 2010
- Full Text
- View/download PDF
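The compositing model underlying the method above can be sketched directly: in the partially occluded region the observed intensity is a mixture of a blurred foreground and the background, so the background is recovered by inverting that mixture once the foreground weight is known. Assuming a precomputed alpha map and a known foreground radiance simplifies away the narrow-aperture characterization step.

```python
# Invert the mixture observed = alpha * foreground + (1 - alpha) * background.
import numpy as np

def remove_partial_occlusion(observed, alpha, foreground, eps=1e-3):
    """alpha: blurred occluder coverage in [0, 1); foreground: occluder radiance."""
    alpha = np.clip(alpha, 0.0, 1.0 - eps)       # avoid dividing by ~0
    background = (observed - alpha * foreground) / (1.0 - alpha)
    return np.clip(background, 0.0, 1.0)
```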
11. Planar orientation from blur gradients in a single image
- Author
-
Michael S. Langer and Scott McCloskey
- Subjects
Pixel, Aperture, Orientation (computer vision), Edge detection, Tilt (optics), Image texture, Computer vision, Pinhole (optics), Artificial intelligence, Focus (optics), Mathematics - Abstract
We present a focus-based method to recover the orientation of a textured planar surface patch from a single image. The method exploits the relationship between the orientation of equifocal (i.e. uniformly blurred) contours in the image and the plane's tilt and slant angles. Compared to previous methods that determine planar orientation, we make fewer assumptions about the texture and remove the restriction that images must be acquired through a pinhole aperture. Our method estimates the slant and tilt of an image patch from a single image, whereas depth from defocus methods require two or more input images. Experiments are performed using a large set of test images. (A sketch of the tilt estimate follows this entry.)
- Published
- 2009
- Full Text
- View/download PDF
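The geometric idea above can be sketched as follows: if defocus blur varies linearly across a planar patch, the direction of the blur gradient gives the plane's tilt, and equifocal contours run perpendicular to it. The inverse-gradient-energy blur proxy below is a crude placeholder for a real local blur estimator, and turning the gradient magnitude into a slant angle would additionally require camera parameters.

```python
# Fit a linear model to a local blur map; the fitted gradient direction
# gives the tilt of the planar patch.
import numpy as np

def local_blur_map(img, eps=1e-6):
    gy, gx = np.gradient(img.astype(float))
    return 1.0 / (np.hypot(gx, gy) + eps)        # placeholder blur proxy

def tilt_from_blur(img):
    blur = local_blur_map(img)
    h, w = blur.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, blur.ravel(), rcond=None)  # blur ~ a*x + b*y + c
    return np.degrees(np.arctan2(coeffs[1], coeffs[0]))        # tilt direction
```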
12. A Cue to Shading: Elongations near Intensity Maxima
- Author
-
Michael S. Langer and D. Gipsman
- Subjects
Higher-order statistics, Geometry, Surface finish, Skewness, Human visual system model, Computer vision, Shading, Artificial intelligence, Elongation, Maxima, Intensity - Abstract
The human visual system is often able to recognize shading patterns and to discriminate them from surface reflectance patterns. To understand how this ability is possible, we investigate what makes shading patterns special. We study a statistical property of shading patterns, namely that they tend to be more elongated near intensity maxima. Comparing second-order derivatives of shading and of surface height, we show that intensities typically have an elongated structure relative to surface heights, and that this elongation is more extreme near intensity maxima. This elongation property is formalized in terms of a skewness statistic on the aspect ratios of iso-intensity vs. iso-height curves. (A sketch of such an elongation statistic follows this entry.)
- Published
- 2008
- Full Text
- View/download PDF
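One way to sketch the statistic discussed above is to measure, at local intensity maxima, how anisotropic the image Hessian is, and then summarize the distribution of log aspect ratios with a skewness statistic. The smoothing scale, the maximum detector, and the eigenvalue-ratio definition of elongation below are illustrative assumptions.

```python
# Elongation statistic at intensity maxima via Hessian eigenvalue ratios.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from scipy.stats import skew

def elongation_skewness(img, sigma=2.0):
    img = gaussian_filter(img.astype(float), sigma)
    is_max = (img == maximum_filter(img, size=5)) & (img > img.mean())
    iy, ix = np.gradient(img)
    iyy, iyx = np.gradient(iy)
    ixy, ixx = np.gradient(ix)
    ratios = []
    for y, x in zip(*np.nonzero(is_max)):
        H = np.array([[ixx[y, x], ixy[y, x]], [ixy[y, x], iyy[y, x]]])
        ev = np.sort(np.abs(np.linalg.eigvalsh(H)))
        if ev[1] > 1e-9:
            ratios.append(ev[1] / max(ev[0], 1e-9))   # aspect ratio >= 1
    return skew(np.log(ratios))
```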
13. Creating and implementing medical technologies
- Author
-
Robert S. Langer
- Subjects
Engineering, Engineering management, Materials science and technology, Information technology - Published
- 2008
- Full Text
- View/download PDF
14. Two-frame frequency-based estimation of local motion parallax direction in 3D cluttered scenes
- Author
-
V. Couture, Richard Mann, Michael S. Langer, and A. Caine
- Subjects
Motion field, Frequency domain, Motion estimation, Structure from motion, Computer vision, Artificial intelligence, Observer (special relativity), Frame rate, Parallax, Quarter-pixel motion, Mathematics - Abstract
When an observer moves in a 3D static scene, the resulting motion field depends on the depth of the visible objects and on the observer's instantaneous translation and rotation. It is well-known that the vector difference - or motion parallax - between nearby image motion field vectors points toward the direction of heading and so computing this vector difference can help in estimating the heading direction. For 3D cluttered scenes that contain many objects at many different depths, it can be difficult to compute local image motion vectors because these scenes have many depth discontinuities which corrupt local motion estimates and thus it is unclear how to estimate local motion parallax. Recently a frequency domain method was proposed to address this problem which uses the space-time power spectrum of a sequence of images. The method requires a large number of frames, however, and assumes the observer's motion is constant within these frames. Here we present a frequency-based method which uses two frames only and hence does not suffer from the limitations of the previously proposed method. We demonstrate the effectiveness of the new method using both synthetic and natural images.
- Published
- 2007
- Full Text
- View/download PDF
15. Can Lucas-Kanade be used to estimate motion parallax in 3D cluttered scenes?
- Author
-
V. Couture and Michael S. Langer
- Subjects
Lucas–Kanade method, Motion field, Parallax occlusion mapping, Motion estimation, Optical flow, Parallax mapping, Computer vision, Observer (special relativity), Artificial intelligence, Parallax, Mathematics - Abstract
When an observer moves in a 3D static scene, the motion field depends on the depth of the visible objects and on the observer's instantaneous translation and rotation. By computing the difference between nearby motion field vectors, the observer can estimate the direction of local motion parallax and in turn the direction of heading. It has recently been argued that, in 3D cluttered scenes such as a forest, computing local image motion using classical optical flow methods is problematic, since these classical methods have problems at depth discontinuities. Hence, estimating local motion parallax from optical flow should be problematic as well. In this paper we evaluate this claim. We use the classical Lucas-Kanade method to estimate optical flow and the Rieger-Lawton method to estimate the direction of motion parallax from the estimated flow. We compare the motion parallax estimates to those of the frequency-based method of Mann-Langer. We find that if the Lucas-Kanade estimates are sufficiently pruned, using both an eigenvalue condition and a mean absolute error condition, then the Lucas-Kanade/Rieger-Lawton method can perform as well as or better than the frequency-based method. (A sketch of eigenvalue-based pruning follows this entry.)
- Published
- 2007
- Full Text
- View/download PDF
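The pruning idea above can be sketched with a windowed Lucas-Kanade solver: flow is computed from the local structure tensor, and estimates are kept only where the tensor's smaller eigenvalue exceeds a threshold. The window size and threshold are illustrative, and the residual-error condition used in the paper is omitted here.

```python
# Windowed Lucas-Kanade flow with eigenvalue pruning.
import numpy as np
from scipy.ndimage import uniform_filter

def lucas_kanade(frame0, frame1, win=7, min_eig=1e-3):
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)
    Sxx = uniform_filter(Ix * Ix, win); Sxy = uniform_filter(Ix * Iy, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxt = uniform_filter(Ix * It, win); Syt = uniform_filter(Iy * It, win)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    lam_min = trace / 2 - np.sqrt(np.maximum(trace ** 2 / 4 - det, 0.0))
    valid = lam_min > min_eig                            # eigenvalue condition
    safe_det = np.where(det == 0, 1.0, det)
    u = np.where(valid, (-Syy * Sxt + Sxy * Syt) / safe_det, np.nan)
    v = np.where(valid, (Sxy * Sxt - Sxx * Syt) / safe_det, np.nan)
    return u, v, valid
```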
16. The Reverse Projection Correlation Principle for Depth from Defocus
- Author
-
Michael S. Langer, Scott McCloskey, and Kaleem Siddiqi
- Subjects
Depth from defocus, Pixel, Correlation, Radiance, Computer vision, Artificial intelligence, Invariant (mathematics), Image restoration, Mathematics - Abstract
In this paper, we address the problem of finding depth from defocus in a fundamentally new way. Most previous methods have used an approximate model in which blurring is shift invariant and pixel area is negligible. Our model avoids these assumptions. We consider the area in the scene whose radiance is recorded by a pixel on the sensor, and relate the size and shape of that area to the scene's position with respect to the plane of focus. This is the notion of reverse projection, which allows us to illustrate that, when out of focus, neighboring pixels will record light from overlapping regions in the scene. This overlap results in a measurable change in the correlation between the pixels' intensity values. We demonstrate that this relationship can be characterized in such a way as to recover depth from defocused images. Experimental results show the ability of this relationship to accurately predict depth from correlation measurements. (A toy demonstration of this correlation effect follows this entry.)
- Published
- 2006
- Full Text
- View/download PDF
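The correlation effect described above is easy to demonstrate on synthetic data: as a texture is blurred more (standing in for increasing defocus), the correlation between neighbouring pixel intensities rises. The random texture and Gaussian blur are stand-ins for real sensor data.

```python
# Neighbouring-pixel correlation increases with defocus blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def neighbour_correlation(img):
    """Correlation between each pixel and its right-hand neighbour."""
    return float(np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1])

rng = np.random.default_rng(1)
texture = rng.random((256, 256))
for sigma in [0.0, 0.5, 1.0, 2.0, 4.0]:
    blurred = gaussian_filter(texture, sigma) if sigma > 0 else texture
    print(f"blur sigma {sigma:3.1f}   neighbour correlation {neighbour_correlation(blurred):.3f}")
```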
17. Motion Parallax without Motion Compensation in 3D Cluttered Scenes
- Author
-
Michael S. Langer, Sébastien Roy, Richard Mann, and Vincent Chapdelaine-Couture
- Subjects
Motion compensation, Motion field, Computer science, Motion estimation, Linear motion, Optical flow, Structure from motion, Computer vision, Artificial intelligence, Parallax, Quarter-pixel motion - Abstract
When an observer moves through a rigid 3D scene, points that are near to the observer move with a different image velocity than points that are far away. The difference between image velocity vectors is the direction of motion parallax. This direction vector points towards the observer's translation direction. Hence estimates of the direction of motion parallax are useful for estimating the observer's translation direction. Standard ways to compute the direction of motion parallax either rely on precomputed optical flow, or rely on motion compensation to remove the local image shift caused by observer rotation. Here we present a simple Fourier-based method for estimating the direction of motion parallax directly, that does not require optical flow and motion compensation. The method is real-time and performs accurately for image regions in which multiple motions are present.
- Published
- 2006
- Full Text
- View/download PDF
18. Hardware-Accelerated Simulated Radiography
- Author
-
Nelson Max, Dan Laney, R.J. Frank, S. Langer, Cláudio T. Silva, and Steven P. Callahan
- Subjects
Floating point, Computer science, Volume rendering, Image texture, Mesh generation, Computer graphics (images), Hardware acceleration, Polygon mesh, Hexahedron, Projection algorithm, Computer hardware - Abstract
We present the application of hardware-accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating-point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware-accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. (A sketch of the absorption-only model follows this entry.)
- Published
- 2006
- Full Text
- View/download PDF
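The absorption-only regime mentioned above reduces to Beer-Lambert attenuation along each ray. The sketch below takes rays parallel to one axis of a regular voxel grid; the paper's GPU projection of curvilinear hexahedral and tetrahedral meshes is far more involved.

```python
# Absorption-only simulated radiograph: attenuate the source intensity by
# the line integral of the attenuation coefficient along each parallel ray.
import numpy as np

def simulated_radiograph(mu, voxel_size, i0=1.0, axis=0):
    """mu: 3-D array of attenuation coefficients [1/length]."""
    optical_depth = mu.sum(axis=axis) * voxel_size   # integral of mu along the ray
    return i0 * np.exp(-optical_depth)

# Example: a denser sphere embedded in a uniform block.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
mu = np.full((n, n, n), 0.01)
mu[(x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2 < (n / 4) ** 2] = 0.05
image = simulated_radiograph(mu, voxel_size=0.1)
print(image.shape, float(image.min()), float(image.max()))
```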
19. Seeing Around Occluding Objects
- Author
-
Kaleem Siddiqi, Scott McCloskey, and Michael S. Langer
- Subjects
Pixel, Plane (geometry), Iterative reconstruction, Classification of discontinuities, Radiance, Contrast (vision), Computer vision, Artificial intelligence, Projection, Focus (optics), Mathematics - Abstract
This paper presents a novel method for the removal of unwanted image intensity due to occluding objects far from the plane of focus. Such occlusions may arise in scenes with large depth discontinuities, and result in image regions where both the occluding and background objects contribute to pixel intensities. The contribution of the occluding object's radiance is modeled by reverse projection, and can be removed from this region by a simple operation on the pixels' intensities. Experimental results demonstrate our ability to accurately recover the background's appearance despite significant occlusion. Compared with processing based on a linear model of occlusion, the results show lower error and more accurate contrast.
- Published
- 2006
- Full Text
- View/download PDF
20. Ultrabroadband spectral transfer in extended focal zones: truncated few-cycle Bessel-Gauss beams
- Author
-
Uwe Neumann, G. Steinmeyer, S. Langer, Rüdiger Grunwald, J.-L. Neron, Gero Stibenz, Michel Piché, and Volker Kebbel
- Subjects
Physics, Gauss, Bandwidth (signal processing), Nonlinear optics, Optics, High harmonic generation, Spatial frequency, Decimetre, Bessel function, Gaussian beam - Abstract
By self-apodized truncation of few-cycle pulsed Bessel-Gauss beams, focal zones with decimeter decay lengths were shaped. Both theoretical and experimental results indicate more robust spatio-spectral characteristics at ultrabroad bandwidths compared to Gaussian beam foci. (A sketch of the Bessel-Gauss transverse profile follows this entry.)
- Published
- 2005
- Full Text
- View/download PDF
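The beam family discussed above has a simple transverse profile: a zero-order Bessel function under a Gaussian envelope. The radial wavenumber and waist in the sketch below are illustrative numbers, not the experimental parameters of the paper.

```python
# Transverse field of a Bessel-Gauss beam: J0 profile under a Gaussian envelope.
import numpy as np
from scipy.special import j0

def bessel_gauss_profile(r, k_r, w0):
    """E(r) = J0(k_r * r) * exp(-(r / w0)**2)."""
    return j0(k_r * r) * np.exp(-(r / w0) ** 2)

r = np.linspace(0.0, 2e-3, 512)                  # radius in metres
intensity = bessel_gauss_profile(r, k_r=2e4, w0=1e-3) ** 2
print("central lobe radius ~ 2.405 / k_r =", 2.405 / 2e4, "m")
```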
21. Characterization of ultrashort pulses with spatio-temporal and angular resolution
- Author
-
J.-L. Neron, Rüdiger Grunwald, Gero Stibenz, Michel Piché, Volker Kebbel, S. Langer, G. Steinmeyer, Uwe Neumann, and Uwe Griebner
- Subjects
Wavefront, Physics, Optical autocorrelation, Autocorrelation, Wavefront sensor, Deformable mirror, Optics, Angular resolution, Adaptive optics, Image resolution - Abstract
Recent progress in wavefront autocorrelation of ultrashort pulses is reported. The Shack-Hartmann wavefront sensor concept is extended to spatially resolved autocorrelation. Specific advantages and limitations of system design, dynamic range, and optical components are discussed.
- Published
- 2005
- Full Text
- View/download PDF
22. Estimating camera motion through a 3D cluttered scene
- Author
-
Michael S. Langer and Richard Mann
- Subjects
Estimation theory, Computer science, Optical flow, Observer (special relativity), Classification of discontinuities, Motion field, Match moving, Motion estimation, Structure from motion, Computer vision, Artificial intelligence - Abstract
Previous methods for estimating the motion of an observer through a static scene require that image velocities can be measured. For the case of motion through a cluttered 3D scene, however, measuring optical flow is problematic because of the high density of depth discontinuities. This paper introduces a method for estimating motion through a cluttered 3D scene that does not measure velocities at individual points. Instead, the method measures a distribution of velocities over local image regions. We show that motion through a cluttered scene produces a bowtie pattern in the power spectra of local image regions. We show how to estimate the parameters of the bowtie for different image regions and how to use these parameters to estimate observer motion. We demonstrate our method on synthetic and real data sequences. (A toy illustration of the bowtie pattern follows this entry.)
- Published
- 2004
- Full Text
- View/download PDF
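The bowtie signature described above can be reproduced with a toy sequence: two 1-D textures translating at different speeds are summed, and the power of their space-time Fourier transform concentrates along two lines through the origin, one per speed, which together span a bowtie-shaped wedge. The synthetic layers stand in for a cluttered 3D scene.

```python
# Space-time power spectrum of two translating layers: a bowtie pattern.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_t = 128, 64
texture = rng.random(4 * n_x)
frames = np.zeros((n_t, n_x))
for t in range(n_t):
    near = np.roll(texture, 3 * t)[:n_x]          # fast image motion (near layer)
    far = np.roll(texture[::-1], t)[:n_x]         # slow image motion (far layer)
    frames[t] = near + far

power = np.abs(np.fft.fftshift(np.fft.fft2(frames))) ** 2   # (temporal, spatial) frequencies
print(power.shape)
```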
23. Towards accurate recovery of shape from shading under diffuse lighting
- Author
-
Michael S. Langer and A.J. Stewart
- Subjects
Surface (mathematics), Computer science, Orientation (computer vision), Radiosity (computer graphics), Iterative reconstruction, Real image, Computer graphics, Photometric stereo, Radiance, Computer vision, Diffuse reflection, Artificial intelligence - Abstract
We present a surface radiance model for diffuse lighting that incorporates shadows, interreflections, and surface orientation. We show that, for smooth surfaces, the model is an excellent approximation of the radiosity equation. We present a new data structure and algorithm that uses this model to compute shape from shading under diffuse lighting. The algorithm was tested on both synthetic and real images, and performs more accurately than the only previous algorithm for this problem. Various causes of error are discussed, including approximation errors in image modelling, poor local constraints at the image boundary, and ill-conditioning of the problem itself. (A sketch of the sky-visibility idea behind such models follows this entry.)
- Published
- 1996
- Full Text
- View/download PDF
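A central ingredient of the model above is that, under diffuse lighting, a point's brightness depends strongly on how much of the sky it can see. The sketch below computes that visibility for a 1-D height field only, and ignores the surface-orientation and interreflection terms that the paper's model includes.

```python
# Shading under diffuse lighting approximated by sky visibility (1-D profile).
import numpy as np

def sky_visibility_shading(height, dx=1.0):
    n = len(height)
    xs = np.arange(n) * dx
    shading = np.zeros(n)
    for i in range(n):
        # highest elevation angle blocked by the terrain on each side
        left = [np.arctan2(height[j] - height[i], xs[i] - xs[j]) for j in range(i)]
        right = [np.arctan2(height[j] - height[i], xs[j] - xs[i]) for j in range(i + 1, n)]
        blocked = max(max(left, default=0.0), 0.0) + max(max(right, default=0.0), 0.0)
        shading[i] = (np.pi - blocked) / np.pi      # visible fraction of the sky
    return shading

valley = 1.0 - np.exp(-((np.arange(100) - 50) / 10.0) ** 2)  # a dip in the surface
print(np.round(sky_visibility_shading(valley)[::10], 2))     # darker inside the valley
```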
24. A ray-based computational model of light sources and illumination
- Author
-
Steven W. Zucker and Michael S. Langer
- Subjects
Physics, Global illumination, Function (mathematics), Image segmentation, Ray, Light scattering, Constraint (information theory), Optics, Radiance, Computer vision, Artificial intelligence, Light field - Abstract
Models of light sources and illumination are fundamental in physics-based vision. Light sources have traditionally been modelled as objects that emit light, while illumination has been modelled as a radiance function defined over free space. The difficulty with traditional light source models is that, while a diverse collection of models exists, each is so specific that relationships between them, and between the algorithms based on them, are unclear. At the other extreme, models of spatial illumination have been so general that they have not provided sufficient constraint for vision. We seek to unify these two extremes by developing a ray-based computational model of light sources and illumination. The model articulates strong constraints on the geometry and radiance of light rays in a scene, and expresses the relationship between free space, sources, and illumination. Applications of this model to problems in computer vision are discussed.
- Published
- 1995
- Full Text
- View/download PDF