25 results for "Chris Varekamp"
Search Results
2. Detection and correction of disparity estimation errors via supervised learning.
- Author
- Chris Varekamp, Karel Hinnen, and W. Simons
- Published
- 2013
3. Interactive Disparity Map Post-processing.
- Author
- Caizhang Lin, Chris Varekamp, Karel Hinnen, and Gerard de Haan
- Published
- 2012
4. Measurement of feet-ground interaction for real-time person localization, step characterization, and gaming.
- Author
- Chris Varekamp and Patrick Vandewalle
- Published
- 2010
5. Enabling Introduction of Stereoscopic (3D) Video: Formats and Compression Standards.
- Author
- Wilhelmus H. A. Brüls, Chris Varekamp, Rene Klein Gunnewiek, Bart Barenbrug, and Arnaud Bourge
- Published
- 2007
6. Unsupervised motion-based object segmentation refined by color.
- Author
- Matthijs C. Piek, Ralph Braspenning, and Chris Varekamp
- Published
- 2003
7. Philips 3D Solutions: From Content Creation to Visualization.
- Author
- André Redert, Robert-Paul Berretty, Chris Varekamp, Oscar Willemsen, Jos Swillens, and Hans Driessen
- Published
- 2006
8. Overview of efficient high-quality state-of-the-art depth enhancement methods by thorough design space exploration
- Author
- Luc P. J. Vosters, Chris Varekamp, and Gerard de Haan
- Subjects
Image plus depth, Temporal filter, Computational complexity theory, Computer science, Design space exploration, Parameter space search, Depth enhancement, Filter (signal processing), Benchmark (computing), Upsampling, Pattern recognition, Depth upsampling, Computer vision, Noise (video), Artificial intelligence, Throughput, Information Systems - Abstract
High-quality 3D content generation requires high-quality depth maps. In practice, depth maps generated by stereo-matching, depth sensing cameras, or decoders, have low resolution and suffer from unreliable estimates and noise. Therefore, depth enhancement is necessary. Depth enhancement comprises two stages: depth upsampling and temporal post-processing. In this paper, we extend our previous work on depth upsampling in two ways. First, we propose PWAS-MCM, a new depth upsampling method, and we show that it achieves on average the highest depth accuracy compared to other efficient state-of-the-art depth upsampling methods. Then, we benchmark all relevant state-of-the-art filter-based temporal post-processing methods on depth accuracy by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels. Next, we analyze the temporal post-processing methods qualitatively. Finally, we analyze the computational complexity of each depth upsampling and temporal post-processing method by measuring the throughput and hardware utilization of the GPU implementation that we built for each method.
- Published
- 2019
9. Evaluation of efficient high quality depth upsampling methods for 3DTV.
- Author
- Luc P. J. Vosters, Chris Varekamp, and Gerard de Haan
- Published
- 2013
10. Supervised disparity estimation.
- Author
- Patrick Vandewalle and Chris Varekamp
- Published
- 2012
11. Challenges in 3DTV image processing.
- Author
- André Redert, Robert-Paul Berretty, Chris Varekamp, Bart van Geest, Jan Bruijns, Ralph Braspenning, and Qingqing Wei
- Published
- 2007
12. Disparity map quality for image-based rendering based on multiple metrics
- Author
- Chris Varekamp and Patrick Vandewalle
- Subjects
Sequence, Ground truth, Range (mathematics), Noise measurement, Computer science, 2D to 3D conversion, Estimator, Computer vision, Artificial intelligence, Image-based modeling and rendering, Stereo display - Abstract
© 2014 IEEE. While a wide range of disparity estimation algorithms exists, only a few methods are available to analyze the quality of a disparity map. We present a new two-dimensional approach to disparity map evaluation using the stereo matching error and the temporal error. These measures give a good indication of stereo or multi-view performance on a 3D display. Results are illustrated on a ground truth disparity map with multiple distortions as well as on a commercially available disparity estimator using varying post-processing settings on a real stereoscopic video sequence. Published in the Proceedings of the 2014 International Conference on 3D Imaging (IC3D), Liège, Belgium, 9-10 December 2014.
- Published
- 2014
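The stereo matching error used in the abstract above can be approximated by warping one view with the disparity map and measuring the photometric difference against the other view. The following is a minimal illustrative sketch, not the paper's implementation; the nearest-neighbour warp and the mean-absolute-difference norm are assumptions:

```python
import numpy as np

def stereo_matching_error(left, right, disparity):
    """Warp the right view toward the left view using the (left-referenced)
    disparity map, then return the mean absolute intensity difference over
    pixels whose warped position falls inside the image."""
    h, w = left.shape
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    src = cols - np.round(disparity).astype(int)  # matching column in the right view
    valid = (src >= 0) & (src < w)
    warped = np.zeros_like(left)
    warped[valid] = right[rows[valid], src[valid]]
    return float(np.abs(left[valid] - warped[valid]).mean())

# Tiny synthetic check: a horizontal ramp shifted by a constant 2-pixel disparity.
left = np.tile(np.arange(8.0), (4, 1))
right = np.roll(left, -2, axis=1)
disp = np.full_like(left, 2.0)
err = stereo_matching_error(left, right, disp)  # 0.0 for a perfect match
```

A temporal error, the second axis of the paper's evaluation, could be measured analogously by comparing disparity values between consecutive frames along motion trajectories.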
13. Interactive disparity map post-processing
- Author
- Gerard de Haan, Caizhang Lin, Chris Varekamp, and Karel Hinnen
- Subjects
Computer science, Classification of discontinuities, Object (computer science), User interaction, User input, Data set, Computer graphics, Contour tracking, Disparity estimation, Image texture, Binocular disparity, Computer vision, Artificial intelligence - Abstract
Disparity estimation has been investigated for decades. Fully automatic methods have problems in texture-less regions, around object boundaries, and in occlusion regions. In this paper, we exploit user input to address these problematic areas interactively. By drawing contours and polygons, we achieve sharp disparity discontinuities and smooth disparity planes in the disparity maps. Annotations are tracked quite accurately over a number of frames. Experimental results on the Middlebury data set and our own stereo video suggest that the accuracy of disparity maps can be improved significantly with limited user input.
- Published
- 2012
14. Supervised disparity estimation
- Author
- Chris Varekamp and Patrick Vandewalle
- Subjects
Estimation, Diffusion, Computer science, Process (computing), Computer vision, Pattern recognition, Artificial intelligence - Abstract
We introduce supervised disparity estimation, in which an operator can steer the disparity estimation process. Instead of correcting errors, we view estimation as a constrained process where the constraints are indicated by the user in the form of control points, scribbles and contours. Control points are used to obtain accurate disparity estimates that can be fully controlled by the operator. Scribbles are used to force regions to have a smooth disparity, while contours create a disparity discontinuity in places where diffusion or energy minimization fails. Control points, scribbles and contours are propagated through the video sequence to create temporally stable results. © 2012 SPIE. Published in Proceedings of SPIE vol. 8288, Stereoscopic Displays and Applications XXIII, IS&T/SPIE Electronic Imaging, Burlingame, CA, 23-25 January 2012.
- Published
- 2012
15. Measurement of feet-ground interaction for real-time person localization, step characterization, and gaming
- Author
- Chris Varekamp and Patrick Vandewalle
- Subjects
Engineering, Computer graphics (images), Baseboard, Stereo pair, Object (computer science), Computer vision, Artificial intelligence - Abstract
We introduce a setup for detailed and unobtrusive feet mapping in a room. Two cameras arranged as a stereo pair detect when an object touches the ground by analyzing occlusions of a fluorescent tape attached to the baseboard of the room. The disparity between the two cameras allows localization of a person's feet and calculation of step size and walking speed. People are separated from furniture such as chairs and tables by studying the occlusion duration. We present and discuss the data-association and filtering algorithms needed for presence detection, step characterization and gaming. © ACM 2010. Published in the ACM International Conference Proceeding Series.
- Published
- 2011
16. Improving depth maps with limited user input
- Author
- Chris Varekamp, Patrick Vandewalle, and Rene Klein Gunnewiek
- Subjects
Opacity, Pixel, Computer science, Rendering (computer graphics), Display device, User assistance, Display size, Depth map, Computer graphics (images), Computer vision, Artificial intelligence - Abstract
A rapidly growing number of productions from the entertainment industry are aiming at 3D movie theaters. These productions use a two-view format, primarily intended for eye-wear assisted viewing in a well-defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g. different display sizes, display types, viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image plus depth format, in which a video frame is enriched with depth information for all pixels in the video frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data that is behind objects, and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, windows, etc., as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High-quality depth information can currently only be achieved with user assistance. In this paper, we discuss an interactive method for depth map enhancement that allows adjustments during the propagation over time. Furthermore, we elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool. © 2010 SPIE. Published in Proceedings of SPIE vol. 7524, Stereoscopic Displays and Applications XXI, San Jose, CA, 18-20 January 2010.
- Published
- 2010
17. Question interface for 3D picture creation on an autostereoscopic digital picture frame
- Author
- Patrick Vandewalle, Chris Varekamp, and Marc de Putter
- Subjects
Computer science, Interface, Digital photo frame, Frame, Object (computer science), Stereo display, Depth map, Autostereoscopy, Computer graphics (images), Computer vision, Artificial intelligence, User interface - Abstract
We propose an interface for creating a depth map for a 2D picture. The image and depth map can be used for 3D display on an auto-stereoscopic photo frame. Our new interface does not require the user to draw on the picture or point at an object in the picture. Instead, semantic questions are asked about a given indicated position in the picture. This semantic information is then translated automatically into a depth map. © 2009 IEEE. Published in the Proceedings of 3DTV-CON 2009: The True Vision - Capture, Transmission and Display of 3D Video, Potsdam, Germany, 4-6 May 2009.
- Published
- 2009
18. Challenges in 3DTV image processing
- Author
- Jan Bruijns, Chris Varekamp, Bart van Geest, Qingqing Wei, Ralph Braspenning, André Redert, and Robert-Paul Berretty
- Subjects
Multimedia, Computer science, Autostereoscopy, Digital image processing, Bandwidth (computing), 2D to 3D conversion, Image processing, Stereo display, Image conversion - Abstract
Philips provides autostereoscopic three-dimensional display systems that will bring the next leap in visual experience, adding true depth to video systems. We identify three challenges specific to 3D image processing: 1) bandwidth and complexity of 3D images, 2) conversion of 2D to 3D content, and 3) object-based image/depth processing. We discuss these challenges and our solutions via several examples. In conclusion, the solutions have enabled the market introduction of several professional 3D products, and progress is made rapidly towards consumer 3DTV.
- Published
- 2007
19. Philips 3D Solutions: From Content Creation to Visualization
- Author
- H. Driessen, R.-P. Berretty, J. Swillens, O. Willemsen, Chris Varekamp, and André Redert
- Subjects
Multimedia, Computer science, 2D to 3D conversion, Content creation, Video processing, Digital signage, Viewing angle, Stereo display, Backward compatibility, Visualization - Abstract
Philips is realizing an end-to-end 3D display solution from 3D content creation to visualization. This development fits in our long-standing tradition of combining expertise in video processing with our strength in display development to create the most exciting and best viewing experience. Philips developed several high-quality 3D displays, ranging in resolution, viewing angle, depth experience, and sizes from 4" to 40" and up. Backwards compatibility with 2D content is enabled via signal processing or opto-electronic 3D & 2D dual mode displays. Content creation and conversion methods are provided, which are a key factor for the success of 3D displays. Fully automatic conversion from monoscopic 2D content into 3D enables the re-use of all existing 2D video material. Further methods enable 3D animation/design, 2D to 3D conversion in post-production and live capture of new 3D content. Our efforts in MPEG standardization towards the "2D-plus-depth" format for 3D video enable a flexible interface between the variety in 3D content creation methods and the range in 3D displays. Furthermore, the 3D format is compatible with existing 2D content, standards and infrastructure. Currently, Philips offers several commercial 3D products for professional use such as digital signage, and progress is being made towards consumer products such as 3DTV.
- Published
- 2006
20. Stochastic modelling of migration from polyolefins
- Author
- Erika Helmroth, Matthijs Dekker, and Chris Varekamp
- Subjects
Stochastic modelling, Additive migration, Diffusion, Standard deviation, Statistical physics, Probability distribution, Polyolefins, Polyethylene, Polymeric packaging material, Packaging, Migration, Mathematics, Nutrition and Dietetics, Agronomy and Crop Science, Food Science, Biotechnology - Abstract
A method is presented to predict diffusion coefficients in polyolefins using stochastic modelling. A large number of experimental diffusion coefficients, published in the literature as one dataset, was used to derive probability distributions of diffusion coefficients in the polymers low-density polyethylene and linear low-density polyethylene, medium- and high-density polyethylene, and polypropylene. An equation is proposed to describe the diffusion coefficient as a function of the molar mass of the migrant. Model parameters and standard deviations are predicted by minimizing the sum of squared errors and the residuals are used to check the assumed types of probability distribution. The experimental data can be described by a log-normal distribution. It is shown how the derived probability distributions can be used as input for migration predictions. The method presented provides information about the most likely migration results for a given packaging-food simulant combination. This is important for prediction of the probability that a given migration limit may be exceeded. © 2005 Society of Chemical Industry.
- Published
- 2005
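The modelling pipeline described in the abstract above (fit a log-normal distribution to observed diffusion coefficients, then propagate it through to a migration prediction and an exceedance probability) can be sketched as follows. The data values, the sqrt(D*t) migration proxy, and the threshold are all illustrative assumptions, not the paper's dataset or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diffusion coefficients (cm^2/s); illustrative stand-ins for
# the literature dataset used in the paper.
d_obs = np.array([2e-9, 5e-9, 1e-8, 3e-8, 8e-9, 4e-8, 6e-9, 2e-8])

# Fit a log-normal distribution: model log D as normal(mu, sigma).
log_d = np.log(d_obs)
mu, sigma = log_d.mean(), log_d.std(ddof=1)

# Monte Carlo propagation: sample diffusion coefficients from the fitted
# distribution and push each through a simple sqrt(D * t) migration proxy
# (an illustrative short-time diffusion scaling, not the paper's equation).
t = 10 * 24 * 3600.0                       # 10 days of contact, in seconds
samples = rng.lognormal(mean=mu, sigma=sigma, size=100_000)
migration = np.sqrt(samples * t)

# Probability that a (placeholder) migration limit is exceeded.
limit = float(np.percentile(migration, 95.0))
p_exceed = float((migration > limit).mean())  # ~0.05 by construction
```

In practice the limit would be a regulatory migration limit for the specific packaging-food simulant combination, and the residuals of the fit would be checked against the assumed distribution type, as the abstract describes.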
21. Unsupervised motion-based object segmentation refined by color
- Author
- Chris Varekamp, Matthijs C. Piek, and Ralph Braspenning
- Subjects
Segmentation-based object categorization, Scale-space segmentation, Image segmentation, Minimum spanning tree-based segmentation, Region growing, Motion estimation, Segmentation, Range segmentation, Computer vision, Artificial intelligence, Mathematics - Abstract
For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem, but this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and from the background. However, because efficient motion estimators such as the 3DRS block matcher lack sufficient resolution, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less closely to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near the edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance that a block is unique and thus decrease the chance of a wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation; they use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods adopt this approach. One example is \cite{meshrefine}, which uses an irregular mesh that hinders efficient implementation in consumer electronics devices; furthermore, it produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.

NEW METHOD
As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel-resolution colour segmentation method. There are several reasons for this approach:
+ Motion segmentation does not produce the oversegmentation that colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. The colour segmentation then only has to be done at the edges of segments, confining it to a smaller part of the image, where the colour of an object is more likely to be homogeneous.
+ This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
+ The motion cue alone is often enough to reliably distinguish objects from one another and from the background.
To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors that closely resemble the true motion.

BLOCK-BASED MOTION SEGMENTATION
We start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known $K$-means segmentation method \cite{K-means}. Several other methods (e.g. \cite{kmeansc}) adapt $K$-means for connectedness by adding a weighted shape error, which introduces the additional difficulty of finding the correct weights for the shape parameters; such methods also often bias one particular pre-defined shape. The presented method, which we call $K$-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation to such a degree that it allows the method to use least squares for the robust fitting of an affine motion model for each segment. Contrary to \cite{parmkm}, the segmentation step still operates on vectors instead of model parameters. To make the segmentation temporally consistent, the segmentation of the previous frame is used as initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments.

COLOUR-BASED INTRA-BLOCK SEGMENTATION
The block-resolution motion-based segmentation forms the starting point for the pixel-resolution segmentation, which is obtained by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. This assumption allows us to perform the pixel-resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas, and because no new segments are introduced in this step, it does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel-resolution segmentation itself, we reclassify pixels such that we optimise an error norm which favours similarly coloured regions and straight edges.

SEGMENTATION MEASURE
To assist in the evaluation of the proposed algorithm, we developed a quality metric. Because the problem does not have an exact specification, we define a ground-truth output which we find desirable for a given input, and we measure segmentation quality as how different the segmentation is from this ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately, and to identify which parts of a frame suffer from either. The proposed algorithm has been tested on several typical sequences.

CONCLUSIONS
In this abstract we presented a new video segmentation method which performs well in segmenting multiple independently moving foreground objects from each other and from the background. It combines the strong points of both colour and motion segmentation in the way we expected. One weak point is that the method suffers from undersegmentation when adjacent objects display similar motion; in sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, in particular the $K$-regions technique, may be useful for other two-dimensional data segmentation problems. © (2003) COPYRIGHT SPIE, The International Society for Optical Engineering.
- Published
- 2003
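The block-level motion clustering described in the abstract above can be illustrated with plain K-means over per-block motion vectors. This is a simplified stand-in, not the paper's $K$-regions method, which additionally enforces connectedness and fits affine motion models per segment; the function name, initialisation scheme, and synthetic field are assumptions:

```python
import numpy as np

def kmeans_motion_segmentation(vectors, k=2, iters=20):
    """Cluster per-block motion vectors (shape H x W x 2) into k motion
    segments with plain K-means. The paper's K-regions method additionally
    enforces connectedness and fits an affine motion model per segment."""
    h, w, _ = vectors.shape
    pts = vectors.reshape(-1, 2).astype(float)
    # Deterministic initialisation: k evenly spaced blocks as seed centers.
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        dist = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(h, w)

# Synthetic vector field: the left half is static, the right half pans right.
field = np.zeros((6, 8, 2))
field[:, 4:, 0] = 4.0
seg = kmeans_motion_segmentation(field, k=2)
```

On this noise-free field the two motion segments separate cleanly; the colour-based intra-block refinement step would then sharpen the segment boundary to pixel resolution.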
22. High-Resolution InSAR Image Simulation for Forest Canopies
- Author
- D.H. Hoekman and Chris Varekamp
- Subjects
Synthetic aperture radar, Synthetic aperture radar (SAR) interferometry, Interferometric synthetic aperture radar, Indonesian Radar Experiment (INDREX) campaign, Attenuation, Autocorrelation, Interferometry, Tropical forest, Histogram, Radar imaging, Radar, Image simulation, Airborne radar, Remote sensing, General Earth and Planetary Sciences, Electrical and Electronic Engineering - Abstract
High-resolution interferometric airborne synthetic aperture radar (SAR) images of Indonesian tropical rain forests have been acquired during the European Space Agency (ESA) Indonesian Radar Experiment (INDREX) 1996 campaign. Research efforts are directed toward development of automated canopy reconstruction algorithms. In this paper, interferometric synthetic aperture radar (InSAR) image simulation is discussed as one of the tools to support development of such inversion algorithms. First, the relevant physics, observation geometry, and radar characteristics are described. It is assumed that a forest can be modeled as a cloud of uniformly distributed isotropically scattering elements located within crown volumes. These volumes were measured during a field experiment for a 7.2 ha plot. Simulated data comprise intensity, phase, and coherence images. These are compared, in a statistical sense, with real data acquired in C- and X-bands. The canopy attenuation was simulated over a range of values. The normalized second intensity moment, the mean coherence magnitude, the coherence histogram, and the autocorrelation function of coherence were taken as measures for comparison. It can be concluded that simulated and real C-band images compare well for an extinction coefficient in the range of 0.15-0.3 m^-1. For X-band, the selected measures of agreement lead to contradictions, indicating that the physical assumptions made may be less valid than for C-band.
- Published
- 2002
23. Segmentation of High-resolution InSAR Data of Tropical Forest using Fourier Parameterised Deformable Models
- Author
- Chris Varekamp and Dirk Hoekman
- Subjects
Synthetic aperture radar, Interferometric synthetic aperture radar, Computer science, Line integral, Boundary (topology), Tree, Projection, Algorithm, Smoothing, Image gradient, Remote sensing, General Earth and Planetary Sciences - Abstract
Currently, tree maps are produced from field measurements that are time-consuming and expensive. Application of existing techniques based on aerial photography is often hindered by cloud cover. This has initiated research into the segmentation of high-resolution airborne interferometric Synthetic Aperture Radar (SAR) data for deriving tree maps. A robust algorithm is constructed to optimally position closed boundaries. The boundary of a tree crown is best approximated when, at all points on the boundary, the z-coordinate image gradient is maximal and directed inwards, orthogonal to the boundary. This property can be expressed as a line integral along the boundary, and boundaries with a large value for the line integral are likely to be tree crowns. This paper focuses on the search procedure and on illustrating how smoothing can be used to prevent the search from becoming trapped in a local optimum. The final crown detection stage is not described in this paper, but could be based on the gradient and implemented using the above value for the line integral. Results indicate that a Fourier parametrization with only three harmonics (nine parameters) can describe the shape variation in the 2D crown projection in sufficient detail. Current ground datasets are not suitable for obtaining detection statistics such as the percentage of tree crowns detected and the number of false alarms; better ground datasets will be needed to evaluate algorithm performance for real tree-mapping situations.
- Published
- 2000
24. Observation of tropical rain forest trees by airborne high-resolution interferometric radar
- Author
- Chris Varekamp and Dirk Hoekman
- Subjects
Synthetic aperture radar, Backscatter, Meteorology, Tropical rain forests, Aerial photography, Radar imaging, Van Cittert–Zernike theorem, Radar, Interferometry, Coherence (physics), Remote sensing, Geology, General Earth and Planetary Sciences, Electrical and Electronic Engineering - Abstract
The Indonesian Radar Experiment (INDREX) Campaign was executed in Indonesia to study the potential of high-resolution interferometric airborne radar in support of sustainable tropical forest management. Severe cloud cover limits the use of aerial photography, which is currently applied on a routine basis to extract information at the tree level. Interferometric radar images may be a viable alternative once radar imaging at the tree level is sufficiently understood. It is shown that interferometric height images can contain large height and displacement errors for individual trees, but that this problem can be solved to a large extent using models for the vertical distribution of backscatter intensity and an extension of the Van Cittert-Zernike theorem. The predicted loss of coherence in lay-over regions of emergent trees is shown to be in good agreement with the loss of coherence as observed in the high-resolution radar data (Pearson correlation coefficient = 0.94). Several correction methods for height and displacement errors are proposed. It is shown that a simple approach already gives a good correction. Semi-empirical correction models, which can be calibrated for forest structure, perform even better.
- Published
- 2000
25. Evaluation of efficient high quality depth upsampling methods for 3DTV
- Author
- L. P. J. Vosters, Chris Varekamp, and G. de Haan
- Subjects
Upsampling, Noise, Computer engineering, Computer science, Benchmark (computing), Computer vision, Filter (signal processing), Artificial intelligence, Image-based modeling and rendering, Interpolation - Abstract
High-quality 3D content generation requires high-quality depth maps. In practice, depth maps generated by stereo matching, depth-sensing cameras, or decoders have a low resolution and suffer from unreliable estimates and noise. Therefore, depth post-processing is necessary. In this paper we benchmark state-of-the-art filter-based depth upsampling methods on depth accuracy and interpolation quality by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels. Additionally, we analyze each method's computational complexity with big O notation, and we measure the runtime of the GPU implementation that we built for each method. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
- Published
- 2013
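As one concrete instance of the filter-based depth upsampling family benchmarked in the abstract above, the sketch below implements generic joint bilateral upsampling: the low-resolution depth is interpolated with spatial weights, while a high-resolution guide image supplies range weights that keep depth edges aligned with image edges. The parameter names and values are assumptions, and this is not necessarily the exact variant evaluated in the paper:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, factor, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-resolution depth map using a high-resolution grayscale
    guide image (values in [0, 1]). Classic joint bilateral upsampling:
    spatial weights in low-res coordinates, range weights from the guide."""
    H, W = guide_hr.shape
    h_lr, w_lr = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / factor, x / factor      # position in the low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = int(round(yl)) + dy
                    xx = int(round(xl)) + dx
                    if 0 <= yy < h_lr and 0 <= xx < w_lr:
                        # Spatial weight, measured in low-res coordinates.
                        ws = np.exp(-((yy - yl) ** 2 + (xx - xl) ** 2)
                                    / (2 * sigma_s ** 2))
                        # Range weight from the high-res guide image.
                        gy = min(int(yy * factor), H - 1)
                        gx = min(int(xx * factor), W - 1)
                        wr = np.exp(-(guide_hr[y, x] - guide_hr[gy, gx]) ** 2
                                    / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[yy, xx]
                        den += ws * wr
            out[y, x] = num / den if den > 0 else depth_lr[int(yl), int(xl)]
    return out

# Tiny example: an 8x8 guide with a vertical edge and a 4x4 depth map whose
# depth step coincides with that edge.
guide = np.zeros((8, 8)); guide[:, 4:] = 1.0
depth_lr = np.zeros((4, 4)); depth_lr[:, 2:] = 10.0
up = joint_bilateral_upsample(depth_lr, guide, factor=2)
```

Because the range weight suppresses contributions from across the guide edge, the upsampled depth stays sharp at the edge instead of blurring it, which is the key property such filters are benchmarked on.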