250 results on '"dynamic scene"'
Search Results
202. Dynamosaicing: Mosaicing of Dynamic Scenes.
- Author
-
Rav-Acha, Alex, Pritch, Yael, Lischinski, Dani, and Peleg, Shmuel
- Subjects
- *
IMAGE processing , *VIDEO editing , *MOTION picture editing , *CINEMATOGRAPHY , *TIME series analysis , *VIDEOS , *PANORAMIC cameras , *PANORAMIC photography - Abstract
This paper explores the manipulation of time in video editing, which allows us to control the chronological time of events. These time manipulations include slowing down (or postponing) some dynamic events while speeding up (or advancing) others. When a video camera scans a scene, aligning all the events to a single time interval will result in a panoramic movie. Time manipulations are obtained by first constructing an aligned space-time volume from the input video, and then sweeping a continuous 2D slice (time front) through that volume, generating a new sequence of images. For dynamic scenes, aligning the input video frames poses an important challenge. We propose to align dynamic scenes using a new notion of ‘dynamics constancy,’ which is more appropriate for this task than the traditional assumption of ‘brightness constancy.’ Another challenge is to avoid visual seams inside moving objects and other visual artifacts resulting from sweeping the space-time volumes with time fronts of arbitrary geometry. To avoid such artifacts, we formulate the problem of finding optimal time front geometry as one of finding a minimal cut in a 4D graph, and solve it using max-flow methods. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
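As an illustrative aside to record 202: the seam placement there is posed as a minimal cut in a 4D graph solved with max-flow. The toy sketch below shows the same mechanism on a 2D cost grid with networkx; the grid, capacities, and graph layout are assumptions for illustration, not the paper's actual 4D construction.
```python
# Toy illustration of choosing a seam as a minimum s-t cut, in the spirit of
# graph-cut time-front selection (record 202). Not the paper's 4D construction.
import networkx as nx
import numpy as np

cost = np.random.rand(4, 5)          # hypothetical per-pixel "seam badness"
G = nx.DiGraph()
rows, cols = cost.shape

for r in range(rows):
    G.add_edge('s', (r, 0), capacity=float('inf'))          # source on left column
    G.add_edge((r, cols - 1), 't', capacity=float('inf'))   # sink on right column
    for c in range(cols - 1):
        # horizontal edges carry the cost of cutting between columns c and c+1
        G.add_edge((r, c), (r, c + 1), capacity=float(cost[r, c] + cost[r, c + 1]))

cut_value, (reachable, non_reachable) = nx.minimum_cut(G, 's', 't')
print('cut cost:', cut_value)
print('pixels on the source side:', sorted(n for n in reachable if n != 's'))
```
A real time-front cut would also add coherence edges between rows (and along time), so that the selected cut forms a smooth surface rather than one independent edge per row.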
203. Motion segmentation of multiple translating objects from line correspondences
- Author
-
Shi, Fanhuai, Wang, Jianhua, Zhang, Jing, and Liu, Yuncai
- Subjects
- *
PATTERN perception , *COMPUTER vision , *CELLULAR automata , *ALGORITHMS - Abstract
Abstract: This paper presents an analytic approach to motion segmentation of multiple translating objects from line correspondences in three perspective views. The basic idea of our algorithm is to view the estimation of multiple translational motions as the estimation of a single, though more complex, multibody motion model that is then factored into the original models by polynomial differentiation. Experimental results on synthetic and real scenes are presented. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
204. The geometry of dynamic scenes—On coplanar and convergent linear motions embedded in 3D static scenes.
- Author
-
Bartoli, Adrien
- Subjects
STATICS ,DYNAMICS ,CAMERAS ,PHYSICS - Abstract
Abstract: In this paper, we consider structure and motion recovery for scenes consisting of static and dynamic features. More particularly, we consider a single moving uncalibrated camera observing a scene consisting of points moving along straight lines converging to a unique point and lying on a motion plane. This scenario may describe a roadway observed by a moving camera whose motion is unknown. We show that there exist matching tensors similar to fundamental matrices. We derive the link between dynamic and static structure and motion and show how the equation of the motion plane (or equivalently the plane homographies it induces between images) may be recovered from dynamic features only. Experimental results on real images are provided, in particular on a 60-frame video sequence. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
205. Rain Streaks Elimination Using Image Processing Algorithms
- Author
-
Dinesh Kadam, S. V. Bonde, Krishnan Kutty, and Amol R. Madane
- Subjects
Computer science, Edge filters, Image processing and computer vision, Computers in other systems, Rain streaks removal, Gaussian Mixture Model (GMM), Digital image processing, Computer vision, Artificial intelligence, Dynamic scene, Computer graphics - Abstract
The paper addresses the problem of rain streak removal from videos. Although removing rain streaks from a scene is important, and despite considerable research in this area, robust real-time algorithms are not yet available in the market. Difficulties arise from reduced visibility, low illumination, and the presence of moving cameras and objects. The central challenge is detecting rain streaks and replacing them with values that recover the original scene. In this paper, we discuss the use of photometric and chromatic properties for rain detection, while an updated Gaussian Mixture Model (GMM) detects moving objects. The algorithm detects rain streaks in videos and replaces them with estimated values close to the original ones, using the spatial and temporal properties of the scene.
- Published
- 2019
- Full Text
- View/download PDF
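As an illustrative aside to record 205: a minimal sketch of the kind of pipeline the abstract describes, pairing OpenCV's Gaussian-mixture background subtractor with a temporal median fill. The detection rule and thresholds are assumptions, not the authors' exact photometric and chromatic criteria.
```python
# Sketch: per-frame rain-streak cleanup (record 205). A GMM background model
# flags genuinely moving objects; bright transient pixels outside that mask are
# treated as rain and filled from a short temporal median.
import cv2
import numpy as np

cap = cv2.VideoCapture('rainy_video.mp4')                 # hypothetical input
bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
recent = []                                               # small frame buffer

while True:
    ok, frame = cap.read()
    if not ok:
        break
    recent.append(frame)
    recent = recent[-5:]                                  # keep the last 5 frames
    moving = bg.apply(frame)                              # 255 where objects move
    median = np.median(np.stack(recent), axis=0).astype(np.uint8)
    gain = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(int) - \
           cv2.cvtColor(median, cv2.COLOR_BGR2GRAY).astype(int)
    rain = (gain > 20) & (moving == 0)                    # bright, but not an object
    restored = frame.copy()
    restored[rain] = median[rain]                         # fill rain pixels temporally
```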
206. A novel approach for moving object segmentation used in dynamic scene.
- Author
-
Rong, Qin, Xiaoyan, Zhang, Zhiqiang, Ma, and Ruixin, Li
- Abstract
A novel video moving object segmentation algorithm based on global motion compensation and non-parametric kernel density estimation is proposed in this paper. First, an efficient and accurate global motion compensation method is used to remove the motion of the background. Then non-parametric kernel density estimation is applied to establish foreground/background probability models. Lastly, the moving object is obtained by comparing the foreground/background probabilities and applying morphological post-processing. Experimental results demonstrate that the proposed algorithm performs well and reduces the complexity of moving object segmentation in dynamic scenes. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
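As an illustrative aside to record 206: a minimal sketch of the global motion compensation stage, with a plain frame difference standing in for the paper's non-parametric kernel density models; the feature pipeline and thresholds are assumptions.
```python
# Sketch: compensate global (camera) motion with a homography estimated from
# ORB matches, then score residual motion by differencing (record 206). A real
# implementation would replace the plain difference with kernel density models.
import cv2
import numpy as np

def foreground_after_compensation(prev_bgr, curr_bgr):
    g1 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # global motion model
    h, w = g2.shape
    warped_prev = cv2.warpPerspective(g1, H, (w, h))        # align the background
    diff = cv2.absdiff(g2, warped_prev)                     # residual (object) motion
    _, fg = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# usage with hypothetical frames: mask = foreground_after_compensation(frame_t0, frame_t1)
```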
207. Not all frames are equal: aggregating salient features for dynamic texture classification
- Author
-
Hong, Sungeun, Ryu, Jongbin, and Yang, Hyun S.
- Published
- 2016
- Full Text
- View/download PDF
208. Change blindness in a dynamic scene due to endogenous override of exogenous attentional cues.
- Author
-
Smith, Tim J, Lamont, Peter, and Henderson, John M
- Abstract
Change blindness is a failure to detect changes if the change occurs during a mask or distraction. Without distraction, it is assumed that the visual transients associated with the change will automatically capture attention (exogenous control), leading to detection. However, visual transients are a defining feature of naturalistic dynamic scenes. Are artificial distractions needed to hide changes to a dynamic scene? Do the temporal demands of the scene instead lead to greater endogenous control that may result in viewers missing a change in plain sight? In the present study we pitted endogenous and exogenous factors against each other during a card trick. Complete change blindness was demonstrated even when a salient highlight was inserted coincident with the change. These results indicate strong endogenous control of attention during dynamic scene viewing and its ability to override exogenous influences even when it is to the detriment of accurate scene representation. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
209. Novel Svdd-Based Algorithm For Moving Object Detecting And Tracking Under Dynamic Scenes
- Author
-
Yongqing Wang, Chunxiang Wang, and Dongfang Xu
- Subjects
dynamic scene, object detecting and tracking, moving object, Object (computer science), Tracking, Machine vision, Control and Systems Engineering, support vector data description, Computer vision, Artificial intelligence, Electrical and Electronic Engineering - Abstract
Object detection and tracking is an important technique in diverse machine-vision applications and has made great progress with the spread of artificial intelligence technology; detecting and tracking moving objects under dynamic scenes is especially challenging because of high requirements on real-time performance and reliability. Essentially, object detection and tracking must classify objects and background into two categories according to their features, where the drift caused by a noisy background can be handled effectively by a robust maximum-margin classifier such as a one-class SVM. However, the time and space complexities of traditional one-class SVM methods tend to be high, which limits their wide application. Inspired by Support Vector Data Description (SVDD), in this paper we present a novel SVDD-based algorithm to efficiently detect and track moving objects under dynamic scenes. Experimental results on synthetic data, benchmark data, and real-world videos demonstrate the competitive performance of the proposed method.
- Published
- 2016
- Full Text
- View/download PDF
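As an illustrative aside to record 209: SVDD with an RBF kernel is closely related to the one-class SVM in scikit-learn, so the classification step can be sketched as below; the features, nu, and gamma values are placeholder assumptions.
```python
# Sketch: model "background" appearance with a one-class boundary (an RBF
# one-class SVM, closely related to SVDD) and flag outliers as the moving
# object (record 209). Feature choice and parameters are illustrative only.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
background = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # training feature vectors
test = np.vstack([rng.normal(0, 1, (10, 3)),                 # background-like samples
                  rng.normal(6, 1, (10, 3))])                # object-like outliers

model = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale').fit(background)
labels = model.predict(test)          # +1 = inside boundary, -1 = outlier
print('flagged as moving object:', np.where(labels == -1)[0])
```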
210. Local Stereo Matching Based on Support Weight With Motion Flow for Dynamic Scene
- Author
-
Wei Wei, Huanling Wang, Zhiyong Ding, Houbing Song, Zhihan Lv, and Jiachen Yang
- Subjects
General Computer Science, Image processing and computer vision, Field (computer science), Depth map, Structure from motion, General Materials Science, Computer vision, Computer graphics, Mathematics, dynamic scene, Stereo cameras, General Engineering, motion flow, Motion control, Stereo matching, Stereopsis, disparity, Filter (video), Artificial intelligence, support weight, Computer stereo vision - Abstract
Stereo matching is one of the most important and challenging subjects in the field of stereo vision. The disparity obtained by stereo matching largely represents depth information in the 3-D world and is of great importance in the stereo field. In general, stereo-matching methods primarily emphasize static images. However, the information provided by a dynamic scene, such as a video sequence, can be exploited fully and effectively to improve stereo-matching results. In this paper, we propose a dynamic-scene-based local stereo-matching algorithm that integrates a cost filter with the motion flow of dynamic video sequences. In contrast to existing local approaches, our algorithm puts forward a new computing model that fully considers motion information in dynamic video sequences and adds motion flow to calculate suitable support weights for accurately estimating disparity. The algorithm behaves as an edge-preserving smoothing operator and shows improved behavior near moving edges. Experimental results show that the proposed method achieves a better depth map and outperforms other local stereo-matching methods in disparity evaluation.
- Published
- 2016
- Full Text
- View/download PDF
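As an illustrative aside to record 210: the key quantity is a per-pixel support weight mixing color similarity, spatial proximity, and motion-flow similarity. The sketch computes only that weight for one reference pixel; the bandwidths and the flow source are assumptions, and a full matcher would aggregate matching costs with these weights over candidate disparities.
```python
# Sketch: adaptive support weights combining color, spatial, and motion-flow
# cues for one reference pixel (record 210). Bandwidths are illustrative, and
# the window is assumed to lie fully inside the image.
import numpy as np

def support_weights(color, flow, cy, cx, radius=7,
                    gamma_c=10.0, gamma_s=9.0, gamma_m=2.0):
    ys = np.arange(cy - radius, cy + radius + 1)
    xs = np.arange(cx - radius, cx + radius + 1)
    win_c = color[np.ix_(ys, xs)].astype(float)              # (2r+1, 2r+1, 3) color patch
    win_f = flow[np.ix_(ys, xs)].astype(float)                # (2r+1, 2r+1, 2) flow patch
    dc = np.linalg.norm(win_c - color[cy, cx], axis=2)        # color difference
    dm = np.linalg.norm(win_f - flow[cy, cx], axis=2)         # motion-flow difference
    yy, xx = np.meshgrid(ys - cy, xs - cx, indexing='ij')
    ds = np.hypot(yy, xx)                                     # spatial distance
    return np.exp(-dc / gamma_c - ds / gamma_s - dm / gamma_m)

# usage (hypothetical inputs): w = support_weights(frame_rgb, optical_flow, 100, 120)
# matching costs in the window would then be aggregated as (w * cost).sum() / w.sum()
```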
211. Real-time 3D reconstruction techniques applied in dynamic scenes: A systematic literature review.
- Author
-
Ingale, Anupama K. and J., Divya Udayan
- Subjects
TECHNOLOGICAL progress ,TELEPORTATION ,MOTION capture (Human mechanics) - Abstract
Recent developments in capture devices such as the Kinect and the Intel RealSense camera have impelled research in 3D reconstruction, especially for dynamic scenes; performance in terms of both reconstruction quality and speed has increased, supporting applications such as teleportation, gaming, free-viewpoint video, and CG films. This paper provides a systematic literature review of 3D reconstruction techniques applied to dynamic scenes. The objective is to document the technical progress in 3D reconstruction techniques for dynamic scenes and to identify the research gaps in this field. The review focuses on the current state of the art in real-time 3D reconstruction of non-rigid objects, articulated motion, and human performance. We further discuss the limitations of current methods and emphasize promising technologies for future development. Five databases were searched for 3D reconstruction techniques for dynamic scenes. Reconstruction of a dynamic scene can be categorized as rigid or non-rigid object reconstruction depending on the object being reconstructed, so both categories were included in the search; we concentrate on dynamic scenes in which the object moves while the camera is static. 281 papers were initially retrieved, 100 were selected after abstract screening, and 46 were selected after detailed study for the systematic literature review and are presented in the table. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
212. Eye Movements during Dynamic Scene Viewing are Affected by Visual Attention Skills and Events of the Scene: Evidence from First-Person Shooter Gameplay Videos.
- Author
-
Holm SK, Häikiö T, Olli K, and Kaakinen JK
- Abstract
The role of individual differences during dynamic scene viewing was explored. Participants (N=38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in visual attention tasks were associated with eye movement patterns observed during viewing of the gameplay video. The differences were noted in four eye movement measures: number of fixations, fixation durations, saccade amplitudes and fixation distances from the center of the screen. The individual differences showed during specific events of the video as well as during the video as a whole. The results highlight that an unedited, fast-paced and cluttered dynamic scene can bring about individual differences in dynamic scene viewing., Competing Interests: This study has been evaluated by the Ethics Committee for Human Sciences at the University of Turku. According to the review statement, it has been considered to pose no major harm for participants. The authors declare no conflicts of interest.
- Published
- 2021
- Full Text
- View/download PDF
213. Change blindness in a dynamic scene due to endogenous override of exogenous attentional cues.
- Author
-
Smith, Tim J, Lamont, Peter, and Henderson, John M
- Abstract
Change blindness is a failure to detect changes if the change occurs during a mask or distraction. Without distraction, it is assumed that the visual transients associated with the change will automatically capture attention (exogenous control), leading to detection. However, visual transients are a defining feature of naturalistic dynamic scenes. Are artificial distractions needed to hide changes to a dynamic scene? Do the temporal demands of the scene instead lead to greater endogenous control that may result in viewers missing a change in plain sight? In the present study we pitted endogenous and exogenous factors against each other during a card trick. Complete change blindness was demonstrated even when a salient highlight was inserted coincident with the change. These results indicate strong endogenous control of attention during dynamic scene viewing and its ability to override exogenous influences even when it is to the detriment of accurate scene representation. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
214. Moving Objects Detection and Tracking in Dynamic Scene.
- Author
-
SHI Jia-dong and WANG Jian-zhong
- Subjects
SURVEILLANCE detection ,ALGORITHMS ,KALMAN filtering ,TRACKING radar ,DYNAMICS ,BLOCKING (Meteorology) - Abstract
A visual surveillance system is presented based on the integration of motion detection and visual tracking in static and dynamic scene image sequences. A four-parameter model is established for the global motion, and the model parameters are estimated by block matching. Moving-object regions are then detected by the Horn-Schunck algorithm after the global motion vectors have been compensated. The center of mass, width, and height of the moving objects are tracked by a Kalman filter. Experimental results show that this method is effective for moving object detection and tracking in static and dynamic scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2009
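As an illustrative aside to record 214: a minimal constant-velocity Kalman filter over an object's (x, y) center, of the kind used there to track the center of mass; the state layout and noise covariances are assumptions.
```python
# Sketch: constant-velocity Kalman filter for a tracked object's center
# (record 214). State is [x, y, vx, vy]; covariances are illustrative.
import numpy as np

class CenterKalman:
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)          # state
        self.P = np.eye(4) * 10.0                                    # state covariance
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float) # motion model
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float) # we observe (x, y)
        self.Q = np.eye(4) * 0.01                                    # process noise
        self.R = np.eye(2) * 1.0                                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.x                                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# usage with hypothetical detections:
# kf = CenterKalman(120, 80); kf.predict(); kf.update(123, 82)
```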
215. Nuclear radiation detection based on uncovered CMOS camera under dynamic scene.
- Author
-
Yan, Zhangfa, Wei, Qingyang, Huang, Gangqin, Hu, Yulin, Zhang, Zhaohui, and Dai, Tiantian
- Subjects
- *
CAMERAS , *COMPLEMENTARY metal oxide semiconductors , *RADIATION , *RADIATION dosimetry , *VIDEO surveillance - Abstract
Nuclear technology is being promoted and applied worldwide but poses a certain threat to public safety, thus necessitating nuclear radiation detection. In addition to the use of dedicated instruments, the use of surveillance cameras with complementary metal oxide semiconductor (CMOS) sensors is a promising direction for nuclear radiation detection. Currently, such use requires masking the camera lens or working in a static environment, conditions under which a camera cannot perform surveillance monitoring. In this article, we propose a method for detecting nuclear radiation based on an uncovered CMOS camera that enables nuclear radiation detection while the camera is in surveillance monitoring mode. First, we captured videos of irradiation from a 99mTc radioactive source without covering the camera lens. Then, an inter-frame difference algorithm and Gaussian smoothing method were used to reduce the interference of moving objects and visible light on the images. Finally, a series of thresholds were selected to test the validity of our detection method on a validation set. The experimental results indicate that the proposed method can effectively detect nuclear radiation. It is further verified that the surveillance camera can perform surveillance monitoring and radiation detection simultaneously. • A method to detect nuclear radiation with low cost and miniaturization. • Detecting nuclear radiation with an uncovered CMOS camera. • The camera can detect nuclear radiation while monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
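As an illustrative aside to record 215: a rough sketch of the per-frame processing the abstract outlines, with inter-frame differencing and Gaussian smoothing followed by thresholding and speckle counting; all thresholds and kernel sizes are assumptions.
```python
# Sketch: count bright single-frame speckles that survive inter-frame
# differencing and Gaussian smoothing, as radiation-hit candidates (record 215).
import cv2
import numpy as np

def radiation_hits(prev_gray, curr_gray, diff_thresh=25):
    diff = cv2.absdiff(curr_gray, prev_gray)             # suppress static content
    smooth = cv2.GaussianBlur(diff, (5, 5), 0)           # suppress broad motion edges
    residual = cv2.subtract(diff, smooth)                # keep only tiny sharp spots
    _, spots = cv2.threshold(residual, diff_thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(spots, connectivity=8)
    # radiation hits are assumed to be very small, very bright blobs
    hits = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] <= 4]
    return len(hits)

# usage (hypothetical frames): count = radiation_hits(gray_t0, gray_t1)
# a count well above the camera's dark baseline would indicate radiation
```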
216. Occlusion in Dynamic Scene Analysis
- Author
-
Martin, W. N., Aggarwal, J. K., Simon, J. C., editor, and Haralick, R. M., editor
- Published
- 1981
- Full Text
- View/download PDF
217. Image Understanding for Robotic Applications
- Author
-
Jain, Ramesh, Wong, Andrew K. C., editor, and Pugh, Alan, editor
- Published
- 1987
- Full Text
- View/download PDF
218. Two Multi-Processor Systems for Low-Level Real-Time Vision
- Author
-
Graefe, Volker, Brady, Michael, editor, Gerhardt, Lester A., editor, and Davidson, Harold F., editor
- Published
- 1984
- Full Text
- View/download PDF
219. A Pre-Processor for the Real-Time Interpretation of Dynamic Scenes
- Author
-
Graefe, Volker and Huang, Thomas S., editor
- Published
- 1983
- Full Text
- View/download PDF
220. Linear Filtering in Image Sequences
- Author
-
Boes, Ulrich and Huang, Thomas S., editor
- Published
- 1983
- Full Text
- View/download PDF
221. Dynamic Scene Analysis
- Author
-
Aggarwal, J. K., Martin, W. N., and Huang, Thomas S., editor
- Published
- 1983
- Full Text
- View/download PDF
222. Analyzing Dynamic Scenes Containing Multiple Moving Objects
- Author
-
Aggarwal, J. K., Martin, W. N., Fu, King Sun, editor, Huang, Thomas S., editor, and Schroeder, Manfred R., editor
- Published
- 1981
- Full Text
- View/download PDF
223. Imprecision in Computer Vision
- Author
-
Jain, Ramesh, Haynes, Susan, and Wang, Paul P., editor
- Published
- 1983
- Full Text
- View/download PDF
224. Single-class SVM for dynamic scene modeling
- Author
-
Junejo, Imran N., Bhutta, Adeel A., and Foroosh, Hassan
- Published
- 2013
- Full Text
- View/download PDF
225. Lightmap Generation and Parameterization for Real-Time 3D Infra-Red Scenes
- Author
-
Amjad, Meisam
- Subjects
- Computer Science, lightmap parameterization, lightmap generation, dynamic scene, Real-Time scene, Infra-Red, 3D infra-Red Scene, lightmap for heat signature, lightmap for heat sources, 3D mesh to UV mesh parameterization
- Abstract
Having high resolution Infra-Red (IR) imagery in the cluttered environment of a battlespace is crucial for capturing intelligence in search and target acquisition tasks, such as whether or not a vehicle (or any heat source) has been moved or used and in which direction. While 3D graphic simulation of large scenes helps with retrieving information and training analysts, using traditional 3D rendering techniques is not enough, and an additional parameter needs to be solved due to the different concept of visibility in IR scenes. In 3D rendering of IR scenes, the problem of what can currently be seen by a participant of the simulation does not depend only on the thermal energy emitted from objects; visibility also depends on previous scenes, because thermal energy is slowly retained and diffused over time. Therefore, time must be included as an additional factor, since the aggregation of heat energy in the scene relates to its past. Our solution uses lightmaps for storing energy that reaches surfaces over time. We modify the lightmaps to solve the problem of lightmap parameterization between 3D surfaces and 2D mapping and add an extra ability to let us periodically update only necessary areas based on dynamic aspects of the scene.
- Published
- 2019
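As an illustrative aside to record 225: the thesis stores heat reaching surfaces in a lightmap whose energy is retained and diffused over time. The sketch below updates a 2D texel grid per frame with an assumed decay constant, diffusion width, and incoming-energy term; none of these values come from the thesis.
```python
# Sketch: per-frame update of a heat lightmap with retention, diffusion, and
# newly incoming energy (record 225). Constants are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def update_lightmap(lightmap, incoming, dt=1.0 / 30.0,
                    retention_tau=20.0, diffusion_sigma=0.8):
    retained = lightmap * np.exp(-dt / retention_tau)             # slow cooling over time
    diffused = gaussian_filter(retained, sigma=diffusion_sigma)   # lateral heat spread
    return diffused + incoming * dt                               # add absorbed energy

lightmap = np.zeros((256, 256), dtype=np.float32)
hot_spot = np.zeros_like(lightmap)
hot_spot[100:110, 100:110] = 5.0           # hypothetical heat-source footprint
for _ in range(300):                       # simulate 10 seconds at 30 fps
    lightmap = update_lightmap(lightmap, hot_spot)
print('peak stored energy:', float(lightmap.max()))
```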
226. Depth perception of moving objects via structured light sensor with unstructured grid.
- Author
-
Wang, Hongyu, Li, Dongxue, Wu, Chengdong, and Yu, Xiaosheng
- Abstract
Structured light (SL) has been extensively researched and developed for fast and highly accurate depth sensing, and many approaches have been taken to achieve this goal. In this article, we present innovative work toward a new type of fast, high-precision depth perception based on structured light. The basic ideas are as follows: for slow-motion dynamic scenes, sparse depth maps are obtained from a pair of unstructured rigid patterns of different colors, and high-quality depth images are optimized by ridge-line extraction; the two depth maps computed from the different color patterns are then merged to obtain the optimized depth. We further propose a model for high-speed dynamic scenes that applies structured light from a pair of projectors. Unlike previous approaches, we actively use motion blur to estimate depth: the patterns are projected from two projectors, the motion blur of each line is accurately measured, and the scene depth is estimated by analyzing the length of the blur. We describe two experiments conducted on two different setups (a ball and a planar board). The first experiment, with the planar board, measured a slowly moving object; the second, with the ball, obtained the depth of an ultra-fast object. The experimental results demonstrate that our approach achieves effective real-time measurement and can successfully obtain accurate depth information of moving objects. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
227. Learning similarity metrics for dynamic scene segmentation
- Author
-
Dmitry Kit, Damien Teney, Matthew Brown, and Peter Hall
- Subjects
dynamic scene, learning, Similarity (geometry), Segmentation-based object categorization, segmentation, Supervised learning, Optical flow, Scale-space segmentation, Pattern recognition, Image segmentation, Image texture, Computer vision, Artificial intelligence, distance metric, Mathematics - Abstract
This paper addresses the segmentation of videos with arbitrary motion, including dynamic textures, using novel motion features and a supervised learning approach. Dynamic textures are commonplace in natural scenes, and exhibit complex patterns of appearance and motion (e.g. water, smoke, swaying foliage). These are difficult for existing segmentation algorithms, often violate the brightness constancy assumption needed for optical flow, and have complex segment characteristics beyond uniform appearance or motion. Our solution uses custom spatiotemporal filters that capture texture and motion cues, along with a novel metric-learning framework that optimizes this representation for specific objects and scenes. This is used within a hierarchical, graph-based segmentation setting, yielding state-of-the-art results for dynamic texture segmentation. We also demonstrate the applicability of our approach to general object and motion segmentation, showing significant improvements over unsupervised segmentation and results comparable to the best task specific approaches.
- Published
- 2015
- Full Text
- View/download PDF
228. Acquisition de surfaces déformables à partir d'un système multicaméra calibré
- Author
-
Cagniart, Cédric (Capture and Analysis of Shapes in Motion (MORPHEO), Inria Grenoble - Rhône-Alpes, Laboratoire Jean Kuntzmann (LJK), Université de Grenoble), Edmond Boyer, and STAR, ABES
- Subjects
Dynamic scene, Multi-view, Expectation-Maximization (EM), Deformable surface tracking, Deformable registration, Surface alignment, [MATH.MATH-GM] Mathematics [math]/General Mathematics [math.GM]
In this thesis we address the problem of digitizing the motion of three-dimensional shapes that move and deform in time. These shapes are observed from several points of view with cameras that record the scene's evolution as videos. Using available reconstruction methods, these videos can be converted into a sequence of three-dimensional snapshots that capture the appearance and shape of the objects in the scene. The focus of this thesis is to complement appearance and shape with information on the motion and deformation of objects. In other words, we want to measure the trajectory of every point on the observed surfaces. This is a challenging problem because the captured videos are only sequences of images, and the reconstructed shapes are built independently from each other. While the human brain excels at recreating the illusion of motion from these snapshots, using them to automatically measure motion is still largely an open problem. The majority of prior work on the subject has focused on tracking the performance of one human actor, and used the strong prior knowledge of the articulated nature of human motion to handle the ambiguity and noise inherent to visual data. In contrast, the presented developments consist of generic methods that make it possible to digitize scenes involving several humans and deformable objects of arbitrary nature. To perform surface tracking as generically as possible, we formulate the problem as the geometric registration of surfaces and deform a reference mesh to fit a sequence of independently reconstructed meshes. We introduce a set of algorithms and numerical tools that integrate into a pipeline whose output is an animated mesh. Our first contribution consists of a generic mesh deformation model and numerical optimization framework that divides the tracked surface into a collection of patches, organizes these patches in a deformation graph, and emulates elastic behavior with respect to the reference pose. As a second contribution, we present a probabilistic formulation of deformable surface registration that embeds the inference in an Expectation-Maximization framework and explicitly accounts for the noise in the acquisition. As a third contribution, we look at how prior knowledge can be used when tracking articulated objects, and compare different deformation models with skeleton-based tracking. The studies reported in this thesis are supported by extensive experiments on various 4D datasets. They show that in spite of weaker assumptions on the nature of the tracked objects, the presented ideas make it possible to process complex scenes involving several arbitrary objects, while robustly handling missing data and relatively large reconstruction artifacts.
- Published
- 2012
229. Performance characterization of a high-speed stereo vision sensor for acquisition of time-varying 3D shapes
- Author
-
M. Oscar, Robert B. Fisher, and Yijun Xiao
- Subjects
Computer science, Data acquisition, Computer vision, Stereo cameras, Dynamic scene, Stereo vision, Characterization (materials science), Computer Science Applications, Range sensor, Range (mathematics), Stereopsis, Hardware and Architecture, Pattern recognition, Performance evaluation, 3D shape acquisition, Artificial intelligence, Computer Vision and Pattern Recognition, Computer stereo vision, Software - Abstract
Acquisition of dynamic dense 3D shape data is of increasing importance in computer vision, with applications in various disciplines. In this paper, we investigate the performance of a unique high-speed range sensor based on the stereo vision principle for 3D shape acquisition of animals. The investigation reveals some characteristics of the current version of the sensor with respect to its physical parameters, which suggest a more appropriate configuration of the sensor in real data acquisition scenarios. Due to the novelty of the sensor and the application, we believe that our evaluation of the sensor's performance will inspire new applications using dynamic 3D acquisition technology of similar types.
- Published
- 2011
- Full Text
- View/download PDF
230. Semantic segmentation–aided visual odometry for urban autonomous driving.
- Author
-
An, Lifeng, Zhang, Xinyu, Gao, Hongbo, and Liu, Yuchao
- Subjects
DRIVERLESS cars ,IMAGE segmentation ,ODOMETERS ,AUTONOMOUS robots ,ESTIMATION theory ,CITY traffic - Abstract
Visual odometry plays an important role in urban autonomous driving cars. Feature-based visual odometry methods sample the candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. These methods hold an assumption that the quantitative majority of candidate visual cues represents the truth of the motion. But in real urban traffic scenes, this assumption can be broken by many dynamic traffic participants. Big trucks or buses may occupy the main image parts of a front-view monocular camera and result in wrong visual odometry estimation. Finding available visual cues that represent the real motion is the most important and hardest step for visual odometry in a dynamic environment. Semantic attributes of pixels can be considered a more reasonable factor for candidate selection in that case. This article analyzed the availability of all visual cues with the help of pixel-level semantic information and proposed a new visual odometry method that combines feature-based and alignment-based visual odometry methods in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and our own data set. Experimental results confirmed that the new approach provides effective improvement in both accuracy and robustness in complex dynamic scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
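As an illustrative aside to record 230: a minimal sketch of discarding feature matches that fall on dynamic semantic classes before two-view pose recovery with OpenCV; the class IDs, intrinsics, and feature pipeline are assumptions.
```python
# Sketch: drop feature matches that fall on dynamic semantic classes before
# recovering relative camera pose (record 230). Class IDs and K are assumed.
import cv2
import numpy as np

DYNAMIC_CLASSES = {11, 12, 13}   # hypothetical IDs: person, rider, vehicle

def pose_from_static_matches(pts1, pts2, seg1, K):
    """pts1/pts2: Nx2 matched pixel coords, seg1: per-pixel class map of frame 1."""
    classes = seg1[pts1[:, 1].astype(int), pts1[:, 0].astype(int)]
    keep = ~np.isin(classes, list(DYNAMIC_CLASSES))        # keep static-scene points
    p1, p2 = pts1[keep].astype(np.float64), pts2[keep].astype(np.float64)
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t

# usage with hypothetical arrays:
# R, t = pose_from_static_matches(matches_prev, matches_curr, semantic_map, K)
```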
231. Real-time generation of novel views of a dynamic scene using morphing and visual hull
- Author
-
Kazumasa Yamazawa, Tomoya Ishikawa, and Naokazu Yokoya
- Subjects
video streaming, Computer science, Information science, Image processing, Omni-directional camera, video streams, Computer graphics (images), omnidirectional cameras, Computer vision, dynamic scene, Morphing, Image generation, view images, Visual hull, Real-time systems, Streaming media, morphing technique, Novel view generation, Artificial intelligence - Abstract
Recently, generation of novel views from images acquired by multiple cameras has been investigated. It can be applied to telepresence effectively. Most conventional methods need some assumptions about the scene such as a static scene and limited positions of objects. In this paper, we propose a new method for generating novel view images of a dynamic scene with a wide view, which does not depend on the scene. The images acquired from omni-directional cameras are first divided into static regions and dynamic regions. The novel view images are then generated by applying a morphing technique to static regions and by computing visual hulls for dynamic regions in real-time. In experiments, we show that a prototype system can generate novel view images in real-time from live video streams., ICIP 2005 : 12th IEEE International Conference on Image Processing , Sep 11-14, 2005 , Genova, Italy
- Published
- 2005
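As an illustrative aside to records 231/232: a compact voxel-carving sketch of a visual hull from binary silhouettes and 3x4 projection matrices; grid extent, resolution, and the input arrays are assumptions, and voxels are assumed to lie in front of all cameras.
```python
# Sketch: carve a voxel visual hull by keeping voxels that project inside every
# silhouette (records 231/232). Grid bounds and inputs are illustrative.
import numpy as np

def visual_hull(silhouettes, projections, bounds=(-1, 1), res=64):
    """silhouettes: list of HxW bool masks; projections: list of 3x4 matrices."""
    lin = np.linspace(bounds[0], bounds[1], res)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing='ij')
    voxels = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # Nx4 homogeneous
    inside = np.ones(len(voxels), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        uvw = voxels @ P.T                            # project all voxels into this view
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = sil.shape
        visible = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[visible] = sil[v[visible], u[visible]]    # inside this silhouette?
        inside &= hit                                 # hull = intersection over views
    return inside.reshape(res, res, res)

# occupancy = visual_hull([mask_cam0, mask_cam1], [P0, P1])   # hypothetical inputs
```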
232. Real-time generation of novel views of a dynamic scene using morphing and visual hull
- Author
-
Ishikawa, Tomoya, Yamazawa, Kazumasa, and Yokoya, Naokazu
- Abstract
Recently, generation of novel views from images acquired by multiple cameras has been investigated. It can be applied to telepresence effectively. Most conventional methods need some assumptions about the scene such as a static scene and limited positions of objects. In this paper, we propose a new method for generating novel view images of a dynamic scene with a wide view, which does not depend on the scene. The images acquired from omni-directional cameras are first divided into static regions and dynamic regions. The novel view images are then generated by applying a morphing technique to static regions and by computing visual hulls for dynamic regions in real-time. In experiments, we show that a prototype system can generate novel view images in real-time from live video streams.
- Published
- 2005
233. What has been missed for predicting human attention in viewing driving clips?
- Author
-
Xu J, Yue S, Menchinelli F, and Guo K
- Abstract
Recent research progress on the topic of human visual attention allocation in scene perception and its simulation is based mainly on studies with static images. However, natural vision requires us to extract visual information that constantly changes due to egocentric movements or dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the 'ground truth' to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The inferior predictability of the models was evident from indistinguishable gaze prediction irrespective of stimuli presentation sequence, and weak central fixation bias. Our findings suggest that a realistic visual attention model for the processing of dynamic scenes should incorporate human visual sensitivity with spatio-temporal regularity and central fixation bias., Competing Interests: The authors declare there are no competing interests.
- Published
- 2017
- Full Text
- View/download PDF
234. Fast multi-exposure image fusion with median filter and recursive filter.
- Author
-
Li, Shutao and Kang, Xudong
- Subjects
- *
IMAGE analysis , *MEDIAN filters (Electronics) , *COLOR image processing , *DIGITAL cameras , *HOUSEHOLD electronics , *IMAGE converters , *EXPERIMENTS - Abstract
This paper proposes a weighted-sum based multi-exposure image fusion method which consists of two main steps: first, three image features comprising local contrast, brightness, and color dissimilarity are measured to estimate weight maps, which are refined by recursive filtering; then, the fused image is constructed as a weighted sum of the source images. The main advantage of the proposed method lies in the recursive-filter based weight map refinement step, which is able to obtain accurate weight maps for image fusion. Another advantage is that a novel histogram equalization and median filter based motion detection method is proposed for fusing multi-exposure images in dynamic scenes which contain moving objects. Furthermore, the proposed method is quite fast and thus can be directly used in most consumer cameras. Experimental results demonstrate the superiority of the proposed method in terms of subjective and objective evaluation. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
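As an illustrative aside to record 234: a minimal sketch of the static-scene part only, with weights from contrast and well-exposedness refined by a simple separable recursive (IIR) filter and a weighted sum; the weight terms and feedback coefficient are assumptions, and the histogram-equalization/median-filter motion detection for dynamic scenes is omitted.
```python
# Sketch: weighted-sum multi-exposure fusion with recursive-filter weight
# refinement (record 234). Weight terms and the IIR coefficient are illustrative.
import numpy as np
import cv2

def recursive_smooth(img, a=0.85):
    """Simple separable first-order IIR filter: y[i] = (1-a)*x[i] + a*y[i-1]."""
    out = img.astype(np.float64).copy()
    for axis in (0, 1):
        out = np.moveaxis(out, axis, 0)
        for i in range(1, out.shape[0]):                 # forward pass
            out[i] = (1 - a) * out[i] + a * out[i - 1]
        for i in range(out.shape[0] - 2, -1, -1):        # backward pass for symmetry
            out[i] = (1 - a) * out[i] + a * out[i + 1]
        out = np.moveaxis(out, 0, axis)
    return out

def fuse(exposures):
    weights = []
    for img in exposures:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_64F))      # local contrast
        exposedness = np.exp(-((gray - 0.5) ** 2) / 0.08)        # favor mid-tones
        weights.append(recursive_smooth(contrast * exposedness + 1e-6))
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)                # normalize per pixel
    stack = np.stack([e.astype(np.float64) for e in exposures])
    fused = (weights[..., None] * stack).sum(axis=0)             # weighted sum
    return np.clip(fused, 0, 255).astype(np.uint8)

# fused = fuse([under_exposed, normal, over_exposed])   # hypothetical BGR inputs
```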
235. Real-time Realistic Rendering And High Dynamic Range Image Display And Compression
- Author
-
Xu, Ruifeng
- Subjects
- high dynamic range image, data compression, tone mapping, Monte Carlo noise, dynamic scene, complex scene, real-time rendering, realistic rendering, global illumination, Computer Sciences, Engineering
- Abstract
This dissertation focuses on the many issues that arise from the visual rendering problem. Of primary consideration is light transport simulation, which is known to be computationally expensive. Monte Carlo methods represent a simple and general class of algorithms often used for light transport computation. Unfortunately, the images resulting from Monte Carlo approaches generally suffer from visually unacceptable noise artifacts. The result of any light transport simulation is, by its very nature, an image of high dynamic range (HDR). This leads to the issues of the display of such images on conventional low dynamic range devices and the development of data compression algorithms to store and recover the corresponding large amounts of detail found in HDR images. This dissertation presents our contributions relevant to these issues. Our contributions to high dynamic range image processing include tone mapping and data compression algorithms. This research proposes and shows the efficacy of a novel level set based tone mapping method that preserves visual details in the display of high dynamic range images on low dynamic range display devices. The level set method is used to extract the high frequency information from HDR images. The details are then added to the range compressed low frequency information to reconstruct a visually accurate low dynamic range version of the image. Additional challenges associated with high dynamic range images include the requirements to reduce excessively large amounts of storage and transmission time. To alleviate these problems, this research presents two methods for efficient high dynamic range image data compression. One is based on the classical JPEG compression. It first converts the raw image into RGBE representation, and then sends the color base and common exponent to classical discrete cosine transform based compression and lossless compression, respectively. The other is based on the wavelet transformation. It first transforms the raw image data into the logarithmic domain, then quantizes the logarithmic data into the integer domain, and finally applies the wavelet based JPEG2000 encoder for entropy compression and bit stream truncation to meet the desired bit rate requirement. We believe that these and similar such contributions will make a wide application of high dynamic range images possible. The contributions to light transport simulation include Monte Carlo noise reduction, dynamic object rendering and complex scene rendering. Monte Carlo noise is an inescapable artifact in synthetic images rendered using stochastic algorithm. This dissertation proposes two noise reduction algorithms to obtain high quality synthetic images. The first one models the distribution of noise in the wavelet domain using a Laplacian function, and then suppresses the noise using a Bayesian method. The other extends the bilateral filtering method to reduce all types of Monte Carlo noise in a unified way. All our methods reduce Monte Carlo noise effectively. Rendering of dynamic objects adds more dimension to the expensive light transport simulation issue. This dissertation presents a pre-computation based method. It pre-computes the surface radiance for each basis lighting and animation key frame, and then renders the objects by synthesizing the pre-computed data in real-time. Realistic rendering of complex scenes is computationally expensive. This research proposes a novel 3D space subdivision method, which leads to a new rendering framework. 
The light is first distributed to each local region to form local light fields, which are then used to illuminate the local scenes. The method allows us to render complex scenes at interactive frame rates. Rendering has important applications in mixed reality. Consistent lighting and shadows between real scenes and virtual scenes are important features of visual integration. The dissertation proposes to render the virtual objects by irradiance rendering using live captured environmental lighting. This research also introduces a virtual shadow generation method that computes shadows cast by virtual objects to the real background. We finally conclude the dissertation by discussing a number of future directions for rendering research, and presenting our proposed approaches.
- Published
- 2005
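As an illustrative aside to record 235: the JPEG-based HDR codec described there first converts raw radiance to an RGBE representation with a shared per-pixel exponent; a small sketch of that conversion in the style of the Radiance format follows, with clamping details assumed.
```python
# Sketch: pack floating-point HDR radiance into 8-bit RGBE with a shared
# per-pixel exponent, the front-end representation used in record 235's codec.
import numpy as np

def float_to_rgbe(hdr):
    """hdr: HxWx3 float array of non-negative radiance values."""
    brightest = hdr.max(axis=2)
    mantissa, exponent = np.frexp(brightest)          # brightest = mantissa * 2**exponent
    scale = np.where(brightest > 1e-32,
                     mantissa * 256.0 / np.maximum(brightest, 1e-32), 0.0)
    rgbe = np.zeros(hdr.shape[:2] + (4,), dtype=np.uint8)
    rgbe[..., :3] = np.clip(hdr * scale[..., None], 0, 255).astype(np.uint8)
    rgbe[..., 3] = np.where(brightest > 1e-32, exponent + 128, 0).astype(np.uint8)
    return rgbe

hdr = np.random.rand(4, 4, 3).astype(np.float64) * 50.0   # hypothetical radiance map
print(float_to_rgbe(hdr)[0, 0])                            # one packed RGBE pixel
```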
236. Binokulární vidění (Binocular Vision)
- Author
-
Fedra, Petr, Šanda, Jaroslav, and Němcová, Andrea
- Abstract
This bachelor thesis deals with the physiology of binocular vision for obtaining three-dimensional perception from two-dimensional images when using special glasses. It covers the anatomy and physiology of the human eye and vision as inseparable parts of binocular vision, binocular vision itself, the terms related to it, and its development over the course of life. An important part of the thesis is the description of how three-dimensional perception emerges from two two-dimensional pictures (a stereogram). The thesis also explains the principles of 3D projection methods, especially those that use active or passive glasses, and briefly describes the possibilities of 3D projection without glasses. The practical part includes a plan for capturing dynamic scenes, describing the important parameters that affect capture with a pair of identical video cameras, and covers the design of individual dynamic scenes with respect to the possibility of verifying human physiological parameters. The described scenes were captured and subsequently edited in suitable software, and the recorded video sequences were shown by polarized projection to a group of viewers, whose task was to evaluate the videos both subjectively and objectively.
237. Skládání HDR obrazu pro pohyblivou scénu (Composing HDR Images of a Moving Scene)
- Author
-
Maršík, Lukáš, Musil, Martin, and Martinů, Lukáš
- Abstract
The introduction of this master's thesis covers capturing low-dynamic-range images with common devices using multiple exposures. The central part is devoted to composing these images into a high-dynamic-range image, both for static scenes and for moving ones. Tone-mapping techniques used to display an HDR image on a common LDR monitor are then described. In addition, the design and implementation of an application solving the described problems is presented; it was tested on a series of suitable images to verify its performance and stability. Finally, the program is evaluated and possible continuations of this work are outlined.
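As an illustrative aside to record 237 (and 242): OpenCV ships standard building blocks for merging bracketed exposures, sketched below with Mertens exposure fusion and a Debevec HDR merge plus tone mapping. File names and exposure times are assumptions, and no deghosting for moving scenes is included, which is the harder part the thesis addresses.
```python
# Sketch: merge a bracketed exposure stack with OpenCV, two standard ways
# (record 237). Inputs are hypothetical; ghost removal for moving scenes is omitted.
import cv2
import numpy as np

files = ['exp_short.jpg', 'exp_mid.jpg', 'exp_long.jpg']       # hypothetical bracket
times = np.array([1 / 500.0, 1 / 60.0, 1 / 8.0], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# (a) exposure fusion directly to a displayable LDR image
fusion = cv2.createMergeMertens().process(images)
cv2.imwrite('fusion.png', np.clip(fusion * 255, 0, 255).astype(np.uint8))

# (b) true HDR merge followed by tone mapping for LDR display
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite('tonemapped.png', np.clip(ldr * 255, 0, 255).astype(np.uint8))
```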
240. ROVER: A Prototype Active Vision System
- Author
-
Marsh, Brian D. and Coombs, David J.
- Abstract
The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line CCD camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This strategy is predicated on the assumption that worst case conditions will not persist for long periods of time and the system's limited resources should be directed at the problems which are likely to yield the most results for the least effort. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well-defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
242. Skládání HDR obrazu pro pohyblivou scénu (Composing HDR Images of a Moving Scene)
- Author
-
Maršík, Lukáš and Musil, Martin
- Abstract
The opening chapters of this Master's thesis deal with capturing low dynamic range images with common devices using multiple exposures. The central part is devoted to composing these images into a single high dynamic range image, for static as well as moving scenes. Tone mapping techniques for displaying an HDR image on an ordinary LDR monitor are described next. The thesis also presents the design and implementation of an application that addresses these problems; it was tested on a series of suitable image sets to verify its performance and stability. Finally, the program is evaluated and possible continuations of this work are outlined.
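For readers unfamiliar with the pipeline this abstract summarizes, the sketch below shows one common way to merge bracketed exposures into an HDR image and tone-map it for an LDR display using OpenCV. It is not the thesis's implementation: the file names and exposure times are invented, `AlignMTB` only compensates small camera shifts (it does not remove ghosts caused by objects moving in the scene), and the Debevec/Reinhard operators are just one possible choice.

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures of the same scene (shortest to longest).
files = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
times = np.array([1 / 250.0, 1 / 60.0, 1 / 15.0], dtype=np.float32)

images = [cv2.imread(f) for f in files]

# 1) Roughly align the exposures (handles small hand-held camera shifts only).
cv2.createAlignMTB().process(images, images)

# 2) Recover the camera response curve and merge into a radiance (HDR) map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# 3) Tone-map the radiance map so it can be shown on an ordinary LDR display.
ldr = cv2.createTonemapReinhard(2.2).process(hdr)   # 2.2 = display gamma
cv2.imwrite("result_ldr.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```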
243. Binokulární vidění (Binocular Vision)
- Author
-
Fedra, Petr and Šanda, Jaroslav
- Abstract
This bachelor's thesis deals with the physiology of binocular vision, that is, with obtaining a three-dimensional percept from two-dimensional images viewed through special glasses. It covers the anatomy and physiology of the eye and of vision as inseparable parts of binocular vision, binocular vision itself, the terms associated with it, and its development over a lifetime. An important part is the description of how a three-dimensional percept arises from two two-dimensional images (a stereogram). The thesis also explains 3D projection methods, above all those using active or passive glasses, and briefly the possibilities of glasses-free 3D projection. The practical part contains a design for capturing dynamic scenes with a pair of identical camcorders and describes the parameters that affect the capture process. The individual dynamic scenes were designed so that human physiological parameters could be verified; they were then recorded, edited in suitable software, and shown by polarized projection to a group of observers who evaluated the videos both subjectively and objectively.
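The stereoscopic effect the thesis relies on comes from the horizontal disparity between the two camera views; for a rectified pair of identical cameras, depth follows directly from that disparity. The snippet below is a generic illustration, not part of the thesis, and the focal length, baseline and disparity values in it are invented.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two rectified, identical cameras:
    Z = f * B / d, where f is the focal length in pixels, B the distance
    between the cameras, and d the horizontal shift of the point between
    the left and right image."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Invented numbers: 1200 px focal length, 6.5 cm camera spacing;
# a point shifted 24 px between the two views sits about 3.25 m away.
print(depth_from_disparity(1200, 0.065, 24))
```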
248. Skládání HDR obrazu pro pohyblivou scénu (Composing HDR Images of a Moving Scene)
- Author
-
Maršík, Lukáš, Musil, Martin, and Martinů, Lukáš
- Abstract
The opening chapters of this Master's thesis deal with capturing low dynamic range images with common devices using multiple exposures. The central part is devoted to composing these images into a single high dynamic range image, for static as well as moving scenes. Tone mapping techniques for displaying an HDR image on an ordinary LDR monitor are described next. The thesis also presents the design and implementation of an application that addresses these problems; it was tested on a series of suitable image sets to verify its performance and stability. Finally, the program is evaluated and possible continuations of this work are outlined.
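Because the thesis targets moving scenes, plain exposure merging is not enough: pixels where an object moved between shots show up as ghosts. The NumPy sketch below illustrates one simple generic idea, not the thesis's own algorithm: weight each exposure with a hat function and additionally suppress pixels that disagree strongly with a chosen reference exposure once the exposure-time difference is factored out. The tolerance and the choice of reference frame are arbitrary here, and a roughly linear camera response is assumed.

```python
import numpy as np

def merge_with_naive_deghosting(images, times, ref_index=1, tol=0.12):
    """images: aligned float arrays in [0, 1]; times: exposure times in seconds.
    Returns a rough radiance map in which pixels inconsistent with the
    reference exposure contribute little, suppressing ghosts from motion."""
    images = [np.asarray(im, dtype=np.float32) for im in images]
    ref, t_ref = images[ref_index], times[ref_index]

    num = np.zeros_like(ref)
    den = np.zeros_like(ref)
    for im, t in zip(images, times):
        # Hat weight: trust mid-tones, distrust under/over-exposed pixels.
        w = 1.0 - np.abs(2.0 * im - 1.0)
        # Predict how this exposure should look from the reference one
        # (linear response assumed) and zero out pixels that deviate,
        # since deviation there usually means something moved.
        predicted = np.clip(ref * (t / t_ref), 0.0, 1.0)
        w = w * (np.abs(im - predicted) < tol)
        num += w * im / t      # im / t is the per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

# Hypothetical usage with three aligned exposures loaded elsewhere:
# radiance = merge_with_naive_deghosting([short, mid, long], [1/250, 1/60, 1/15])
```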
250. Методи та засоби побудови зорових образів динамічної обстановки на екранах аеронавігаційних геоінформаційних систем реального часу (Methods and Means of Building Visual Images of a Dynamic Situation on the Displays of Real-Time Air-Navigation Geo-Information Systems)
- Author
-
Kredentsar, S.M.
- Subjects
dynamic scene, visual image, cartographic database, cartographic background, real-time air-navigation geo-information system - Abstract
The dissertation addresses the problem of increasing the efficiency of presenting a dynamic situation on the displays of real-time air-navigation geo-information systems (ANGS). A model of building the visual image of a dynamic situation and a scheme of the image-building channel for a real-time ANGS display are proposed. A method for fast construction of the cartographic background is developed, together with a modified algorithm, based on the base-matrix method, for rotating a complex symbol, and algorithms for restoring the cartographic background as a complex symbol moves across it. A method for choosing the set of image-building algorithms that optimizes display speed is proposed, as well as an overall procedure for building visual images of a dynamic situation in a real-time ANGS.
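Two ingredients mentioned in this abstract, rotating a complex symbol and restoring the map background after the symbol moves, can be illustrated with a generic sprite-drawing sketch. This is not the dissertation's base-matrix method; it is a plain NumPy illustration in which the background patch under the symbol is saved before drawing and written back before the symbol is drawn at its next position, and symbol vertices are rotated with an ordinary 2D rotation matrix. All sizes and positions are invented.

```python
import numpy as np

def rotate_points(points, angle_rad):
    """Rotate Nx2 symbol vertices about the origin with a 2D rotation matrix."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T

def draw_symbol(background, top_left, value, size):
    """Draw a crude square 'symbol' and return the saved background patch
    so it can be restored when the symbol moves on."""
    y, x = top_left
    saved = background[y:y + size, x:x + size].copy()   # save-under
    background[y:y + size, x:x + size] = value           # draw symbol
    return saved

def restore_background(background, top_left, saved):
    y, x = top_left
    background[y:y + saved.shape[0], x:x + saved.shape[1]] = saved

# Hypothetical 100x100 'cartographic background' and a symbol moving across it.
bg = np.zeros((100, 100), dtype=np.uint8)
pos = (10, 10)
saved = draw_symbol(bg, pos, 255, size=5)
# ... display the frame ...
restore_background(bg, pos, saved)            # the background is intact again
saved = draw_symbol(bg, (10, 14), 255, 5)     # draw at the next position

print(rotate_points(np.array([[1.0, 0.0], [0.0, 1.0]]), np.pi / 2))
```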