Search Results (780 results)
2. Collaborative use of mobile augmented reality with paper maps
- Author: Morrison, Ann, Mulloni, Alessandro, Lemmelä, Saija, Oulasvirta, Antti, Jacucci, Giulio, Peltonen, Peter, Schmalstieg, Dieter, and Regenbrecht, Holger
- Subjects: AUGMENTED reality; GAME theory; MOBILE communication systems; COMPUTER graphics; COLLECTIVE action
- Abstract
The popularity of augmented reality (AR) applications on mobile devices is increasing, but there is as yet little research on their use in real-world settings. We review data from two pioneering field trials in which MapLens, a magic lens that augments paper-based city maps, was used in small-group collaborative tasks. The first study compared MapLens to a digital version akin to Google Maps; the second looked at using one shared mobile device vs. using multiple devices. The studies find place-making and the use of artefacts to communicate and establish common ground to be the predominant modes of interaction in AR-mediated collaboration, with users working on tasks together despite not needing to. [Copyright Elsevier]
- Published: 2011
3. Teaching the basics of computer graphics in virtual reality.
- Author: Heinemann, Birte, Görzen, Sergej, and Schroeder, Ulrik
- Subjects: VIRTUAL reality; COMPUTERS in education; TECHNOLOGICAL innovations; COMPUTER graphics; SCHOOL environment; TELEPORTATION
- Abstract
New technology such as virtual reality can help computer graphics education, for example by providing the opportunity to illustrate challenging 3D procedures. RePiX VR is a virtual reality tool for computer graphics education that focuses on teaching the core ideas of the rendering pipeline. This paper describes its development and two initial evaluations, which aimed to strengthen usability, review requirements for different stakeholders, and build infrastructure for learning analytics and research. The integration of learning analytics raises the question of appropriate indicators, which is approached through exploratory data analysis. In addition to learning analytics, the evaluation includes quantitative techniques to gain insights into usability, along with didactic feedback. The paper discusses advanced aspects of learning in VR and looks specifically at movement behavior. According to the evaluations, even learners without prior experience can use the VR tool to pick up the fundamentals of computer graphics.
• Evaluated educational VR environment for teaching computer graphics.
• Teaching computer graphics in virtual reality is promising.
• Learners show varied movement and teleportation patterns and different interaction behaviors.
• Comparisons of desktop vs. VR users, and of novices vs. experts, both show differences between groups.
• The evaluation covers multimodal learning analytics, quantitative feedback, and usability aspects. [ABSTRACT FROM AUTHOR]
- Published: 2023
4. GMM-ICQ: A GMM vertex-optimization-based implicitly-connected quadrilateral format for 3D mesh storage.
- Author: Lin, Dayong, Zhao, Chunhui, Tian, Qihang, Xu, Yunfei, Wang, Ruilin, and Qu, Zonghua
- Subjects: GAUSSIAN mixture models; MICROSOFT Surface (Computer); QUADRILATERALS; COMPUTER graphics; STORAGE
- Abstract
3D meshes are commonly utilized and may be considered the most popular surface representation in computer graphics due to their simplicity, efficiency and flexibility. However, the explicit storage of mesh vertices and connectivity, as in the widely used PLY and OBJ file formats, leads to substantial memory consumption, which in turn directly affects processing and transmission in downstream applications. Though mesh simplification and mesh compression are common strategies to lessen memory consumption, they exhibit inherent limitations: either in maintaining a balance between accuracy, efficiency, memory usage and mesh quality, or in breaking the simplicity of explicit storage and struggling to optimize the trade-off between compression performance and computational resource consumption. To overcome these limitations, inspired by the Gaussian Mixture Model (GMM), this paper proposes a GMM vertex-optimization-based implicitly-connected quadrilateral format for 3D mesh storage, named GMM-ICQ. Extensive qualitative and quantitative evaluations demonstrate that the GMM-ICQ format achieves efficient compression by retaining only a small amount of vertex information, while preserving sharp features and maintaining relatively high mesh quality. It also exhibits a certain degree of robustness to noise interference. Furthermore, benefiting from its inherent grid-based connectivity, the GMM-ICQ format maintains the simplicity of explicit storage and can be implemented as a progressive variant without incurring additional computational overhead.
• We present a GMM vertex-optimization-based implicitly-connected quadrilateral format for 3D mesh storage.
• Simultaneously balances accuracy, efficiency, memory usage, and mesh quality.
• Preserves the simplicity of explicit storage (such as PLY and OBJ).
• No additional computational overhead is needed for the progressive variant. [ABSTRACT FROM AUTHOR]
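The format above builds on Gaussian Mixture Models; as background, here is a minimal 1-D GMM fitted with plain expectation-maximization in pure Python. This is a generic textbook sketch (the data values, component count, and iteration budget are made up), not the paper's vertex-optimization scheme:

```python
import math

def em_gmm_1d(data, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture with plain EM.

    Generic illustration of GMM fitting; means are initialized by
    spreading them evenly over the data range.
    """
    lo, hi = min(data), max(data)
    mu = [lo + (hi - lo) * j / (k - 1) for j in range(k)] if k > 1 else [lo]
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return w, mu, var

data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]   # two well-separated clusters
w, mu, var = em_gmm_1d(data)
```

On this toy data the two means converge to roughly 0 and 5, with equal weights; the paper's contribution is in using such a mixture to optimize mesh vertices, which this sketch does not attempt.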
- Published: 2024
5. SketchCleanNet — A deep learning approach to the enhancement and correction of query sketches for a 3D CAD model retrieval system.
- Author: Manda, Bharadwaj, Kendre, Prasad Pralhad, Dey, Subhrajit, and Muthuganapathy, Ramanathan
- Subjects: DEEP learning; ARTIFICIAL neural networks; COMPUTER vision; COMPUTER graphics; ENGINEERING design; SEARCH algorithms
- Abstract
Search and retrieval remain major research topics in several domains, including computer graphics, computer vision, and engineering design. A search engine primarily requires an input search query and a database of items to search from. In engineering, the primary context of this paper, the database consists of 3D CAD models such as washers, pistons, and connecting rods. A query from a user is typically in the form of a sketch, which attempts to capture the details of a 3D model. However, sketches have certain typical defects, such as gaps and over-drawn portions (multi-strokes). Since the retrieved results are only as good as the input query, sketches need cleaning up and enhancement for better retrieval results. In this paper, a deep learning approach is proposed to clean up query sketches. Initially, sketches from various categories are analysed to understand the many possible defects that may occur. A dataset of cleaned-up or enhanced query sketches is then created based on an understanding of these defects. An end-to-end deep neural network is then trained to provide a mapping between defective and clean sketches: it takes the defective query sketch as input and generates a clean, enhanced query sketch. Qualitative and quantitative comparisons with other state-of-the-art techniques show that the proposed approach is effective. Search-engine results are reported using both the defective and the enhanced query sketches, and it is shown that using the enhanced query sketches yields improved search results.
• The first learning-based strategy to clean rough query sketches of 3D CAD models.
• Introduces SketchCleanNet, an end-to-end image translation scheme.
• SketchCleanNet aims to learn the mapping between rough sketches and clean query images.
• A novel scheme to calculate the loss is introduced.
• Dataset contribution: the resulting enhanced query sketch dataset is made publicly available.
• The work gives researchers opportunities to develop new algorithms for search and retrieval of 3D mechanical components. [ABSTRACT FROM AUTHOR]
- Published: 2022
6. Random screening-based feature aggregation for point cloud denoising.
- Author: Wang, Weijia, Pan, Wei, Liu, Xiao, Su, Kui, Rolfe, Bernard, and Lu, Xuequan
- Subjects: POINT cloud
- Abstract
Raw point clouds captured by sensing devices are often contaminated with noise, which perturbs the fidelity of the original geometric information. Point cloud denoising is therefore an indispensable post-processing step, aiming to remove the noise from point clouds. Existing point cloud denoising approaches are typically trained on datasets with uniform point distributions and densities, making them unsuitable for effectively denoising point clouds with severe noise or irregular point distributions. In this paper, we introduce a novel random screening-based feature aggregation method for point cloud denoising. Our key insight is that merging the features of dense and sparse points helps enhance the quality of denoising results. Specifically, our approach randomly screens the features of local point patches and fuses the richer geometric information of denser points into sparser point representations. Comprehensive experiments demonstrate that our method achieves state-of-the-art performance on both synthetic and real-world datasets.
• Random screening-based point feature aggregation is useful for point cloud denoising.
• Merging features of dense points into sparser points boosts point cloud denoising.
• Our pipeline demonstrates robustness to noise and saves processing time.
• Our technique achieves impressive results on severe noise and irregular points. [ABSTRACT FROM AUTHOR]
- Published: 2023
7. A truncated generalized Huber prior for image smoothing.
- Author: Li, Fang and Li, Tingting
- Subjects: COMPUTER vision; COMPUTER graphics; CONVEX functions; LEAST squares; JPEG (Image coding standard)
- Abstract
• We propose a nonconvex prior for image smoothing.
• The proposed algorithm for the nonconvex model has a convergence guarantee.
• The prior outperforms the state of the art.
• The proposed method is flexible, convergent, and gives promising results.
Image smoothing is a fundamental task in computer vision and graphics. This paper presents a new image smoothing method based on a truncated generalized Huber prior. The proposed model is neither convex nor concave and is hard to optimize. We first transform the prior into a concave one, then use half-quadratic minimization to obtain an equivalent convex surrogate function. The numerical algorithm thus reduces to solving a weighted least-squares problem and iteratively updating the weights. The convergence of the algorithm is theoretically guaranteed. The proposed method is flexible and powerful in preserving edges and structure while eliminating undesired information. Its effectiveness is demonstrated by several applications, including scale-space filtering, texture removal, and clip-art JPEG artifact removal. [ABSTRACT FROM AUTHOR]
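The half-quadratic strategy the abstract describes, solving a weighted least-squares problem and then updating the weights, can be illustrated with a generic 1-D edge-preserving smoother. The sketch below uses a plain Huber-style weight and a Jacobi sweep for the least-squares step; the penalty, parameters, and signal are illustrative assumptions, not the paper's truncated generalized prior:

```python
def irls_smooth_1d(signal, lam=2.0, delta=0.1, iters=30):
    """Iteratively reweighted least-squares smoothing.

    Approximately minimizes sum (u_i - f_i)^2 + lam * sum w_i (u_{i+1} - u_i)^2,
    where the weights w_i implement a Huber-style penalty that stops
    smoothing across large jumps (edges). Generic sketch only.
    """
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        # Update weights from current gradients: cap the influence of edges.
        w = []
        for i in range(n - 1):
            g = abs(u[i + 1] - u[i])
            w.append(1.0 if g <= delta else delta / g)
        # One Jacobi sweep of the weighted least-squares normal equations.
        new_u = []
        for i in range(n):
            num, den = signal[i], 1.0
            if i > 0:
                num += lam * w[i - 1] * u[i - 1]
                den += lam * w[i - 1]
            if i < n - 1:
                num += lam * w[i] * u[i + 1]
                den += lam * w[i]
            new_u.append(num / den)
        u = new_u
    return u

noisy_step = [0.0, 0.05, -0.04, 0.02, 1.0, 0.97, 1.03, 1.0]
smoothed = irls_smooth_1d(noisy_step)
```

The weights stay near 1 inside flat regions (strong smoothing) and shrink at the step (the edge survives), which is the qualitative behavior such priors are designed for.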
- Published: 2023
8. Constructing a game engine: A proposed game engine architecture course for undergraduate students.
- Author: Del Gallego, Neil Patrick
- Abstract
This paper presents a structured series of lessons for teaching undergraduate computer science or game development students how to construct a game engine using C++ and standard graphics APIs such as OpenGL or DirectX. The proposed course content covers rendering topics, game object management, scene management, and rigid body physics, which learners are tasked to incorporate into a fully functional prototype scene editor. By incorporating actual coding into the course, students gain a better understanding of low-level engine features and hone their programming skills. In addition to the learning content, we provide recommended assessment methods. We include discussions of our teaching experience in handling a game engine architecture course, titled GDENG03, which 36 students have passed. Our paper stands out from related work by providing a framework for interested educators on how to write a 3D game engine and how to teach it in class. In our actual delivery of GDENG03, all 36 students achieved high scores of over 80% out of 100, which demonstrates the effectiveness of the learning content. We received positive student feedback, such as a better appreciation of game engines, alongside suggestions for course improvement such as using simpler terminology and better workload management. Based on student scores and feedback, we attribute the success of our students' learning to the inclusion of practical coding examples and discussions that highlight their relevance to mainstream game engines. The learning materials and accompanying source code can be accessed at the following link:.
• Learning content is proposed that teaches undergraduate students how to develop their own game engine using only C++ and a graphics API. Learners are taught how to create a functional 3D scene editor.
• Assessment methods are presented for gauging a student's mastery of the game engine architecture course.
• Positive feedback and experiences were reported by students during the actual course delivery.
• Comprehensive materials, including the scene editor's working source code and PowerPoint files, are publicly accessible at this link:. [ABSTRACT FROM AUTHOR]
- Published: 2024
9. Contact-conditioned hand-held object reconstruction from single-view images.
- Author: Wang, Xiaoyuan, Li, Yang, Boukhayma, Adnane, Wang, Changbo, and Christie, Marc
- Subjects: COMPUTER vision; OBJECT tracking (Computer vision); PRIOR learning; COMPUTER graphics
- Abstract
Reconstructing the shape of hand-held objects from single-view color images is a long-standing problem in computer vision and computer graphics. The task is complicated by the ill-posed nature of single-view reconstruction, as well as by potential occlusions due to both the hand and the object. Previous works mostly handled the problem by using known object templates as priors to reduce the complexity. In contrast, our paper proposes a novel approach that requires no object templates: it exploits prior knowledge of contacts in hand-object interactions to train an attention-based network that performs precise hand-held object reconstruction with only a single forward pass at inference. The network encodes visual features together with contact features using a multi-head attention module to condition the training of a neural field representation, which outputs a Signed Distance Field representing the reconstructed object. Extensive experiments on three well-known datasets demonstrate that our method achieves superior reconstruction results even under severe occlusion compared to state-of-the-art techniques.
• Conditioning on contact, we reconstruct the hand-held object from single-view images.
• We use an end-to-end attention-based network to better encode contact priors.
• We achieve state-of-the-art results on the Obman, HO3D, and MOW datasets. [ABSTRACT FROM AUTHOR]
- Published: 2023
10. Computer Graphics Algorithm Demonstration System.
- Author: Wang, Fang, Yu, Kan, and Yang, Lei
- Subjects: COMPUTER algorithms; COMPUTER graphics; COMPUTER engineering; TEACHING demonstrations; MANUFACTURING processes; EFFECTIVE teaching
- Abstract
With the rapid development of computer technology, many scholars pay close attention to computer graphics algorithms, which has made computer graphics widely used in various fields and ushered it into a new era. The purpose of this paper is to study the evaluation of computer graphics algorithm demonstration systems. Combining theory with practice, and based on careful analysis of the principles of graphics algorithms, a new idea for synchronizing the graphics production process and a specific algorithm for realizing graphics are proposed, and most graphics in computer graphics are visualized. The system breaks the limitation that only two-dimensional graphic simulation effects can be realized, and adds an interactive user control system. Algorithm complexity is significantly reduced, graphics creation speed is increased, and the simulation effect is more realistic. With this visual teaching demonstration system, abstract principles can be explained intuitively, which can significantly improve teaching quality. [ABSTRACT FROM AUTHOR]
- Published: 2023
11. Graph matching as a graph convolution operator for graph neural networks.
- Author: Martineau, Maxime, Raveaux, Romain, Conte, Donatello, and Venturini, Gilles
- Subjects: CONVOLUTIONAL neural networks; DEEP learning; GRAPH algorithms; COMPUTER graphics; SIGNAL convolution; SIGNAL filtering
- Abstract
• In this paper we propose a new convolutional neural network operating on graph space.
• We define new convolution and pooling operators applied directly to graphs.
• We prove that our operators can be used with backpropagation techniques.
• We show experimentally that the architecture reaches the state of the art in a classification context.
Convolutional neural networks (CNNs) have, in a few decades, outperformed the existing state-of-the-art methods in classification contexts. However, as they were formalised, CNNs are bound to operate on Euclidean spaces: convolution is a signal operation defined on Euclidean spaces. This has restricted deep learning's main use to Euclidean-defined data such as sound or images. And yet numerous application fields (among them network analysis, computational social science, chemo-informatics, and computer graphics) involve non-Euclidean data such as graphs, networks, or manifolds. In this paper we propose a new convolutional neural network architecture defined directly in graph space. Convolution and pooling operators are defined in the graph domain via a graph matching procedure between the input signal and a filter. We show its usability in a backpropagation context. Experimental results show that our model's performance is at state-of-the-art level on simple tasks. It shows robustness with respect to graph domain changes and improvements over other Euclidean and non-Euclidean convolutional architectures. [ABSTRACT FROM AUTHOR]
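The contrast the abstract draws, convolution on Euclidean grids versus operators defined directly on graphs, can be made concrete with a minimal neighborhood-aggregation layer. This is the common average-and-transform formulation, not the paper's graph-matching operator; the graph, features, and scalar weight are made up:

```python
def graph_conv(adj, features, weight):
    """One graph-convolution step: average each node's neighborhood
    (including itself), then apply a shared linear map.

    adj: dict node -> list of neighbor nodes
    features: dict node -> list of floats
    weight: scalar shared linear map (kept scalar for brevity)
    """
    out = {}
    for v, fv in features.items():
        hood = [features[u] for u in adj[v]] + [fv]
        # Column-wise mean over the neighborhood's feature vectors.
        agg = [sum(col) / len(hood) for col in zip(*hood)]
        out[v] = [weight * x for x in agg]
    return out

# A 4-node path graph: 0 - 1 - 2 - 3, with a 1-D feature per node.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
feats = {0: [1.0], 1: [0.0], 2: [0.0], 3: [1.0]}
h = graph_conv(adj, feats, weight=2.0)
```

Because the operator only uses the neighbor lists, it is indifferent to node ordering and grid structure, which is exactly the property that lets such layers run on non-Euclidean data.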
- Published: 2021
12. Deep weathering effects.
- Author: Verhulst, Adrien, Normand, Jean-Marie, Moreau, Guillaume, and Patow, Gustavo
- Subjects: WEATHERING; COMPUTER graphics; HUMIDITY; PLASTER
- Abstract
Weathering phenomena are ubiquitous in urban environments, where it is easy to observe severely degraded old buildings resulting from water penetration. Despite being an important part of any realistic city, this kind of phenomenon has received little attention from the computer graphics community compared to stains resulting from biological or flow effects on building exteriors. In this paper, we present physically-inspired deep weathering effects, where the penetration of humidity (i.e., water particles) and its interaction with a building's internal structural elements result in large, visible degradation effects. Our implementation is based on a particle-based model for humidity propagation, coupled with a spring-based interaction simulation that allows chemical interactions, such as the formation of rust, to deform and destroy a building's inner structure. To illustrate our methodology, we show a collection of deep degradation effects applied to urban models, involving the creation of rust or of ice within walls.
• Physically-inspired simulation of the penetration of water particles in walls, dealing with multiple types of materials.
• Approximate simulation of the interactions of water with the inner structures of the wall.
• Expansion and deformation, such as water turning into ice, rusting, or plaster deformation.
• Simulation of the formation of cracks inside the wall, leading to broken concrete, loose plaster, etc.
• Cracks can accelerate the previous steps if exposure increases (e.g., to rain). [ABSTRACT FROM AUTHOR]
- Published: 2023
13. Symmetry detection of occluded point cloud using deep learning.
- Author: Wu, Zhelun, Jiang, Hongyan, and He, Siyun
- Subjects: POINT cloud; SYMMETRY; PROBLEM solving; COMPUTER graphics; DEEP learning; BIG data; LEARNING problems
- Abstract
Before our paper, symmetry detection had been approached along various lines of attack. Deep learning has taken off in recent years in many fields of computer graphics, and we use large-scale data to tackle the symmetry detection problem. Our work addresses a niche problem: using deep learning to detect symmetry in objects of an occluded point cloud. As far as we know, ours is the first work to deal with this problem in a deep learning setting. We employ points on the symmetry plane and normal vectors as double supervision to help pinpoint the symmetry plane. Experiments conducted on the YCB-Video dataset demonstrate the effectiveness of our method. Our implementation is available at https://github.com/Allen--Wu/dense_symmetry. [ABSTRACT FROM AUTHOR]
- Published: 2021
14. Computational and topological properties of neural networks by means of graph-theoretic parameters.
- Author: Khan, Asad, Hayat, Sakander, Zhong, Yubin, Arif, Amina, Zada, Laiq, and Fang, Meie
- Subjects: ARTIFICIAL neural networks; TOPOLOGICAL property; NEURAL computers; DEEP learning; ARTIFICIAL intelligence; COMPUTER graphics; GRAPH theory
- Abstract
A neural network is a computing system modeled on nerve tissue and the nervous system. In this sense, neural networks refer to systems of neurons, either organic or artificial in nature. Neural networks have diverse applications in computer graphics, artificial intelligence (AI), machine and deep learning, chemistry, and materials science, among others. Research on classifying structural properties of neural networks as favorable or unfavorable has begun only recently, and employing tools from mathematics, specifically graph theory, has been a key research direction in this area. Different graph-theoretic parameters have potential applications in studying topological properties of neural networks. In this paper, for the first time, we study some NP-complete and NP-hard problems on various neural networks by considering their graphs. We consider classes of 3- and 4-layered probabilistic neural networks (PNNs), cellular neural networks (CNNs), and Tickysym spiking neural networks (TSNNs). To study their topological properties, we examine their structures of maximal cliques, minimal colorings, maximal independent sets, maximal and perfect matchings, and minimum dominating sets. Computational results on the clique number, chromatic number, independence number, matching ratio, and domination number are reported. For instance, the clique number of the probabilistic neural networks is 2, that of the Tickysym spiking neural network is 3, whereas for the cellular neural network it is 4. These numerical results quantify the incoming/outgoing traffic in these architectures. Similarly, the independence number of the 25-node 3-layered probabilistic neural network is 15, whereas the independence number of the 25-node cellular neural network is just 9. Moreover, the asymptotic matching ratio for PNNs (resp. CNNs) is 1/2 (resp. 1/8), whereas it is 1/6 in the case of TSNNs. This gives probabilistic neural networks a significant priority in controllability over cellular neural networks. Some open problems are proposed at the end to further this research area. [ABSTRACT FROM AUTHOR]
- Published: 2023
15. Approximating global illumination with ambient occlusion and environment light via generative adversarial networks.
- Author: Abbas, Fayçal, Malah, Mehdi, and Babahenini, Mohamed Chaouki
- Subjects: GENERATIVE adversarial networks; LIGHTING; SOLAR radiation; COMPUTER graphics
- Abstract
• Global illumination based on a generative adversarial network.
• Attention mechanism for ambient occlusion approximation.
• Generative adversarial network for environment light approximation.
• Generated images are often indistinguishable from those calculated by a physics-based renderer (Mitsuba).
Calculating global illumination in computer graphics is difficult, especially for complex scenes, because of the interreflections of rays of light and their interactions with the materials of the objects composing the scene. Solutions based on ambient light approximations have been implemented, but these are computationally intensive and produce less precise images, since they treat the ambient lighting component with coarse approximations. In this paper, we propose a method for approximating the global illumination effect. Our idea is to compute global illumination by adding three images: direct illumination, environment light, and ambient occlusion. Direct illumination is calculated by a reference method. Environment light is computed from a single 2D image using an adversarial neural network. Ambient occlusion is generated using conditional adversarial neural networks with an attention mechanism that attends to the relevant image features during training. We use two image masks to keep the object's position in screen space, which allows efficient reconstruction of the final result. Our solution produces quality images compared to reference images and requires no computation in the 3D scene or screen space. [ABSTRACT FROM AUTHOR]
- Published: 2023
16. Monte Carlo: A flexible and accurate technique for modeling light transport in food and agricultural products.
- Author: Hu, Dong, Sun, Tong, Yao, Lijian, Yang, Zidong, Wang, Aichen, and Ying, Yibin
- Subjects: FARM produce; FOOD transportation; OPTICAL computing; OPTICAL properties; COMPUTER graphics; FOOD; MONTE Carlo method
- Abstract
Monte Carlo (MC) has been widely used in fields such as biomedicine and computer graphics owing to its flexibility, high accuracy, and simplicity for modeling light transport in tissues, but its applications in the food and agricultural domain have been limited or hindered by the lack of knowledge of the optical properties of food products. Thanks to major breakthroughs in optical measuring and computing technologies since 2000, significant advances have been made in sensing techniques for measuring tissue optical properties, and MC has consequently witnessed great progress in the food and agricultural domain over the past two decades. The development of MC for modeling light transport in food and agricultural products, including the principles, advanced MC methods, relevant applications, and future perspectives, is reviewed. The paper aims to help interested researchers gain a better understanding of the MC technique, thus stimulating quality and safety assessment of food and agricultural products. It provides an overview of the procedure of MC modeling of light transport in food and agricultural products and of commonly used MC models. Advanced methods for accelerating MC simulations are then presented. Applications of MC simulations in food and agricultural products since 2000, for optimizing the design of sensing configurations and parameters, estimating tissue optical properties, and assessing quality and safety, are then reviewed. Finally, challenges and future perspectives for the MC technique in modeling light transport are discussed.
• The procedure of MC modeling of light transport and different MC models are described.
• Advanced methods for accelerating MC simulations are presented.
• Applications of MC in food and agricultural products are reviewed.
• Challenges and future perspectives for MC modeling of light transport are discussed. [ABSTRACT FROM AUTHOR]
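For a flavor of what such simulations involve, here is a minimal photon random walk through a homogeneous absorbing and scattering slab. The coefficients and the isotropic phase function are textbook-style assumptions, not a model of any particular food tissue:

```python
import math
import random

def transmittance(mu_a, mu_s, thickness, n_photons=20000, seed=1):
    """Estimate the fraction of photons crossing a 1-D slab.

    Each photon takes exponentially distributed free-flight steps; at
    each interaction it is absorbed with probability mu_a/(mu_a+mu_s),
    otherwise it scatters isotropically (the z-direction cosine is
    resampled uniformly in [-1, 1]).
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s                      # total interaction coefficient
    transmitted = 0
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0                 # start at surface, heading inward
        while True:
            # Exponential step length; 1 - random() lies in (0, 1].
            step = -math.log(1.0 - rng.random()) / mu_t
            z += step * cos_t
            if z >= thickness:
                transmitted += 1            # escaped through the far side
                break
            if z < 0:
                break                       # back-scattered out of the slab
            if rng.random() < mu_a / mu_t:
                break                       # absorbed at this interaction
            cos_t = rng.uniform(-1.0, 1.0)  # isotropic scattering
    return transmitted / n_photons

t = transmittance(mu_a=0.5, mu_s=2.0, thickness=1.0)
```

Doubling the slab thickness lowers the estimated transmittance, as expected; real tissue models add anisotropic phase functions, refractive boundaries, and 3-D geometry on top of this skeleton.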
- Published: 2020
17. ReviewerNet: A visualization platform for the selection of academic reviewers.
- Author: Salinas, Mario, Giorgi, Daniela, Ponchio, Federico, and Cignoni, Paolo
- Subjects: VISUALIZATION; CITATION networks; DATA visualization; CONTENT analysis; RESEARCH teams; COMPUTER graphics
- Abstract
• An integrated visualization of scholarly data can support the academic reviewer search process.
• The visualization of scholarly data helps to avoid conflicts of interest and to build a fairly distributed pool of reviewers.
• A well-combined visualization of citation and co-authorship relations alone can reduce the need for complicated content analysis techniques.
• The platform evaluation with members of the computer graphics community demonstrated an improvement over the traditional process for searching reviewers.
• The evaluation confirmed that users were able to get acquainted with the system with very limited training.
We propose ReviewerNet, an online, interactive visualization system aimed at improving the reviewer selection process in the academic domain. Given a paper submitted for publication, we assume that good candidate reviewers can be chosen among the authors of a small set of pertinent papers; ReviewerNet supports the construction of such a set of papers by visualizing and exploring a literature citation network. The system helps journal editors and program committee members select reviewers who do not have any conflict of interest and who are representative of different research groups, by visualising the careers and co-authorship relations of candidate reviewers. The system is publicly available and is demonstrated in the field of computer graphics. [ABSTRACT FROM AUTHOR]
- Published: 2020
18. Boundary particle resampling for surface reconstruction in liquid animation.
- Author: Sandim, Marcos, Oe, Nicolas, Cedrim, Douglas, Pagliosa, Paulo, and Paiva, Afonso
- Subjects: SURFACE reconstruction; LIQUID surfaces; COMPUTATIONAL physics; COMPUTER graphics; MICROSOFT Surface (Computer); MULTIPLE correspondence analysis (Statistics)
- Abstract
• We present a novel particle resampling method for surface reconstruction of a liquid.
• The first adaptive sampling tailored to level-sets defined by the boundary particles.
• Our adaptive particle resampling preserves small and thin features of a liquid.
• A new quality metric to evaluate the effectiveness of resampling methods.
In this paper, we present a novel adaptive particle resampling method tailored for surface reconstruction of level-sets defined by the boundary particles of a particle-based liquid simulation. The proposed approach is simple and easy to implement, and requires only the positions of the particles to accurately identify and refine regions with small and thin fluid features. The method comprises four main stages: boundary detection, feature classification, particle refinement, and surface reconstruction. For each simulation frame, the free-surface particles are first captured by a boundary detection method. The boundary particles are then classified and labeled according to the deformation and stretching of the free surface, computed from Principal Component Analysis (PCA) of the particle positions. The particles placed at feature regions are refined according to their classification. Finally, we extract the free surface of the zero level-set defined by the resampled boundary particles, along with its normals. To render the free surface, we demonstrate how traditional surface fitting methods from the computer graphics and computational physics literature can benefit from the proposed resampling. Furthermore, the results shown in the paper attest to the effectiveness and robustness of our method compared to state-of-the-art adaptive particle resampling techniques. [ABSTRACT FROM AUTHOR]
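The feature classification stage relies on PCA of particle positions; for intuition, here is a closed-form 2-D version that uses the eigenvalues of the 2x2 covariance matrix to score how stretched a point set is. This is a generic sketch with made-up point sets, not the paper's classifier:

```python
import math

def stretch_ratio(points):
    """Ratio of the principal variances of a 2-D point set.

    Builds the 2x2 covariance matrix and solves its characteristic
    polynomial in closed form; a ratio near 0 means the points lie
    along a thin, stretched (curve-like) region, near 1 means they
    spread isotropically.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[cxx, cxy], [cxy, cyy]] via trace and determinant.
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    return lam2 / lam1 if lam1 > 0 else 1.0

line = [(t, 0.01 * t) for t in range(10)]   # nearly collinear points
blob = [(0, 0), (1, 0), (0, 1), (1, 1)]     # isotropic square
r_line = stretch_ratio(line)
r_blob = stretch_ratio(blob)
```

Thresholding a score like this is one simple way to label particles as lying on thin, stretched features versus bulk regions; the paper's pipeline applies the same idea per local particle neighborhood in 3-D.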
- Published
- 2019
- Full Text
- View/download PDF
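The PCA-based feature classification described in the abstract above can be sketched as follows. This is a minimal illustration of the idea (classify a particle neighbourhood by the principal components of its positions), not the authors' implementation; the `flat_threshold` value and the two-class labels are assumptions for the sketch.

```python
import numpy as np

def classify_region(neighbor_positions, flat_threshold=0.9):
    """Classify a particle neighbourhood by PCA of its positions.

    Returns 'flat' when the two largest principal directions capture
    almost all of the variance (a locally planar free surface) and
    'feature' otherwise (stretched/thin regions that would need
    refinement).  The threshold is an illustrative assumption.
    """
    pts = np.asarray(neighbor_positions, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
    # Fraction of variance captured by the two largest directions:
    planarity = (eigvals[0] + eigvals[1]) / eigvals.sum()
    return "flat" if planarity >= flat_threshold else "feature"

# A noisy planar patch is classified as 'flat':
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
print(classify_region(plane))  # flat
```

An isotropic blob of points, by contrast, has planarity near 2/3 and would be labeled a feature region.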
19. Developable mesh segmentation by detecting curve-like features on Gauss images.
- Author
-
Zeng, Zheng, Jia, Xiaohong, Shen, Liyong, and Bo, Pengbo
- Subjects
- *
NUMERICAL control of machine tools , *COMPUTER graphics , *COMPUTER-aided design , *PARAMETRIC equations , *ISOGEOMETRIC analysis - Abstract
Developable surfaces are widely used in Computer-Aided Design (CAD), computer graphics, architectural geometry and manufacturing. In this paper, we present an approach to segmentation and approximation of an input triangular mesh with developable patches. First, exact developable regions are detected and extracted from the input mesh. Then the non-developable region is further segmented and approximated also by developable patches. The parametric equations for all obtained developable patches are simultaneously computed, whose iso-parameter curves can serve directly as the milling paths in CNC machining. Examples and comparisons with existing works are provided. [Display omitted] • A novel method for segmenting and approximating an input mesh with developable patches is proposed. • The exact parametric equations for all segment patches are computed. • New linearity and smoothness measures to detect curve-like features on Gauss images are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Digital geometry on a cubic stair-case mesh.
- Author
-
Nagy, Benedek and Saadat, MohammadReza
- Subjects
- *
TILING (Mathematics) , *MATHEMATICAL formulas , *PIXELS , *IMAGE processing , *IMAGE analysis , *GEOMETRY , *COMPUTER graphics - Abstract
• The cubic grid Z^3 plays an important role in the (digital) world. • Meshes obtained by cutting Z^3 by oblique planes can display 2D images. • The dual semi-regular grid, the rhombille tiling, is an oblique cubic mesh. • A shortest-path algorithm and digital distance are computed on the rhombille tiling. • The rhombille tiling can be used as a digital canvas and for RGB pixel geometry. In this paper, we investigate digital geometry on the rhombille tiling D(6,3,6,3), the dual of the semi-regular hexadeltille tiling T(6,3,6,3), also known as the trihexagonal tiling. In fact, this tiling can be seen as an oblique mesh of the cubic grid, giving practical importance to this specific grid both in image processing and graphics. The properties of the coordinate systems used to address the tiles play crucial roles in the simplicity of various algorithms and mathematical formulae of digital geometry that allow working on the grid in image processing, image analysis and computer graphics; thus we present a symmetric coordinate system. This coordinate system has a strong relation to the topological/combinatorial coordinate system of the cubic grid. It is an interesting fact that a greedy shortest-path algorithm may not be used on this grid; despite this, we present an algorithm that provides a minimal-length path between each pair of tiles, where paths are defined as sequences of neighbor tiles (tiles are considered neighbors if they share a side). We also prove a closed formula for computing the digital, i.e., path-based, distance: the length (the number of steps) of a shortest path. Some example pictures on this grid are also presented, as well as its possible application as pixel geometry for color images and videos on the hexagonal grid. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
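The path-based distance defined in the abstract above (minimum number of steps between side-adjacent tiles) can always be computed by breadth-first search over the tile adjacency graph, which is a safe fallback precisely because, as the paper notes, a greedy walk need not be minimal on the rhombille tiling. This sketch is generic; it does not reproduce the paper's coordinate system or closed formula, and the square-grid adjacency used for illustration is an assumption.

```python
from collections import deque

def digital_distance(start, goal, neighbors):
    """Path-based distance: minimum number of steps between two tiles,
    where one step moves to a side-adjacent tile.  `neighbors` maps a
    tile to its adjacent tiles.  BFS explores tiles level by level, so
    the first time the goal is reached its distance is minimal.
    """
    frontier, dist = deque([start]), {start: 0}
    while frontier:
        tile = frontier.popleft()
        if tile == goal:
            return dist[tile]
        for nb in neighbors(tile):
            if nb not in dist:
                dist[nb] = dist[tile] + 1
                frontier.append(nb)
    return None  # goal unreachable

# Illustration on the ordinary square grid (4-adjacency):
square4 = lambda t: [(t[0]+1, t[1]), (t[0]-1, t[1]),
                     (t[0], t[1]+1), (t[0], t[1]-1)]
print(digital_distance((0, 0), (2, 3), square4))  # 5
```

Plugging in the rhombille tiling's side-adjacency in place of `square4` would give its digital distance; the paper's contribution is a closed formula that avoids this search entirely.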
21. Multi-GPU 3D k-nearest neighbors computation with application to ICP, point cloud smoothing and normals computation.
- Author
-
Agathos, Alexander and Azariadis, Philip
- Subjects
- *
COMPUTER vision , *POINT cloud , *REVERSE engineering , *COMPUTER graphics , *PARALLEL algorithms , *K-nearest neighbor classification - Abstract
The k-Nearest Neighbors algorithm is a fundamental algorithm that finds applications in many fields like Machine Learning, Computer Graphics, Computer Vision, and others. The algorithm determines the closest points (d-dimensional) of a reference set R according to a query set of points Q under a specific metric (Euclidean, Mahalanobis, Manhattan, etc.). This work focuses on the utilization of multiple Graphics Processing Units for the acceleration of the k-Nearest Neighbors algorithm with large or very large sets of 3D points. With the proposed approach the space of the reference set is divided into a 3D grid which is used to facilitate the search for the nearest neighbors. The search in the grid is performed in a multiresolution manner, starting from a high-resolution grid and ending up in a coarse one, thus accounting for point clouds that may have non-uniform sampling and/or outliers. Three important algorithms in reverse engineering are revisited and new multi-GPU versions are proposed based on the introduced KNN algorithm. More specifically, the new multi-GPU approach is applied to the Iterative Closest Point algorithm, to point cloud smoothing, and to the point cloud normal vector computation and orientation problem. A series of tests and experiments have been conducted and discussed in the paper, showing the merits of the proposed multi-GPU approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
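The grid-accelerated search underlying the abstract above can be sketched in a few lines: bin the reference points into a uniform 3D grid, then for each query gather candidates from an expanding block of cells until the k-th candidate distance is provably covered. This is a single-threaded sketch of the grid idea only; the paper's multi-GPU parallelization and multiresolution refinement are omitted, and the `cell` size is an assumption.

```python
import numpy as np
from collections import defaultdict

def grid_knn(reference, queries, k, cell=0.25):
    """Exact k nearest neighbours via a uniform 3D grid.

    Candidates are gathered from cells within +/- r of the query's cell;
    once at least k are found and the k-th distance fits strictly within
    r cell widths, every true neighbour lies in the searched region.
    Assumes len(reference) >= k.
    """
    reference = np.asarray(reference, float)
    grid = defaultdict(list)
    for i, p in enumerate(reference):
        grid[tuple(np.floor(p / cell).astype(int))].append(i)
    grid = dict(grid)

    out = []
    for q in np.asarray(queries, float):
        cq = np.floor(q / cell).astype(int)
        r = 1
        while True:
            idx = [i for dx in range(-r, r + 1)
                     for dy in range(-r, r + 1)
                     for dz in range(-r, r + 1)
                     for i in grid.get((cq[0] + dx, cq[1] + dy, cq[2] + dz), [])]
            if len(idx) >= k:
                d = np.linalg.norm(reference[idx] - q, axis=1)
                order = np.argsort(d)[:k]
                if d[order[-1]] < r * cell:  # search radius covers k-th distance
                    out.append([idx[j] for j in order])
                    break
            r += 1
    return out
```

In the GPU setting each query (and each cell scan) is independent, which is what makes the structure map naturally onto many parallel threads.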
22. An efficient maximum bound principle preserving p-adaptive operator-splitting method for three-dimensional phase field shape transformation model.
- Author
-
Wang, Yan, Xiao, Xufeng, and Feng, Xinlong
- Subjects
- *
COMPUTER graphics , *COMPUTER science , *THREE-dimensional modeling , *MAXIMUM principles (Mathematics) , *PARALLEL algorithms , *EXTRAPOLATION - Abstract
In this paper, a novel numerical algorithm for efficient modeling of three-dimensional shape transformation governed by the modified Allen-Cahn (A-C) equation is developed, which is of significance for computer science and graphics technology. The new idea of the proposed method is as follows. Firstly, the operator splitting method is used to decompose the three-dimensional problem into a series of one-dimensional subproblems that can be solved in parallel in the same direction. Secondly, a temporal p-adaptive strategy, based on the extrapolation technique, is proposed to improve the convergence order in time while preserving computational efficiency. Finally, a parallel least-distance modification technique is developed to enforce the discrete maximum bound principle. The proposed method achieves high precision and high efficiency at the same time. Numerical examples demonstrate the effectiveness of the p-adaptive method and the bound-preserving least-distance modification, and include a series of complex three-dimensional shape transformation modelings. • The operator splitting method is used to solve the 3D shape transformation PDE. • A temporal p-adaptive strategy is developed to improve computational efficiency. • A least-distance modification is developed to enforce the discrete maximum bound. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
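The dimensional-splitting idea in the entry above (decompose a multi-dimensional problem into independent one-dimensional sweeps) can be illustrated on the plain heat equation; this is a deliberately simplified sketch, not the paper's modified Allen-Cahn solver, p-adaptive strategy, or bound-preserving modification.

```python
import numpy as np

def heat_step_split(u, dt, h):
    """One Lie-splitting step for u_t = u_xx + u_yy on a periodic grid.

    Each 1D sub-step updates every row (or every column) independently,
    which is what makes the splitting embarrassingly parallel in one
    direction.  Explicit finite differences; stability needs dt <= h**2/2.
    """
    lap1d = lambda v: (np.roll(v, 1, axis=-1) - 2 * v + np.roll(v, -1, axis=-1)) / h**2
    u = u + dt * lap1d(u)        # x-direction sweep: rows are independent
    u = u + dt * lap1d(u.T).T    # y-direction sweep: columns are independent
    return u
```

For dt within the stability limit each sub-step is a convex combination of neighbouring values, so the discrete maximum does not grow and the total mass on a periodic grid is conserved, a toy analogue of the maximum-bound property the paper enforces.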
23. Hybrid modeling of Lagrangian–Eulerian method for high-speed fluid simulation.
- Author
-
Wang, Changbo, Zhang, Shenfan, Li, Chen, and Qin, Hong
- Subjects
- *
COMPUTER simulation , *FLUID mechanics , *LAGRANGE equations , *EULER method , *COMPUTER graphics - Abstract
Highlights • The Lagrangian-Eulerian method can handle violent liquid and spray with a particle system and a density field, respectively. • Four different participating media are actively involved to enhance details and enrich rendering. • Vivid interaction and state transition among different media and tight coupling between particle and grid are guaranteed. Abstract This paper proposes a hybrid Lagrangian–Eulerian method involving all participating media for high-speed fluid simulation and its accompanying phenomena. We take advantage of both particle- and grid-based solvers in our new approach by dividing all the physical states into two different parts based on their individual characteristics. The Lagrangian-based solver gives rise to liquid, droplet, and foam creation and facilitates the drastic shape distortion and state transition during multi-physical process simulation from the perspective of individual particles. At the same time, the Eulerian-based solver mainly focuses on the animation of spray, which enables a fog-like atmosphere. Essential to all of the aforementioned functionalities is the state transition among different media, considering both physical and geometric features. The state transition streamlines the dissociation and generation of certain physical states. Moreover, to ensure the synchronization of hybrid models, we establish essential interaction and build tight coupling between the particle- and grid-based subsystems. With comprehensive experimental results, we have shown that our new modeling approach is both effective and highly modularized, and is capable of producing convincing visual effects for a wide range of graphics and animation tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Atmospheric cloud modeling methods in computer graphics: A review, trends, taxonomy, and future directions.
- Author
-
Zamri, Muhamad Najib and Sunar, Mohd Shahrizal
- Subjects
ATMOSPHERIC models ,COMPUTER graphics ,COMPUTER simulation ,CLOUD computing ,TAXONOMY - Abstract
The modeling of atmospheric clouds is one of the crucial elements in the natural phenomena visualization system. Over the years, a wide range of approaches has been proposed on this topic to deal with the challenging issues associated with visual realism and performance. However, the lack of recent review papers on the atmospheric cloud modeling methods available in computer graphics makes it difficult for researchers and practitioners to understand and choose the well-suited solutions for developing the atmospheric cloud visualization system. Hence, we conducted a comprehensive review to identify, analyze, classify, and summarize the existing atmospheric cloud modeling solutions. We selected 113 research studies from recognizable data sources and analyzed the research trends on this topic. We defined a taxonomy by categorizing the atmospheric cloud modeling methods based on the methods' similar characteristics and summarized each of the particular methods. Finally, we underlined several research issues and directions for potential future work. The review results provide an overview and general picture of the atmospheric cloud modeling methods that would be beneficial for researchers and practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. A computational approach for 3D modeling and integration of heterogeneous geo-data.
- Author
-
Miola, Marianna, Cabiddu, Daniela, Pittaluga, Simone, Mortara, Michela, Vetuschi Zuccolini, Marino, and Imitazione, Gianmario
- Subjects
- *
GEOLOGICAL modeling , *SUBSOILS , *GEOLOGICAL statistics , *SOIL sampling , *GEOLOGICAL surveys , *COMPUTER graphics , *GEOPHYSICS - Abstract
This paper tackles the volumetric representation of geophysical and geotechnical data, gathered during exploration surveys of the subsoil; in particular, we focus on the modeling and analysis of underwater deposits. The creation of a 3D model as a support to geological interpretation has to take into account the heterogeneity of the input data, coming from offshore acquisition campaigns. Some data are massive, but cover the domain unevenly, e.g., along dense differently spaced lines, while others are very sparse, e.g., borehole locations with soil sampling and CPTU (Piezocone Penetration Test) locations. An automatic process is presented to generate the subsurfaces and volume defining a sub-seabed deposit, starting from the identification of relevant morphological features in seismic data. In particular, simplification and refinement based on geostatistics have been applied to generate regular 2D meshes from strongly anisotropic data, in order to improve the quality of the final 3D tetrahedral mesh. Furthermore, we also use geostatistics to predict geotechnical parameters from local surveys and estimate their distribution on the whole domain: in this way the 3D model will include relevant geological features of the deposit and allow extrapolating different geotechnical information with associated uncertainty. The volume characterization and its 3D inspection will support geological analysis and planning of future engineering activities. The developed methodology has been tested on two real case studies. • We implemented a complete pipeline for the 3D modeling of stratified deposits • Our research action follows the surface-based approach • We combine computer graphics methodologies with geophysics and geostatistics [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. A mobile game for enhancing Tourism and Cultural Heritage.
- Author
-
Rallis, Ioannis, Kopsiaftis, George, Kalisperakis, Ilias, Stentoumis, Christos, Koutsomitsos, Dimitris, and Riga, Vivian
- Subjects
HERITAGE tourism ,SIMULATION games ,COMPUTER graphics ,COMPUTER engineering ,IMAGE processing ,MOBILE games ,TOURISM websites - Abstract
This paper briefly describes the overall concept of "TRAVEL TYCOON GREECE" (TTGR), a novel business simulation game which aims to simulate realistically a complete tourism experience. The latest image processing and computer graphics technologies were utilized to create accurate 3D backgrounds with different levels of detail, which were incorporated in the game engine and serve as a realistic and accurate terrain allowing the user to navigate in selected historical and touristic areas of Greece. A series of real-world scenarios representing multiple components of the tourism section were designed primarily for entertainment and marketing purposes. In order to motivate users to participate in the game or remain active, an incorporated ticketing platform allows users to win offers such as touristic products and services. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Surface approximation using GPU-based localized fourier transform.
- Author
-
MOUSA, Mohamed H. and HUSSEIN, Mohamed K.
- Subjects
FOURIER transforms ,MATHEMATICAL errors ,SURFACE reconstruction ,IMPLICIT functions ,GEOMETRIC modeling - Abstract
The process of surface reconstruction has received considerable interest from researchers in recent years. Surface reconstruction plays a major role in many applications, such as visualization, geometric modeling and multiresolution analysis. In this paper, we present an approach that approximates a surface from a set of oriented points. Our algorithm combines the implicit surface and frequency-based frameworks to convert the indicator function of the surface into an implicit function from which we can extract the required surface. In contrast to traditional frequency-based approaches, our approach avoids voxelization of the input points and calculates the Fourier coefficients directly from the surface, which reduces the amount of memory required to store the voxel grid and eliminates the mathematical errors corresponding to this voxelization. In addition, we exploit the recent advances of GPUs embedded in graphics cards to accelerate the calculation of the Fourier coefficients. Finally, some examples are given to demonstrate the validity of the proposed technique. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Three approaches on estimating geometric sensitivities in radiative transfer with Monte Carlo.
- Author
-
He, Zili, Lapeyre, Paule, Blanco, Stephane, d'Eon, Eugene, Eibner, Simon, El Hafi, Mouna, Fournier, Richard, and Roger, Maxime
- Subjects
- *
MONTE Carlo method , *STATISTICAL physics , *RADIATIVE transfer , *COMPUTER graphics , *TRANSPORT equation - Abstract
The Monte Carlo method, renowned for its ability to handle the spectral and geometric complexities of 3D radiative transfer, is extensively utilized across various fields, including concentrated solar power design, atmospheric science, and computer graphics. The success of this method also extends to the estimation of sensitivity—the derivative of an observable with respect to a given system parameter, which is, however, particularly challenging when these parameters involve geometric deformation. Bridging statistical physics and computer graphics, distinct methodologies have emerged within these fields for estimating geometric sensitivity, each employing unique terminologies and mathematical frameworks, leading to seemingly disparate approaches. In this paper, we review the three main approaches to sensitivity estimation: (1) Expectation Differentiation, which employs a vectorized Monte Carlo algorithm to simultaneously estimate the intensity and its sensitivity; (2) Differentiable Rendering, predominantly used in computer graphics and applied in numerous contexts; (3) Transport Model for Sensitivity, which conceptualizes sensitivity as a physical quantity with its own transport equations and boundary conditions, thereby facilitating engineering and physics analyses. We aim to enhance readers' ability to tackle sensitivity-related challenges by providing a comparative understanding of these three perspectives. We achieve this through a simplified one-dimensional radiative transfer case study, offering an accessible platform for comparing and classifying these approaches based on their theoretical underpinnings and practical application in Monte Carlo algorithms. • Introduces a classification of sensitivity estimation techniques into three distinct approaches, bridging concepts from statistical physics and computer graphics. • Intuitively employs a one-dimensional radiative transfer case study to compare these sensitivity estimation methods.
• Demonstrates these methods' practical and theoretical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
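The first approach named in the entry above (estimating an observable and its parameter sensitivity from the same Monte Carlo samples) can be illustrated on the simplest radiative-transfer observable: the transmittance of a homogeneous slab. This is a one-dimensional toy in the spirit of expectation differentiation, using a standard score-function weight, not the paper's formulation; the specific sigma and L values are assumptions.

```python
import numpy as np

def transmittance_and_sensitivity(sigma, L, n=200_000, seed=0):
    """Estimate T(sigma) = exp(-sigma*L) (slab transmittance) and its
    derivative dT/dsigma = -L*exp(-sigma*L) from the SAME samples.

    Free paths x are drawn from f(x) = sigma*exp(-sigma*x); the
    sensitivity weight is the score d/dsigma log f(x) = 1/sigma - x,
    so one vectorized pass yields both the value and its derivative.
    """
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0 / sigma, n)      # sampled free paths
    crossed = x > L                          # photon crosses the slab
    T = crossed.mean()
    dT = (crossed * (1.0 / sigma - x)).mean()
    return T, dT

T, dT = transmittance_and_sensitivity(sigma=2.0, L=0.5)
# analytic targets: exp(-1) ~ 0.3679 and -0.5*exp(-1) ~ -0.1839
```

A short integration check confirms the weight: the expectation of `crossed * (1/sigma - x)` over the exponential density is exactly -L·exp(-sigma·L).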
29. Enhancing 3D reconstruction of textureless indoor scenes with IndoReal multi-view stereo (MVS).
- Author
-
Wang, Tao and Gan, Vincent J.L.
- Subjects
- *
DIGITAL twins , *BUILT environment , *DEPTH perception , *COMPUTER vision , *COMPUTER graphics - Abstract
3D reconstruction plays a pivotal role in capturing the built environment's object shapes and appearances for diverse smart applications, such as indoor navigation and geometric digital twinning. Despite its significance, traditional Multi-View Stereo (MVS) techniques are ineffective in indoor environments, characterised by textureless walls, illumination variation, and other nuanced phenomena. Moreover, current learning-based MVS pipelines are often developed without considering indoor attributes and rely on costly ground truth data for performance optimisation. This paper presents the "IndoReal-MVS" dataset, a rich indoor-centric compilation reflecting real-world phenomena through advanced computer graphics. It also introduces unsupervised "IndoorMatchNet", synergising Feature Pyramid Network (FPN) and Pyramid Flowformer (PFF) for encoding complex indoor geometries. The pipeline proposes Multi-Scale Feature loss, Superpixel-based Normal Consistency and Depth Smoothness losses, designed for indoor geometric characteristics. Experiments showcase a 192% relative improvement over the baseline model at stringent error thresholds, advancing indoor 3D reconstruction tasks. • Synthesise real-world phenomena with IndoReal-MVS dataset for MVS training. • Enhance depth perception for indoor scenes with Twin-FPN, merging FPN and PFF. • Improve textureless area handling with Multi-Scale Feature Loss using semantic comparison. • Boost depth precision by integrating Superpixel-based Normal and Smoothness Loss. • Showcase IndoorMatchNet's improvements over baselines, highlighting the synergy of new methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. QR decomposition of dual matrices and its application.
- Author
-
Xu, Renjie, Wei, Tong, Wei, Yimin, and Xie, Pengpeng
- Subjects
- *
MATRIX decomposition , *KINEMATICS , *SYLVESTER matrix equations , *MATRICES (Mathematics) , *COMPUTER graphics - Abstract
Dual number matrix decompositions have played an important role in fields such as kinematics and computer graphics in recent years. In this paper, we present a QR decomposition algorithm for dual number matrices. When dealing with large-scale problems, we present the thin QR decomposition of dual number matrices, along with its algorithm with column pivoting. In numerical experiments, we discuss the suitability of different QR algorithms when confronted with various large-scale dual matrices, providing their respective domains of applicability. Finally, we employ the QR decomposition of dual matrices to compute the DMPGI, attaining results of higher precision. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
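The dual numbers underlying the dual matrices in the entry above obey a + εb with ε² = 0, so multiplication carries a product-rule term along automatically; dual-matrix decompositions such as the paper's QR generalise this arithmetic to matrices. The tiny class below is a generic illustration of that mechanism, not the paper's QR algorithm.

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0.  Products follow the
    product rule on the eps part, which is why evaluating a function
    at x + 1*eps yields the function value and its derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def __repr__(self):
        return f"{self.a} + {self.b}eps"

# f(x) = x*x + x evaluated at x = 3 + 1*eps gives f(3) and f'(3):
x = Dual(3.0, 1.0)
y = x * x + x
print(y)  # 12.0 + 7.0eps
```

In a dual matrix A = A_r + εA_d the same rule applies entrywise to matrix products, which is what a QR factorization Q·R of a dual matrix has to respect.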
31. Editing mesh sequences with varying connectivity.
- Author
-
Hácha, Filip, Dvořák, Jan, Káčereková, Zuzana, and Váša, Libor
- Subjects
- *
COMPUTER-generated imagery , *DEFORMATION of surfaces , *COMPUTER graphics , *QUATERNIONS , *TOPOLOGY - Abstract
Time-varying connectivity of triangle mesh sequences leads to substantial difficulties in their processing. Unlike editing sequences with constant connectivity, editing sequences with varying connectivity requires addressing the problem of temporal correspondence between the frames of the sequence. We present a method for time-consistent editing of triangle mesh sequences with varying connectivity using sparse temporal correspondence, which can be obtained using existing methods. Our method includes a deformation model based on the usage of the sparse temporal correspondence, which is suitable for the temporal propagation of user-specified deformations of the edited surface with respect to the shape and true topology of the surface while preserving the individual connectivity of each frame. Since there is no other method capable of comparable types of editing on time-varying meshes, we compare our method and the proposed deformation model with a baseline approach and demonstrate the benefits of our framework. [Display omitted] • This paper presents a method for editing mesh sequences with varying connectivity. • The proposed method exploits sparse temporal correspondences. • Dual quaternions are used to represent the transformations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. 3D shape descriptor design based on HKS and persistent homology with stability analysis.
- Author
-
He, Zitong, Zhuo, Peisheng, Lin, Hongwei, and Dai, Junfei
- Subjects
- *
COMPUTER-aided design , *COMPUTER graphics - Abstract
In recent years, with the rapid development of computer-aided design and computer graphics, a large number of 3D models have emerged, making it a challenge to quickly find models of interest. As a concise and informative representation of 3D models, shape descriptors are a key factor in achieving effective retrieval. In this paper, we propose a novel global descriptor for 3D models that incorporates both geometric and topological information. We refer to this descriptor as the persistent heat kernel signature descriptor (PHKS). Constructed by concatenating our isometry-invariant geometric descriptor with a topological descriptor, PHKS possesses high recognition ability while remaining insensitive to noise, and can be efficiently calculated. Retrieval experiments on 3D models from the benchmark datasets show considerable performance gains of the proposed method compared to other descriptors based on HKS and advanced topological descriptors. • A descriptor incorporating both geometric and topological information is proposed. • The stability of the proposed descriptor is proved. • The proposed descriptor outperforms state-of-the-art descriptors in shape retrieval. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
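The heat kernel signature that the PHKS descriptor above builds on is the standard quantity HKS(x, t) = Σᵢ exp(-λᵢ t) φᵢ(x)², computed from a Laplacian eigendecomposition. The sketch below evaluates it on a small graph Laplacian rather than a mesh Laplace-Beltrami operator, and does not include the paper's persistent-homology component; the chosen time values are assumptions.

```python
import numpy as np

def heat_kernel_signature(L, times):
    """Per-vertex heat kernel signature from a (graph) Laplacian L:
    HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)**2.
    Returns an array of shape (num_vertices, len(times))."""
    lam, phi = np.linalg.eigh(L)          # eigenvectors in columns
    return np.stack([(np.exp(-lam * t) * phi**2).sum(axis=1)
                     for t in times], axis=1)

# Toy example: path graph on 4 vertices.  The HKS is isometry-invariant,
# so the two endpoints (and the two middle vertices) get identical
# signatures under the graph's mirror symmetry.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
hks = heat_kernel_signature(L, times=[0.1, 1.0, 10.0])
```

Small t captures local geometry and large t global structure, which is why descriptors sample the signature at several diffusion times.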
33. Real-time collision detection between general SDFs.
- Author
-
Liu, Pengfei, Zhang, Yuqing, Wang, He, Yip, Milo K., Liu, Elvis S., and Jin, Xiaogang
- Subjects
- *
ANALYTIC functions , *VIRTUAL reality , *APPLICATION software , *COMPUTER graphics , *TEST methods - Abstract
Signed Distance Fields (SDFs) have found widespread utility in collision detection applications due to their superior query efficiency and ability to represent continuous geometries. However, little attention has been paid to calculating the intersection of two arbitrary SDFs. In this paper, we propose a novel, accurate, and real-time approach for SDF-based collision detection between two solids, both represented as SDFs. Our primary strategy entails using interval calculations and the SDF gradient to guide the search for intersection points within the geometry. For arbitrary objects, we take inspiration from existing collision detection pipelines and segment the two SDFs into multiple parts with bounding volumes. Once potential collisions between two parts are identified, our method quickly computes comprehensive intersection information such as penetration depth, contact points, and contact normals. Our method is general in that it accepts both continuous and discrete SDF representations. Experiment results show that our method can detect collisions in high-precision models in real time, highlighting its potential for a wide range of applications in computer graphics and virtual reality. • The first real-time and accurate general SDF-SDF collision detection method. • A novel method for testing the intersection of analytic distance functions. • An accurate method for estimating contact information for SDF-SDF collision response stages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
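The core query in the entry above (do two solids, each given only as an SDF, overlap?) can be sketched by descending max(f1, f2): any point where both fields are negative lies inside both solids. The gradient-descent search below is a simplified stand-in for the paper's interval-guided method, and the analytic sphere SDFs are assumptions for the demo.

```python
import numpy as np

def sphere_sdf(center, radius):
    """Analytic signed distance field of a sphere and its gradient."""
    c = np.asarray(center, float)
    f = lambda p: np.linalg.norm(p - c) - radius
    grad = lambda p: (p - c) / np.linalg.norm(p - c)
    return f, grad

def sdf_collide(f1, g1, f2, g2, seed, steps=200, lr=0.1):
    """Search for a point inside both solids by descending max(f1, f2)
    along the gradient of whichever field is currently larger.
    Returns (colliding, witness_point)."""
    p = np.asarray(seed, float)
    for _ in range(steps):
        v1, v2 = f1(p), f2(p)
        if max(v1, v2) < 0:          # p is interior to both solids
            return True, p
        p = p - lr * (g1(p) if v1 >= v2 else g2(p))
    return False, p

f1, g1 = sphere_sdf((0, 0, 0), 1.0)
f2, g2 = sphere_sdf((1.5, 0, 0), 1.0)   # overlapping spheres
hit, p = sdf_collide(f1, g1, f2, g2, seed=(3.0, 2.0, 0.0))
```

At a witness point, -f1(p) and -f2(p) bound the penetration depths and the SDF gradients give the contact normals, the contact information the paper computes once a collision is found.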
34. Identifying knowledge evolution in computer science from the perspective of academic genealogy.
- Author
-
Fu, Zhongmeng, Cao, Yuan, and Zhao, Yong
- Subjects
MENTORING ,COMPUTER science ,ARTIFICIAL intelligence ,SOFTWARE engineering ,COMPUTER graphics ,TRADITIONAL knowledge ,INFORMATION sharing - Abstract
Academic genealogy (AG) provides valuable insights into the transmission of knowledge from mentors to mentees, revealing the evolution of knowledge within the academic community. This study explores the intricate dynamics of knowledge evolution within academic genealogies, utilizing a dataset comprising 16,852 computer science researchers, 613,277 papers, and 11,988 mentorship relationships. By focusing on small-scale knowledge units, our analysis aims to uncover patterns of knowledge inheritance and mutation across different subfields of computer science and highlights several aspects of knowledge evolution in computer science. Firstly, computer science is characterized by strong mentorship ties, indicating the significance of knowledge transmission within the field. Additionally, there is a mix of foundational and developing areas, suggesting a field that is growing and diversifying rather than declining, as indicated by linear regression outcomes. Secondly, our research reveals a surge in collaborative knowledge exchange in computer science since 2000, with fields such as Computer-Communication Networks and Software Engineering leading in terms of output and impact. Furthermore, areas like Computer Graphics and Artificial Intelligence stand out for their depth and novelty. Thirdly, we categorize researchers into three types: roots, branches, and leaves, reflecting their role in knowledge transmission. Branch researchers tend to innovate, while leaf researchers show a combination of traditional knowledge uptake and new contributions, illustrating the dynamic flow of ideas within the field. Future research endeavors are encouraged to embrace larger datasets and further fortify our understanding of the topic. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. A content-oriented no-reference perceptual video quality assessment method for computer graphics animation videos.
- Author
-
Xian, Weizhi, Zhou, Mingliang, Fang, Bin, and Kwong, Sam
- Subjects
- *
COMPUTER-generated imagery , *COMPUTER graphics , *CONVOLUTIONAL neural networks , *VISUAL perception , *VIDEOS , *USER-generated content , *VIDEO compression - Abstract
In this paper, we propose a content-oriented no-reference (NR) perceptual video quality assessment (VQA) method for computer graphics (CG) animation videos. First, we extract features in terms of spatiotemporal information and its visual perception from the videos as inputs of our proposed artificial neural network-based VQA model. Second, to facilitate the video quality evaluation, we apply a convolutional neural network (CNN) in the VQA model to generate weight factors for the input features adaptively according to the different types of CG content in videos. Third, we build a subjective CG video quality database for validation of VQA metrics. Experiments demonstrated that our method achieved superior performance in terms of evaluating the quality of CG animation videos. Both the code and proposed database are publicly available at https://github.com/WeizhiXian/CGVQA. The corresponding newly established database is available at https://pan.baidu.com/s/1_P2ZNrLzJwZfG6xa6tKnDQ (password: cgvq). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. Humans are easily fooled by digital images.
- Author
-
Schetinger, Victor, Oliveira, Manuel M., da Silva, Roberto, and Carvalho, Tiago J.
- Subjects
- *
DIGITAL image processing , *DIGITIZATION , *ELECTRONIC data processing , *FORENSIC sciences , *COMPUTER graphics - Abstract
Digital images are everywhere, from social media to news and scientific papers. This paper describes an extensive user study to evaluate the ability of an average individual to spot edited images. By design, our study avoids lucky guesses. After observing an image, subjects were asked whether it was authentic. Whenever a subject indicated that an image had been altered, (s)he had to provide evidence to support the answer by pointing at the suspected region in the image. We collected 17,208 individual answers from 393 volunteers, using 177 images selected from public forensic databases. Our results indicate that the average individual is not good at distinguishing original from edited images, answering correctly on 58% of all images, and only identifying the modified ones 46.5% of the time. This performance is superior to random guessing, but poor compared to results achieved by computational techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
37. General discriminative optimization for point set registration.
- Author
-
Zhao, Yan, Tang, Wen, Feng, Jun, Wan, Taoruan, and Xi, Long
- Subjects
- *
COMPUTER vision , *RECORDING & registration , *POINT set theory , *COMPUTER graphics , *MATHEMATICAL optimization , *MACHINE learning , *HESSIAN matrices - Abstract
Point set registration has been actively studied in computer vision and graphics. Optimization algorithms are at the core of solving registration problems. Traditional optimization approaches are mainly based on the gradient of objective functions. The derivation of objective functions makes it challenging to find optimal solutions for complex optimization models, especially for those applications where accuracy is critical. Learning-based optimization is a novel approach to address this problem, which learns the gradient direction from datasets. However, many learning-based optimization algorithms learn gradient directions via a single feature extracted from the dataset, which will cause the updating direction to be vulnerable to perturbations around the data, thus falling into a bad stationary point. This paper proposes the General Discriminative Optimization (GDO) method that updates a gradient path automatically through the trade-off among contributions of different features on updating gradients. We illustrate the benefits of GDO with tasks of 3D point set registrations and show that GDO outperforms the state-of-the-art registration methods in terms of accuracy and robustness to perturbations. [Display omitted] • Cast the point sets registration as a learning-based optimization problem. • Utilize features of point sets to update gradients without the Hessian matrix. • Collaborate the different features to reduce the influence of perturbations. • Outperform the other advanced registration methods on accuracy and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. The relaxed implicit randomized algebraic reconstruction technique for curve and surface reconstruction.
- Author
-
Wang, Huidi, Yuan, Lele, Xiong, Jing-Jing, and Mao, Jun
- Subjects
- *
SURFACE reconstruction , *COMPUTER graphics , *APPLICATION software , *CURVES - Abstract
Implicit curve and surface reconstruction has wide applications in computer graphics and has attracted great attention from many researchers. In this paper, we propose the relaxed implicit randomized algebraic reconstruction technique (RIR-ART) for curve and surface reconstruction. The RIR-ART method generates a sequence of curves and surfaces by adjusting the control coefficients iteratively with relaxation parameters. It is proved that, with suitable relaxation parameters, the sequence of curves and surfaces generated by the RIR-ART method converges to the least-norm result in terms of the mean square error. The RIR-ART method effectively eliminates extra zero-level sets during its iterative process and converges faster than the implicit progressive-iterative approximation method (Hamza et al. 2020). Numerical examples demonstrate the efficiency and effectiveness of the proposed method. • The RIR-ART method converges in terms of the mean square error. • Extra zero-level sets are effectively eliminated by the RIR-ART method. • The RIR-ART method converges faster than the I-PIA method for reconstruction. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
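The randomized, relaxed algebraic reconstruction technique (ART) underlying the abstract above is a Kaczmarz-type iteration. The sketch below shows that generic iteration for a plain linear system A x = b, not the authors' implicit curve/surface pipeline; the parameter names (`relax`, `iters`) are illustrative.

```python
import random

def relaxed_randomized_art(A, b, relax=1.0, iters=2000, seed=0):
    """Relaxed randomized ART (Kaczmarz-type) iteration for A x = b.
    Each step picks a random row and projects the current estimate onto
    that row's hyperplane, scaled by the relaxation parameter in (0, 2)."""
    rng = random.Random(seed)
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(len(A))
        row = A[i]
        residual = b[i] - sum(a * xi for a, xi in zip(row, x))
        norm2 = sum(a * a for a in row)
        step = relax * residual / norm2
        x = [xi + step * a for xi, a in zip(x, row)]
    return x
```

For a consistent system the iterates converge to a solution; the relaxation parameter trades per-step progress against stability, which is the knob the RIR-ART convergence analysis is concerned with.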
39. Feature-based clustered geometry for interpolated Ray-casting.
- Author
-
González García, Francisco, Martin, Ignacio, and Patow, Gustavo
- Subjects
- *
TEXTURE mapping , *INFORMATION theory , *COMPUTER graphics , *GEOMETRY , *DATA structures - Abstract
Acceleration techniques for rendering in general, and ray-casting in particular, have been the subject of much research in computer graphics. Most efforts have focused on new data structures for efficient ray/scene traversal and intersection. In this paper, we propose an acceleration technique that approximates rendering and is built around a new feature-based clustering approach. The technique starts by preprocessing the scene, grouping elements according to their features into a set of channels using an information-theoretic approach. At run-time, a rendering strategy uses that clustering information to reconstruct the final image, deciding which areas can exploit feature coherence and thus be interpolated, and which areas require more involved calculations. This process starts with a low-resolution render that is iteratively refined up to the desired resolution by reusing previously computed pixels. Our experimental results show a significant speedup of an order of magnitude, depending on the complexity of the per-pixel calculations, the screen size of the objects, and the number of clusters. Rendering quality and speed depend directly on the number of clusters and the number of steps performed during the reconstruction procedure, both of which can easily be set by the user. Our findings show that feature-based clustering can significantly improve rendering speed if samples are chosen to enable interpolation of smooth regions. Our technique thus accelerates a range of popular and costly techniques, from texture mapping up to complex ambient occlusion and soft and hard shadow calculations, and it can even be used in conjunction with more traditional acceleration methods. [Display omitted] • Mesh clustering based on information-theory tools to exploit feature similarity. • User-definable parameters, decoupled from the actual rendering process. • Multi-pass rendering based on reuse and interpolation of previous results. • Controlling mechanism to guarantee ray-casting as a lower bound on rendering speed. • Our technique accommodates both static and animated scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
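The coarse-to-fine reconstruction loop described above can be sketched generically: shade a sparse grid exactly, then fill the remaining pixels by bilinear interpolation wherever the enclosing coarse samples share the pixel's feature cluster, falling back to exact shading at cluster boundaries. This is an illustrative sketch under simplifying assumptions (a square image, user-supplied `cluster_id` and `shade` callbacks), not the authors' implementation.

```python
def interpolated_raycast(size, step, cluster_id, shade):
    """Shade a size x size image: exact shading on a coarse grid, then fill
    remaining pixels by bilinear interpolation wherever the four enclosing
    coarse samples share the pixel's feature cluster.
    `size` must be a multiple of `step` plus 1 so the grid covers the borders."""
    img = {}
    # Pass 1: exact (expensive) shading on the coarse grid only.
    for y in range(0, size, step):
        for x in range(0, size, step):
            img[(x, y)] = shade(x, y)
    # Pass 2: reconstruct the remaining pixels from the coarse samples.
    for y in range(size):
        for x in range(size):
            if (x, y) in img:
                continue
            x0 = min((x // step) * step, size - 1 - step)
            y0 = min((y // step) * step, size - 1 - step)
            x1, y1 = x0 + step, y0 + step
            corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
            if all(cluster_id(cx, cy) == cluster_id(x, y) for cx, cy in corners):
                # Coherent feature cluster: cheap bilinear interpolation.
                tx, ty = (x - x0) / step, (y - y0) / step
                top = img[(x0, y0)] * (1 - tx) + img[(x1, y0)] * tx
                bot = img[(x0, y1)] * (1 - tx) + img[(x1, y1)] * tx
                img[(x, y)] = top * (1 - ty) + bot * ty
            else:
                # Feature boundary: fall back to an exact ray-cast.
                img[(x, y)] = shade(x, y)
    return img
```

When `shade` is expensive (ambient occlusion, soft shadows), the cost drops toward one exact evaluation per coarse cell in regions where the clustering reports coherent features.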
40. Spatially and color consistent environment lighting estimation using deep neural networks for mixed reality.
- Author
-
Dorta Marques, Bruno Augusto, Gonzalez Clua, Esteban Walter, Montenegro, Anselmo Antunes, and Nader Vasconcelos, Cristina
- Subjects
- *
MIXED reality , *SPHERICAL harmonics , *COMPUTER graphics , *ESTIMATION theory , *MACHINE learning , *PROBLEM solving - Abstract
The representation of consistent mixed reality (XR) environments requires adequate composition of real and virtual illumination in real time. Estimating the lighting of a real scenario is still a challenge. Due to the ill-posed nature of the problem, classical inverse-rendering techniques tackle it only for simple lighting setups, and those assumptions do not satisfy the current state of the art in computer graphics and XR applications. While many recent works solve the problem using machine learning techniques to estimate the environment light and the scene's materials, most of them are limited by geometry assumptions or require prior knowledge. This paper presents a CNN-based model to estimate complex lighting for mixed reality environments with no previous information about the scene. We model the environment illumination using a set of spherical harmonics (SH) lighting coefficients, capable of efficiently representing area lighting. We propose a new CNN architecture that takes an RGB image as input and estimates the environment lighting in real time. Unlike previous CNN-based lighting estimation methods, we use a highly optimized deep neural network architecture with a reduced number of parameters that can learn highly complex lighting scenarios from real-world high-dynamic-range (HDR) environment images. Our experiments show that the CNN architecture predicts the environment lighting with an average mean squared error (MSE) of 7.85 × 10⁻⁴ when comparing SH lighting coefficients. We validate our model in a variety of mixed reality scenarios, and we present qualitative results comparing relightings of real-world scenes. [Display omitted] • Automatic end-to-end method to estimate the environment lighting in XR applications. • A CNN architecture that learns a latent space of the environment lighting. • A methodology to generate egocentric mixed-reality views from HDR panoramas. • Real-time lighting estimation that makes no assumptions about the XR scene. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
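Second-order SH lighting, the representation the abstract above uses, encodes an environment map with just nine coefficients per color channel. Below is a minimal sketch of that representation, using the standard real spherical-harmonics basis for bands 0–2; it is not the paper's CNN or code.

```python
def sh_basis(d):
    """First 9 real spherical-harmonics basis values (bands 0-2)
    for a unit direction d = (x, y, z)."""
    x, y, z = d
    return [
        0.282095,                        # Y_0^0
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ]

def reconstruct_radiance(coeffs, d):
    """Environment radiance in direction d from 9 SH coefficients (one channel)."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```

A network like the one described would regress the nine `coeffs` per channel from an RGB view; `reconstruct_radiance` then shades virtual objects with the estimated environment light.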
41. Improved undamaged-to-damaged acceleration response translation for Structural Health Monitoring.
- Author
-
Luleci, Furkan, Avci, Onur, and Catbas, F. Necati
- Subjects
- *
STRUCTURAL health monitoring , *SINGULAR value decomposition , *COMPUTER vision , *GENERATIVE adversarial networks , *COMPUTER graphics - Abstract
Unpaired image-to-image translation is a popular research topic in computer vision and graphics. Recently, the authors of this paper took a similar approach and translated the domain of acceleration responses collected from a steel grandstand structure: the undamaged response is translated to damaged, and the damaged response to undamaged. For that, a variant of the CycleGAN model is trained with undamaged (bolt tightened at the joint) and damaged (bolt loosened at the joint) responses from a single joint in the structure. However, the success of the domain translation on the test joints was very limited. This study therefore investigates improvements to the model and the training procedure for further accuracy. First, the model receives a more extensive training procedure to increase its domain knowledge; during training, a novel signal coherence-based index accounts for the similarity between the frequency content of the original and the translated data. Second, Gated Linear Units, skip connections, and the Mish activation function are used to minimize gradient loss and to learn broader features in the data. Third, the generator's total loss function is supplemented with a new frequency domain-based loss to better capture the frequency content of the data. Fourth, random decaying noise is added to the inputs for better generalization on the test data. Last, the model is evaluated using modal parameters such as natural frequencies, damping ratios, and the singular value decomposition of the estimated spectral densities. The improvements presented in this study achieve a successful domain translation of acceleration responses for the tested joints compared to the past study. The findings show that domain translation can be advantageous in Structural Health Monitoring applications, for example by providing access to the damaged response of a structure while it is still in pristine condition, or to the undamaged response while it is in an unhealthy condition.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. Optimizing vehicle dynamics co-simulation performance by introducing mesoscopic traffic simulation.
- Author
-
Varga, Balázs, Doba, Dániel, and Tettamanti, Tamás
- Subjects
- *
TRAFFIC assignment , *MICRO air vehicles , *COMPUTER graphics , *VEHICLE models , *TRAFFIC lanes , *TRAFFIC flow , *RESIDENTIAL areas - Abstract
Microscopic traffic simulation is often used in conjunction with vehicle dynamics simulation to test cooperative or perception-based driver assistance functions. However, the visualization and interaction of a large number of swarm vehicles is computationally burdensome, limiting the size of scenarios. To remedy this problem, this paper introduces a mesoscopic traffic model, namely an extension of the shockwave profile model to road networks, that handles traffic as continuous flows of vehicles. Adopting the idea of level of detail from computer graphics, swarm vehicles are modeled in less detail the farther they are from the EGO vehicle. These levels are defined using the classical categorization of traffic modeling: macroscopic, mesoscopic, and microscopic. The macroscopic traffic model is responsible for traffic demand and traffic assignment. The proposed mesoscopic model captures the fluctuating nature of traffic at the lane level. Closer to the EGO vehicle, microscopic traffic simulation is employed, while the EGO vehicle itself is modeled in full detail, including vehicle dynamics. The 3D rendering of the simulation is performed by the vehicle dynamics simulator. The challenge in the proposed methodology is transitioning between the mesoscopic and microscopic models, i.e., selecting the boundary and spawning/destroying vehicle agents. The paper addresses this challenge with an algorithm of linear time complexity with respect to the vehicle number. In practice, a dynamic downscaling of the microscopic simulation to the mesoscopic level is realized outside the vicinity of the EGO vehicle. The proposed methodology is generic and can be adapted to most existing vehicle dynamics and microscopic traffic simulator software. The solution is tested with SUMO as the microsimulation and Carla as the vehicle dynamics simulation through a simple path-following test case in two scenarios: a congested residential area and a complex scenario with both a highway section and an urban area. Simulation results suggest that simulation performance can be improved by 200–500% while retaining modeling accuracy, compared to the case when only microscopic simulation is used. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
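The level-of-detail selection the abstract above borrows from computer graphics reduces, at its core, to classifying each swarm vehicle by its distance to the EGO vehicle. The sketch below illustrates that selection; the radii are illustrative thresholds, not values from the paper.

```python
def lod_for_vehicle(vehicle_pos, ego_pos, micro_radius=150.0, meso_radius=1000.0):
    """Pick the simulation fidelity for a swarm vehicle from its distance
    to the EGO vehicle. The radii are assumed example thresholds."""
    dx = vehicle_pos[0] - ego_pos[0]
    dy = vehicle_pos[1] - ego_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= micro_radius:
        return "microscopic"   # individual agent with a car-following model
    if dist <= meso_radius:
        return "mesoscopic"    # aggregated flow on the lane segment
    return "macroscopic"       # demand/assignment level only
```

The hard part the paper addresses sits on top of this classifier: spawning and destroying agents consistently when a vehicle crosses the mesoscopic/microscopic boundary, in time linear in the vehicle count.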
43. Spatio-temporal filtered motion DAGs for path-tracing.
- Author
-
Martinek, Magdalena, Thiemann, Philip, and Stamminger, Marc
- Subjects
- *
RAY tracing , *DATA structures , *COMPUTATIONAL geometry , *COMPUTER graphics , *DESIGN techniques , *MOTION - Abstract
[Display omitted] Motion blur is an important effect in photo-realistic rendering. Distribution ray tracing can simulate motion blur very well by integrating light over both the spatial and the temporal domain. However, extending the problem along the temporal dimension entails many challenges, particularly in cinematic multi-bounce path tracing of complex scenes, where heavy-weight geometry with complex lighting and even offscreen elements contribute to the final image. In this paper, we propose the Motion DAG (Directed Acyclic Graph), a novel data structure that filters an entire animation sequence of an object in both the spatial and the temporal domain. Motion DAGs interleave a temporal interval binary tree, which filters time-consecutive data, with a sparse voxel octree (SVO), which simplifies spatially nearby data. Motion DAGs are generated in a pre-process and can easily be integrated into a conventional physically based path tracer. Our technique targets motion blur of small objects, where coarse representations are sufficient. In this scenario, our results show that it is possible to significantly reduce both memory consumption and render time. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. On visibility and empty-region graphs.
- Author
-
Katz, Sagi and Tal, Ayellet
- Subjects
- *
COMPUTER graphics , *GEOMETRIC modeling , *COMPUTATIONAL geometry , *GEOMETRIC vertices , *INTERSECTION theory - Abstract
Empty-region graphs are well-studied in Computer Graphics, Geometric Modeling, and Computational Geometry, as well as in Robotics and Computer Vision. The vertices of these graphs are points in space, and two vertices are connected by an arc if there exists an empty region of a certain shape and size between them. In most of the graphs discussed in the literature, the empty region is assumed to be a circle or the union/intersection of circles. In this paper we propose a new type of empty-region graph: the γ-visibility graph. This graph can accommodate a variety of shapes of empty regions and may be defined in any dimension. Interestingly, we show that commonly-used shapes are special cases of our graph; in this sense, our graph generalizes some empty-region graphs. Though this paper is mostly theoretical, it may have practical implications: the numerous applications that make use of empty-region graphs would be able to select the shape that best suits the problem at hand. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
45. Enriching programming content semantics: An evaluation of visual analytics approach.
- Author
-
Hsiao, I-Han and Lin, Yi-Ling
- Subjects
- *
ABSTRACTING & indexing services , *COMPUTER graphics , *LEARNING strategies , *PROGRAMMING languages , *SEMANTICS - Abstract
In this work, we present an intelligent classroom orchestration technology to capture semantic learning analytics from paper-based programming exams. We design and study an innovative visual analytics system, EduAnalysis, to support the extraction and analysis of programming content semantics. EduAnalysis indexes each programming exam question to a set of concepts based on an ontology. It utilizes an automatic indexing algorithm and interactive visualization interfaces to establish the associations between concepts and questions. We collected indexing ground truths for the targeted set from teachers and from experts in the crowd. We found that the system extracted significantly more, and more diverse, concepts from exams and achieved high coherence within exams. We also discovered that the indexing was especially effective for complex content. Overall, the semantic enriching approach for programming problems reveals systematic learning analytics from paper exams. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
46. A systematic approach and tool support for GSN-based safety case assessment.
- Author
-
Luo, Yaping, van den Brand, Mark, Li, Zhuoao, and Saberi, Arash Khabbaz
- Subjects
- *
COMPUTER security , *SYSTEMS engineering , *COMPUTER graphics , *INDUSTRIAL applications , *PROTOTYPES - Abstract
Context. In safety-critical domains, safety cases are widely used to demonstrate the safety of systems. A safety case is an argumentation for showing confidence in the claimed safety assurance of a system, and it should be comprehensible and well-structured. Typically, safety cases can be represented in plain text or graphically, for example with the Goal Structuring Notation (GSN). After safety cases are developed, they need to be assessed to check their quality. Different roles are involved in this process: safety case developers and safety case assessors. Objective. During the safety case assessment process, safety case assessors are required to evaluate the validity of a safety case and discuss their judgement with safety case developers. Currently, neither the outcome of a safety case assessment nor the way of providing judgement is systematically supported, which may cause inconsistent outcomes and wrong judgements. A systematic process for safety case assessment is therefore required; moreover, to support safety case assessment efficiently and effectively, tool support is needed. Recently, a number of safety case editors have been developed to support safety case development with GSN. These editors support the development and management of safety cases, but only a few offer limited functionality for safety case assessment, which is one of the crucial phases of the safety assurance process. This motivated us to develop a tool to support safety case assessment. Method. In this paper, we first identify two research questions, resulting in two directions for further study: formalising the safety case assessment process and developing safety case tooling. We carried out a study of the state of the art in safety case assessment and safety case tooling. Based on our findings, we formalise the assessment process by identifying the typical steps in safety case assessment. This assessment process can guide assessors from a general level to a detailed level and provides reliable and understandable feedback to developers. Finally, two industrial case studies are carried out to validate the proposed assessment process. Results. To support the proposed process, a prototype tool for safety case assessment was developed. A number of required features are implemented in the prototype; among others, it provides a complete and self-contained evaluation system to measure the quality of a safety case. Moreover, the case-study validations show potential for facilitating safety assessment in practice. Conclusions. In this paper, two research questions are identified and their solutions are discussed. We propose a systematic approach for safety case assessment, develop tool support for demonstration, and carry out two industrial case studies to show the effectiveness of the proposed process. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
47. Extending Perfect Spatial Hashing to Index Tuple-based Graphs Representing Super Carbon Nanotubes.
- Author
-
Burger, Michael, Nguyen, Giang Nam, and Bischof, Christian
- Subjects
REPRESENTATIONS of graphs ,CARBON nanotubes ,COMPUTER graphics ,PATHS & cycles in graph theory ,CENTRAL processing units ,COMPUTER algorithms - Abstract
In this paper, we demonstrate how to extend perfect spatial hashing (PSH) to hash multidimensional scientific data. As a use case, we employ the problem domain of indexing nodes in a graph that represents Super Carbon Nanotubes (SCNTs). The goal of PSH is to hash multidimensional data without collisions. Since PSH originates from computer graphics research, its principles and methods have only been tested on 2- and 3-dimensional problems; in our case, we need to hash up to 28 dimensions. In contrast to the original applications of PSH, we do not target GPUs but an efficient CPU implementation. This paper therefore highlights the extensions to the original algorithm that make it suitable for higher dimensions. Comparing the compression and performance results of the new PSH-based graphs with a structure-tailored custom data structure in our parallelized SCNT simulation software, we find that PSH in some cases achieves better compression by a factor of 1.7 while increasing the total runtime by only a few percent. In particular, after our extension, PSH can also be employed to index sparse multidimensional scientific data from other domains, where it can replace additional index structures such as k-d trees or R-trees. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
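The core PSH idea the abstract above builds on, a hash table plus a small offset table that together resolve all collisions, can be sketched in 2D. Below is a greedy toy construction in the spirit of Lefebvre and Hoppe's perfect spatial hashing, not the authors' 28-dimensional CPU variant; the table sides `m` and `r` are illustrative.

```python
import itertools

def build_psh(points, m, r):
    """Greedy toy construction of a perfect spatial hash for 2D integer points.
    m: side length of the hash table; r: side length of the offset table.
    A point p lands in slot ((p + offsets[p mod r]) mod m), with no collisions."""
    # Group points by offset-table bucket; fill the largest buckets first.
    buckets = {}
    for p in points:
        buckets.setdefault((p[0] % r, p[1] % r), []).append(p)
    table, offsets = {}, {}
    for bucket, pts in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
        for off in itertools.product(range(m), repeat=2):
            slots = [((p[0] + off[0]) % m, (p[1] + off[1]) % m) for p in pts]
            if len(set(slots)) == len(pts) and not any(s in table for s in slots):
                for p, s in zip(pts, slots):
                    table[s] = p
                offsets[bucket] = off
                break
        else:
            raise ValueError("no collision-free offset found; grow m or r")
    return table, offsets

def lookup(p, table, offsets, m, r):
    """True iff p is one of the indexed points."""
    off = offsets.get((p[0] % r, p[1] % r))
    if off is None:
        return False
    return table.get(((p[0] + off[0]) % m, (p[1] + off[1]) % m)) == p
```

Every lookup is a constant number of modulo operations and two table reads, which is what makes PSH attractive compared to tree-shaped index structures; the extension described in the paper generalizes the bucket/offset machinery beyond 2 and 3 dimensions.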
48. Interactive inspection of complex multi-object industrial assemblies.
- Author
-
Argudo, O., Besora, I., Brunet, P., Creus, C., Hermosilla, P., Navazo, I., and Vinacua, À.
- Subjects
- *
VIRTUAL prototypes , *NAVAL architecture , *COMPUTER graphics , *INDUSTRIAL applications , *RELIEF models , *PARTICIPATORY design - Abstract
The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications such as the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during interactive inspection sessions; this, however, is not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of a rendering front in multiresolution trees, its properties, and the algorithms that construct the hierarchy and render it efficiently, applied to very complex CAD models so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models that uses a rendering budget and supports the selection of individual objects and sets of objects, displacement of the selected objects, and real-time collision detection during these displacements. Our solution, based on an analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models, and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
49. Unsupervised stereoscopic image retargeting via view synthesis and stereo cycle consistency losses.
- Author
-
Fan, Xiaoting, Lei, Jianjun, Liang, Jie, Fang, Yuming, Cao, Xiaochun, and Ling, Nam
- Subjects
- *
COMPUTER graphics , *STEREO vision (Computer science) , *SURETYSHIP & guaranty - Abstract
Stereoscopic image retargeting aims to manipulate stereoscopic images to fit devices with different resolutions and prescribed aspect ratios. With the development of various types of three-dimensional (3D) displays, stereoscopic image retargeting has become increasingly popular in the field of computer graphics. In this paper, we propose an unsupervised stereoscopic image retargeting network (USIR-Net) to address the problem of stereoscopic image retargeting without label information. By exploring the inter-view correlation and disparity relationship of stereoscopic images, two unsupervised losses are developed to guide the learning of the retargeting model. First, in view of the inter-view correlation, a view synthesis loss is proposed to guarantee the generation of high-quality stereoscopic images with accurate inter-view relationships. Second, by exploiting the consistency of stereoscopic images before and after retargeting, a stereo cycle consistency loss, consisting of a content consistency term and a disparity consistency term, is developed to preserve structure information and prevent binocular disparity inconsistency. Quantitative and qualitative experimental results demonstrate that the proposed method achieves superior performance compared with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
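The stereo cycle consistency idea in the abstract above can be illustrated with a toy computation: retargeting and then inverting the retargeting should reproduce both views (content term) and their disparity map (disparity term). The function below is a generic sketch on flattened arrays, with illustrative names and weighting; it is not the USIR-Net loss.

```python
def l1(a, b):
    """Mean absolute difference between two equal-length flattened tensors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def stereo_cycle_losses(left, right, left_cycled, right_cycled,
                        disp, disp_cycled, lam_disp=1.0):
    """Content term: each view should survive the retarget-and-invert cycle.
    Disparity term: the disparity map should survive the same round trip.
    `lam_disp` weights the disparity term (an assumed hyperparameter)."""
    content = l1(left, left_cycled) + l1(right, right_cycled)
    disparity = l1(disp, disp_cycled)
    return content + lam_disp * disparity
```

Because both terms compare the input with its own round-trip reconstruction, no ground-truth retargeted images are needed, which is what makes the training unsupervised.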
50. Knowledge-Based Management of Virtual Training Scenarios.
- Author
-
Flotyński, Jakub, Walczak, Krzysztof, Sobociński, Paweł, and Gałązkiewicz, Adam
- Subjects
COMPUTER graphics ,SEMANTIC Web ,KNOWLEDGE representation (Information theory) ,COMPUTER programming ,INFORMATION technology ,ONTOLOGIES (Information retrieval) ,VIRTUAL reality - Abstract
Virtual reality (VR) is gaining increasing attention as a method of implementing training systems in different domains, in particular when real training is potentially dangerous for the trainees or the environment, or requires expensive equipment. The essential element of professional training is domain-specific knowledge, which can be represented using the semantic web approach. This enables reasoning as well as complex queries against the representation of training scenarios, which can be valuable for teaching purposes. However, the available methods and tools for creating VR training systems do not use semantic knowledge representation: the creation, modification, and management of training scenarios currently require skills in programming and computer graphics, and are hence unavailable to domain experts without expertise in IT. In this paper, we propose an ontology-based representation and a method of modeling VR training scenarios. In our approach, trainees' activities and potential mistakes, as well as equipment and its possible errors, are represented using domain knowledge understandable to domain experts. We illustrate the approach by modeling VR training scenarios for electrical operators of high-voltage installations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF