19 results for "Ken Anjyo"
Search Results
2. Optimal and interactive keyframe selection for motion capture
- Author
-
Jaewoo Seo, Ken Anjyo, Richard D. Roberts, Yeongho Seol, and John P. Lewis
- Subjects
Computer science ,Interface (Java) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Motion capture ,motion editing ,Motion (physics) ,lcsh:QA75.5-76.95 ,Computer graphics ,Set (abstract data type) ,Artificial Intelligence ,Selection (linguistics) ,0202 electrical engineering, electronic engineering, information engineering ,motion capture ,Computer vision ,Representation (mathematics) ,ComputingMethodologies_COMPUTERGRAPHICS ,dynamic programming ,business.industry ,020207 software engineering ,Animation ,keyframe animation ,Computer Graphics and Computer-Aided Design ,Dynamic programming ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,lcsh:Electronic computers. Computer science ,business - Abstract
Motion capture is increasingly used in games and movies, but often requires editing before it can be used. The motion may need to be adjusted to correctly interact with virtual objects or to fix problems that result from mapping the motion to a character of a different size; beyond such technical requirements, directors can also request stylistic changes. Unfortunately, editing is laborious because of the low-level representation of the data. While existing motion editing methods accomplish modest changes, larger edits can require the artist to “re-animate” the motion by manually selecting a subset of the frames as keyframes. In this paper, we automatically find sets of frames to serve as keyframes for editing the motion. We formulate the problem of selecting an optimal set of keyframes as a shortest-path problem, and solve it efficiently using dynamic programming. We create a new simplified animation by interpolating the found keyframes using a naive curve fitting technique. Our algorithm can simplify motion capture to around 10% of the original number of frames while retaining most of its detail. By simplifying animation with our algorithm, we realize a new approach to motion editing and stylization founded on the time-tested keyframe interface. We present results that show our algorithm outperforms both research algorithms and a leading commercial tool.
- Published
- 2019
- Full Text
- View/download PDF
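The shortest-path formulation described in the abstract above can be sketched in a few lines: choose the fewest keyframes such that linear interpolation between consecutive keyframes stays within a tolerance of the original curve. This is a simplified 1D illustration under assumed simplifications (the paper optimizes full motion-capture data with a curve-fitting technique, not this toy max-error metric); all function names are hypothetical.

```python
def interp_error(curve, i, j):
    """Max abs error when frames i+1..j-1 are replaced by linear
    interpolation between keyframes i and j."""
    err = 0.0
    for k in range(i + 1, j):
        t = (k - i) / (j - i)
        approx = (1 - t) * curve[i] + t * curve[j]
        err = max(err, abs(curve[k] - approx))
    return err

def select_keyframes(curve, tol):
    """Shortest-path / dynamic-programming keyframe selection:
    cost[j] = fewest keyframes covering frames 0..j such that every
    interpolated segment stays within `tol` of the original curve."""
    n = len(curve)
    cost = [float('inf')] * n
    back = [0] * n
    cost[0] = 1
    for j in range(1, n):
        for i in range(j):
            if cost[i] + 1 < cost[j] and interp_error(curve, i, j) <= tol:
                cost[j] = cost[i] + 1
                back[j] = i
    keys = [n - 1]          # recover the path by backtracking
    while keys[-1] != 0:
        keys.append(back[keys[-1]])
    return keys[::-1]
```

For a triangle-wave curve, a zero tolerance keeps the peak as a keyframe, while a loose tolerance collapses the motion to its endpoints.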
3. Deformation transfer survey
- Author
-
Rafael Kuffner dos Anjos, Akinobu Maejima, Ken Anjyo, and Richard D. Roberts
- Subjects
FOS: Computer and information sciences ,Engineering drawing ,Artificial Intelligence and Image Processing ,Computer science ,General Engineering ,020207 software engineering ,02 engineering and technology ,Animation ,Deformation (meteorology) ,Reuse ,Computer Graphics and Computer-Aided Design ,Competitive advantage ,GeneralLiterature_MISCELLANEOUS ,Human-Computer Interaction ,Computer Software ,Work (electrical) ,Transfer (computing) ,Retargeting ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer animation ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Deformation transfer is a type of retargeting method that operates directly on the mesh and, by doing so, enables reuse of animation without setting up character rigs and a mapping between the source and target geometries. Deformation transfer can potentially reduce the costs of animation and give studios a competitive edge when keeping up with the latest computer animation technology. Unfortunately, deformation transfer has limitations and is yet to become standard practice in the industry. This survey starts by introducing Sumner and Popović’s [18] seminal work and highlights key issues for industry settings. We then review related work in sections organized by these key issues. After surveying related work, we discuss how their advances open the door to several practical applications of deformation transfer. To conclude, we highlight areas of future work.
- Published
- 2021
4. Interactive Video Completion
- Author
-
Keita Noda, Ken Anjyo, Makoto Okabe, and Yoshinori Dobashi
- Subjects
FOS: Computer and information sciences ,Artificial Intelligence and Image Processing ,Computer science ,business.industry ,Iterative method ,Interactive video ,90699 Electrical and Electronic Engineering not elsewhere classified ,Graphics processing unit ,Optical flow ,020207 software engineering ,02 engineering and technology ,Image-based modeling and rendering ,Computer Graphics and Computer-Aided Design ,Task (project management) ,Term (time) ,0202 electrical engineering, electronic engineering, information engineering ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer vision ,Artificial intelligence ,business ,Software - Abstract
We propose an interactive video completion method aiming for practical use in a digital production workplace. The results of earlier automatic solutions often require a considerable amount of manual modification to make them usable in practice. To reduce such laborious work, our method offers an efficient editing tool. Our iterative algorithm estimates the flow fields and colors in space-time holes in the video. As in earlier approaches, our algorithm uses an $L^1$ data term to estimate flow fields. However, we employ a novel $L^2$ data term to estimate temporally coherent color transitions. Our graphics processing unit implementation enables the user to interactively complete a video by drawing holes, immediately removing objects from the video. In addition, our method successfully interpolates sparse modifications initialized by the designer. According to our subjective evaluation, the videos completed with our method look significantly better than those produced by other state-of-the-art approaches.
- Published
- 2021
- Full Text
- View/download PDF
5. Shading Rig: Dynamic Art-directable Stylised Shading for 3D Characters
- Author
-
Taehyun Rhee, Lohit Petikam, and Ken Anjyo
- Subjects
FOS: Computer and information sciences ,business.product_category ,Cel shading ,Artificial Intelligence and Image Processing ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Animation techniques ,Computer Graphics and Computer-Aided Design ,Popularity ,Cel ,GeneralLiterature_MISCELLANEOUS ,Style (visual arts) ,Computer graphics (images) ,Shading ,business ,ComputingMethodologies_COMPUTERGRAPHICS ,Information Systems - Abstract
Despite the popularity of three-dimensional (3D) animation techniques, the style of 2D cel animation is seeing increased use in games and interactive applications. However, conventional 3D toon shading frequently requires manual editing to clean up undesired shadows or add stylistic details based on art direction. This editing is impractical for the frame-by-frame work of cartoon feature-film post-production. For interactive stylised media and games, post-production is unavailable due to real-time constraints, so art direction must be preserved automatically. For these reasons, artists often resort to mesh and texture edits to mitigate undesired shadows typical of toon shaders. Such edits allow real-time rendering but are limited in resolution and animation quality, and lack detail control for stylised shadow design. In our framework, artists build a “shading rig,” a collection of these edits that allows them to animate toon shading. Artists pre-animate the shading rig under changing lighting to dynamically preserve artistic intent in a live application, without manual intervention. We show our method preserves continuous motion and shape interpolation with fewer keyframes than previous work. Our shading shape interpolation is computationally cheaper than state-of-the-art image interpolation techniques. We achieve these improvements while preserving vector-quality rendering, without resorting to either high texture resolution or mesh density.
- Published
- 2021
- Full Text
- View/download PDF
6. Animating pictures of water scenes using video retrieval
- Author
-
Makoto Okabe, Yoshinori Dobashi, and Ken Anjyo
- Subjects
business.industry ,Computer science ,Interactive design ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,020207 software engineering ,02 engineering and technology ,Animation ,Computer Graphics and Computer-Aided Design ,Computer graphics ,Alpha (programming language) ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Computer facial animation ,Computer animation ,ComputingMethodologies_COMPUTERGRAPHICS ,Texture synthesis - Abstract
We present a system to quickly and easily create an animation of a water scene from a single image. Our method relies on a database of water-scene videos and a video retrieval technique. Given an input image, alpha masks specifying regions of interest, and sketches specifying flow directions, our system first retrieves appropriate video candidates from the database and creates candidate animations for each region of interest by compositing the input image with the retrieved videos; this process takes less than one minute by taking advantage of parallel distributed processing. Our system then allows the user to interactively control the speed of the desired animation and select the most appropriate one. Once animations have been selected for all the regions, the resulting animation is complete. Finally, the user optionally applies a texture synthesis algorithm to recover the appearance of the input image. We demonstrate that our system allows the user to create a variety of animations of water scenes.
- Published
- 2016
- Full Text
- View/download PDF
7. Fluid volume modeling from sparse multi-view images by appearance transfer
- Author
-
Rikio Onai, Yoshinori Dobashi, Ken Anjyo, and Makoto Okabe
- Subjects
business.industry ,Iterative method ,Computer science ,Histogram ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer vision ,Artificial intelligence ,business ,Computer Graphics and Computer-Aided Design ,ComputingMethodologies_COMPUTERGRAPHICS ,Rendering (computer graphics) - Abstract
We propose a method of three-dimensional (3D) modeling of volumetric fluid phenomena from sparse multi-view images (e.g., only a single-view input or a pair of front- and side-view inputs). The volume determined from such sparse inputs using previous methods appears blurry and unnatural from novel views; however, our method preserves the appearance at novel viewing angles by transferring the appearance information from the input images to those angles. For appearance information, we use histograms of image intensities and steerable coefficients. We formulate the volume modeling as an energy minimization problem with statistical hard constraints, which is solved using an expectation maximization (EM)-like iterative algorithm. Our algorithm begins with a rough estimate of the initial volume modeled from the input images, followed by an iterative process whereby we first render images of the current volume from novel viewing angles. Then, we modify the rendered images by transferring the appearance information from the input images, and we thereafter model the improved volume based on the modified images. We iterate these operations until the volume converges. We demonstrate that our method successfully provides natural-looking volume sequences of fluids (i.e., fire, smoke, explosions, and a water splash) from sparse multi-view videos. To create production-ready fluid animations, we further propose a method of rendering and editing fluids using a commercially available fluid simulator.
- Published
- 2015
- Full Text
- View/download PDF
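The intensity-histogram part of the appearance transfer described above can be illustrated with a simple exact histogram-matching scheme for equal-length intensity lists: each value takes the reference value of equal rank. This is only a sketch of the general idea (the paper also transfers steerable coefficients and works inside an EM-like optimization); the function name is hypothetical.

```python
def match_histogram(values, reference):
    """Transfer the intensity histogram of `reference` onto `values`:
    each value is replaced by the reference value of the same rank,
    so the output has exactly the reference's histogram while keeping
    the input's ordering. Assumes equal-length lists."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ref_sorted = sorted(reference)
    out = [0] * len(values)
    for rank, i in enumerate(order):
        out[i] = ref_sorted[rank]
    return out
```

The output keeps the spatial arrangement of the input but draws its values from the reference distribution, which is the sense in which appearance is "transferred".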
8. Session details: Simulation for virtual worlds
- Author
-
Ken Anjyo
- Subjects
Multimedia ,Computer science ,Session (computer science) ,Metaverse ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,computer - Published
- 2017
- Full Text
- View/download PDF
9. Lit-Sphere extension for artistic rendering
- Author
-
Ken Anjyo, Hideki Todo, and Shunichi Yokoyama
- Subjects
Deferred shading ,Cel shading ,Computer science ,business.industry ,Artistic rendering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer Graphics and Computer-Aided Design ,GeneralLiterature_MISCELLANEOUS ,Non-photorealistic rendering ,Computer graphics ,Computer graphics (images) ,Computer vision ,Rasterisation ,Computer Vision and Pattern Recognition ,Shading ,Artificial intelligence ,Graphics ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
The Lit-Sphere model proposed by Sloan et al. (Proceedings of Graphics Interface 2001, pp. 143–150, 2001) is a method for emulating expressive artistic shading styles for 3D scenes. Assuming that artistic shading styles are described by view space normals, this model produces a variety of stylized shading scenes beyond traditional 3D lighting control. However, it is limited to the static lighting case: the shading effect depends only on the camera view. In addition, it cannot support small-scale brush stroke styles. In this paper, we propose a scheme to extend the Lit-Sphere model based on light space normals rather than view space normals. Owing to the light space representation, our shading model addresses the issues of the original Lit-Sphere approach, and allows artists to use a light source to obtain dynamic diffuse and specular shading. The shading appearance can then be refined using stylization effects including highlight shape control, sub-lighting effects, and brush stroke styles. Our algorithms are easy to implement on the GPU, so our system allows interactive shading design.
- Published
- 2013
- Full Text
- View/download PDF
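The basic Lit-Sphere lookup that the paper extends (replacing view space normals with light space normals, per the abstract) amounts to indexing an artist-painted sphere image by a normal's x/y components. A minimal sketch, with hypothetical names, where the "texture" is any callable from UV coordinates to a colour:

```python
def lit_sphere_uv(normal):
    """Map a unit normal's x and y components from [-1, 1] to [0, 1]^2
    lit-sphere texture coordinates (the classic Lit-Sphere lookup)."""
    nx, ny, _ = normal
    return (0.5 * nx + 0.5, 0.5 * ny + 0.5)

def shade(normal, sphere_texture):
    """Shade a surface point by sampling the painted sphere image.
    `sphere_texture` is any callable (u, v) -> colour; whether `normal`
    is in view space (original model) or light space (the extension)
    determines how the shading responds to camera and light motion."""
    u, v = lit_sphere_uv(normal)
    return sphere_texture(u, v)
```

With a vertical brightness ramp as the sphere image, upward-facing normals shade bright and downward-facing normals shade dark.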
10. Mathematical Basics of Motion and Deformation in Computer Graphics
- Author
-
Hiroyuki Ochiai and Ken Anjyo
- Subjects
Computer science ,Animation ,Deformation (meteorology) ,Computer Graphics and Computer-Aided Design ,Motion (physics) ,Computer Science Applications ,Algebra ,Euler angles ,Computer graphics ,Mathematical theory ,Matrix (mathematics) ,symbols.namesake ,Differential geometry ,Computer graphics (images) ,symbols ,Affine transformation ,Graphics ,Quaternion ,Geometric modeling ,2D computer graphics ,Rigid transformation ,Computer animation ,ComputingMethodologies_COMPUTERGRAPHICS ,Interpolation - Abstract
This synthesis lecture presents an intuitive introduction to the mathematics of motion and deformation in computer graphics. Starting with familiar concepts in graphics, such as Euler angles, quaternions, and affine transformations, we illustrate that a mathematical theory behind these concepts enables us to develop the techniques for efficient/effective creation of computer animation. This book, therefore, serves as a good guidepost to mathematics (differential geometry and Lie theory) for students of geometric modeling and animation in computer graphics. Experienced developers and researchers will also benefit from this book, since it gives a comprehensive overview of mathematical approaches that are particularly useful in character modeling, deformation, and animation. Table of Contents: Preface / Symbols and Notations / Introduction / Rigid Transformation / Affine Transformation / Exponential and Logarithm of Matrices / 2D Affine Transformation between Two Triangles / Global 2D Shape Interpolation / Parametrizing 3D Positive Affine Transformations / Further Readings / Bibliography / Authors' Biographies
- Published
- 2017
- Full Text
- View/download PDF
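As a small taste of the book's subject matter (exponentials and logarithms of matrices for motion), interpolating rotations through the Lie algebra avoids the artifacts of blending matrix entries directly. Reduced to the 2D case, where the logarithm of a rotation is simply its angle, a sketch looks like this (names are illustrative, not from the book):

```python
import math

def rot(theta):
    """2D rotation matrix as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def log_rot(R):
    """Logarithm of a 2D rotation: recover its angle, i.e. its
    coordinate in the Lie algebra so(2)."""
    return math.atan2(R[1][0], R[0][0])

def slerp_rot(R0, R1, t):
    """Interpolate in the Lie algebra: exp((1-t) log R0 + t log R1).
    Unlike entrywise matrix blending, the result is always a genuine
    rotation (determinant 1)."""
    a0, a1 = log_rot(R0), log_rot(R1)
    return rot((1 - t) * a0 + t * a1)
```

Blending the entries of a 0° and a 90° rotation halfway gives a matrix with determinant 0.5 (a shrinking shear), whereas the log/exp blend gives the exact 45° rotation.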
11. Spacetime expression cloning for blendshapes
- Author
-
John P. Lewis, Junyong Noh, Yeongho Seol, Jaewoo Seo, Byungkuk Choi, and Ken Anjyo
- Subjects
Spacetime ,business.industry ,Animation ,Computer Graphics and Computer-Aided Design ,Expression (mathematics) ,Position (vector) ,Face (geometry) ,Computer graphics (images) ,Retargeting ,Computer vision ,Artificial intelligence ,Representation (mathematics) ,business ,Computer facial animation ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
- Published
- 2012
- Full Text
- View/download PDF
12. Non-photorealistic, depth-based image editing
- Author
-
Sunil Hadap, Ken Anjyo, Jorge Lopez-Moreno, Diego Gutierrez, Erik Reinhard, and Jorge Jimenez
- Subjects
Computer science ,business.industry ,Graphics hardware ,General Engineering ,Approximation algorithm ,020207 software engineering ,Image processing ,02 engineering and technology ,Image editing ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Non-photorealistic rendering ,Human-Computer Interaction ,Depth map ,Computer graphics (images) ,Human visual system model ,0202 electrical engineering, electronic engineering, information engineering ,Leverage (statistics) ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,computer - Abstract
Recent works in image editing are opening up new possibilities to manipulate and enhance input images. Within this context, we leverage well-known characteristics of human perception along with a simple depth approximation algorithm to generate non-photorealistic renditions that would be difficult to achieve with existing methods. Once a perceptually plausible depth map is obtained from the input image, we show how simple algorithms yield powerful new depictions of such an image. Additionally, we show how artistic manipulation of depth maps can be used to create novel non-photorealistic versions, for which we provide the user with an intuitive interface. Our real-time implementation on graphics hardware allows the user to efficiently explore artistic possibilities for each image. We show results produced with six different styles proving the versatility of our approach, and validate our assumptions and simplifications by means of a user study.
- Published
- 2011
- Full Text
- View/download PDF
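A minimal example of a depth-driven stylization in the spirit of the abstract above: desaturating pixels toward their luminance with distance, an aerial-perspective effect. The specific effect, coefficients, and names here are illustrative assumptions, not taken from the paper.

```python
def depth_desaturate(rgb, depth, strength=1.0):
    """Desaturate a pixel toward its luminance in proportion to its
    normalised depth (0 = near, 1 = far): a simple non-photorealistic
    depiction driven by an approximate depth map."""
    r, g, b = rgb
    lum = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma weights
    k = min(1.0, max(0.0, strength * depth))
    return tuple((1 - k) * c + k * lum for c in (r, g, b))
```

Near pixels pass through unchanged; far pixels collapse to grey, so colour contrast itself becomes a depth cue.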
13. Stylized lighting for cartoon shader
- Author
-
Hideki Todo, Ken Anjyo, and Takeo Igarashi
- Subjects
Computer Graphics and Computer-Aided Design ,Software - Published
- 2009
- Full Text
- View/download PDF
14. Directable animation of elastic bodies with point-constraints
- Author
-
Ryo Kondo and Ken Anjyo
- Subjects
Computer Graphics and Computer-Aided Design ,Software - Published
- 2008
- Full Text
- View/download PDF
15. Efficient lip-synch tool for 3D cartoon animation
- Author
-
Shin-ichi Kawamoto, Tatsuo Yotsukura, Ken Anjyo, and Satoshi Nakamura
- Subjects
Computer Graphics and Computer-Aided Design ,Software - Published
- 2008
- Full Text
- View/download PDF
16. Stylized highlights for cartoon rendering and animation
- Author
-
K. Hiramitsu and Ken Anjyo
- Subjects
Stylized fact ,business.product_category ,Computer science ,Animation ,Computer Graphics and Computer-Aided Design ,Cel ,Rendering (computer graphics) ,Computer graphics (images) ,Skeletal animation ,business ,Shader ,Software ,Computer animation ,Computer facial animation - Abstract
We propose a new highlight shader for the 3D objects used in cel animation. Without using a texture-mapping technique, our shader makes highlight shapes and animations in a cartoon style. Our shader makes an initial highlight shape using Blinn's (1977) traditional specular model. It then interactively modifies the initial shape through geometric, stylistic, and Boolean transformations for the highlight until we get our final desired shape. Moreover, once these operations specify highlight shapes for each keyframe, our shader automatically generates the highlight animation. In other words, our shader offers a new definition of highlighting 3D objects for cel animation.
- Published
- 2003
- Full Text
- View/download PDF
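The initial highlight shape described in the abstract above can be sketched as a thresholded Blinn specular term: the hard threshold turns the smooth specular lobe into a flat, cel-style highlight region, which the shader then reshapes with geometric, stylistic, and Boolean transformations. The threshold value and function names here are illustrative.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def blinn_highlight(normal, light, view, shininess=50.0, threshold=0.8):
    """Initial cartoon highlight: Blinn's specular term (n . h)^k,
    thresholded to a hard-edged shape (1.0 inside the highlight,
    0.0 outside), with h the half vector between light and view."""
    h = normalize([l + v for l, v in zip(light, view)])
    ndoth = max(0.0, sum(a * b for a, b in zip(normalize(normal), h)))
    return 1.0 if ndoth ** shininess >= threshold else 0.0
```

Evaluating this per pixel yields the flat highlight blob that serves as the starting point for the interactive shape edits the abstract describes.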
17. N-way morphing for 2D Animation
- Author
-
William Baxter, Pascal Barla, and Ken Anjyo (OLM Digital, Inc., Research & Development Division; IPARLA project, LaBRI, Inria Bordeaux - Sud-Ouest, Université Bordeaux 1, CNRS)
- Subjects
2D animation ,matching ,0202 electrical engineering, electronic engineering, information engineering ,020207 software engineering ,020201 artificial intelligence & image processing ,02 engineering and technology ,morphing ,Computer Graphics and Computer-Aided Design ,Software ,interpolation ,[INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present a novel approach to the creation of varied animations from a small set of simple 2D input shapes. Instead of providing a new 2D shape for each keyframe of an animation sequence, we instead interpolate between a few example shapes in a reduced pose-space. Similar approaches have been presented in the past, but were restricted in the types of input or range of deformations allowed. In order to address these limitations, we reformulate the problem as an N-way morphing process on 2D input bitmap or vector graphics. Our formulation includes an N-way mapping technique, an efficient, rigidity preserving non-linear blending function, improved extrapolation, and a novel scattered data interpolation technique to manage the reduced pose-space. The resulting animations are correlated to paths in the reduced pose-space, allowing users to intuitively and interactively control temporal behaviors with simple gestures. We demonstrate our techniques in several example animations.
- Published
- 2009
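The N-way blending at the heart of the abstract above can be sketched, in its plainest form, as a barycentric blend of corresponding vertices across the example shapes. Note this linear sketch is a deliberate simplification: the paper's actual blending function is non-linear and rigidity-preserving, and its examples live in a reduced pose-space.

```python
def nway_blend(shapes, weights):
    """Blend N example shapes, each a list of (x, y) vertices in
    correspondence, using barycentric weights (normalised to sum
    to 1). A plain linear blend for illustration only."""
    total = sum(weights)
    ws = [w / total for w in weights]
    out = []
    for p in range(len(shapes[0])):
        x = sum(w * s[p][0] for w, s in zip(ws, shapes))
        y = sum(w * s[p][1] for w, s in zip(ws, shapes))
        out.append((x, y))
    return out
```

Moving the weight vector along a path in weight space plays back a morph, which mirrors how the paper correlates animations with paths in its reduced pose-space.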
18. Session details: Non-photorealistic rendering
- Author
-
Ken Anjyo
- Subjects
Computer science ,Computer graphics (images) ,Session (computer science) ,Computer Graphics and Computer-Aided Design ,Non-photorealistic rendering - Published
- 2008
- Full Text
- View/download PDF
19. 'Tour into the picture' as a non-photorealistic animation
- Author
-
Ken Anjyo
- Subjects
General Computer Science ,Computer science ,Computer graphics (images) ,Animation ,Computer Graphics and Computer-Aided Design - Published
- 1999
- Full Text
- View/download PDF