14 results for "Multimedia content creation"
Search Results
2. Concept Vectors for Zero-Shot Video Generation
- Author
-
Dani, Riya Jinesh
- Abstract
Zero-shot video generation involves generating videos of concepts (action classes) that are not seen in the training phase. Even though the research community has explored conditional video generation for long, high-resolution videos, zero-shot video generation remains a fairly unexplored and challenging task. Most recent works can generate videos for action-object or motion-content pairs, where both the object (content) and the action (motion) are observed separately during training, yet their results often lack spatial consistency between foreground and background and cannot generalize to complex scenes with multiple objects or actions. In this work, we propose Concept2Vid, which generates zero-shot videos for classes that are completely unseen during training. In contrast to prior work, our model is not limited to a predefined fixed set of class-level attributes, but rather utilizes semantic information from multiple videos of the same topic to generate samples from novel classes. We evaluate qualitatively and quantitatively on the Kinetics400 and UCF101 datasets, demonstrating the effectiveness of our proposed model.
- Published
- 2022
3. The Potential of a Text-Based Interface as a Design Medium: An Experiment in a Computer Animation Environment.
- Author
-
Sangwon Lee and Jin Yan
- Subjects
Graphical user interfaces, Computer-generated imagery, Computer-aided design, User interfaces - Abstract
Since the birth of the concept of direct manipulation, the graphical user interface has been the dominant means of controlling digital objects. In this research, we hypothesize that the benefits of a text-based interface involve multiple tradeoffs, and we explore the potential of text as a medium of design from three perspectives: (i) the perceived level of control of the designed object, (ii) a tool for realizing creative ideas and (iii) an effective form for a highly learnable user interface. Our experiment in a computer animation environment shows that (i) participants did feel a high level of control of characters, (ii) creativity was both restricted and facilitated depending on the task and (iii) natural language expedited the learning of a new interface language. Our research provides experimental evidence of the effects of a text-based interface and offers guidelines for the design of future computer-aided design applications.
- Published
- 2016
- Full Text
- View/download PDF
4. A Survey on the Procedural Generation of Virtual Worlds
- Author
-
Jonas Freiknecht and Wolfgang Effelsberg
- Subjects
virtual worlds, procedural content generation, multimedia content creation, serious games, Technology, Science - Abstract
This survey presents algorithms for the automatic generation of content for virtual worlds, in particular for games. After a definition of the term procedural content generation, the algorithms to generate realistic objects such as landscapes and vegetation, road networks, buildings, living beings and stories are introduced in detail. In our discussion, we emphasize a good compromise between the realism of the objects and the performance of the algorithms. The survey also assesses each generated object type in terms of its applicability in games and simulations of virtual worlds.
- Published
- 2017
- Full Text
- View/download PDF
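The survey above walks through concrete generation algorithms for terrain, vegetation, roads and buildings. As a flavor of that family of techniques, here is a minimal, self-contained sketch of 1-D midpoint-displacement terrain, a classic procedural landscape algorithm of the kind such surveys cover; the parameter names and the roughness constant are illustrative choices of ours, not values from the paper.

```python
# Midpoint displacement: subdivide a line repeatedly, nudging each new
# midpoint by a random offset that shrinks at every level of detail.
import random

def midpoint_displacement(n_iterations: int = 8, roughness: float = 0.5,
                          seed: int = 42) -> list[float]:
    """Return a 1-D height profile of 2**n_iterations + 1 samples."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]              # flat two-point baseline
    amplitude = 1.0
    for _ in range(n_iterations):
        refined = []
        for left, right in zip(heights, heights[1:]):
            mid = (left + right) / 2 + rng.uniform(-amplitude, amplitude)
            refined += [left, mid]
        refined.append(heights[-1])
        heights = refined
        amplitude *= roughness        # finer scales get smaller bumps
    return heights

if __name__ == "__main__":
    profile = midpoint_displacement()
    print(f"{len(profile)} samples, peak height {max(profile):+.3f}")
```

The roughness constant controls how jagged the profile looks, and the iteration count trades detail against generation cost, the kind of realism-versus-performance compromise the survey emphasizes.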
5. Content Creation, Dissemination and Preservation: Disrupting the Status Quo through Technological Interventions
- Author
-
Panda, Subhajit and Chakravarty, Rupak
- Subjects
Content Creation, Content Dissemination, Content Preservation, Content Sharing Platform, Multimedia Content Creation, Screen Sharing Tools, Screen Recording Tools, Technological Intervention, Disruption of Knowledge Creation, Disaster Management, OER Licence, OER Commons, Mason OER Metafinder, Scholarly Communication, Online and Distance Education, Library and Information Science, LOOM, MEGA, EDMODO, Screenleap, FreeConferenceCall, FlashBack Express, Free Cam, Dead Simple Screen Sharing - Abstract
Disasters, natural or man-made, adversely affect humanity, including human lives and tangible assets. Knowledge, the indisputable gold standard for the growth and progress of civilization, is also destroyed in such disasters. This should not impede the creation of knowledge; instead, efforts should be made to find measures that can ensure the longevity of resources for posterity. In this paper, the authors evaluate and compare tools that learners and educators can use for the creation, dissemination and preservation of e-content, especially multimedia. The tools selected for this purpose are either entirely cost-free, free in their basic version, or offer educators and learners extended free tiers. The paper also highlights the resistance of Indian educators to the changing nature of learning from conventional to online.
- Published
- 2020
- Full Text
- View/download PDF
6. Gamification Mechanics for Playful Virtual Reality Authoring
- Author
-
Naraghi-Taghi-Off, Ramtin, Horst, Robin, and Dörner, Ralf
- Subjects
Multimedia content creation, Graphical user interfaces, Computer games, Applied computing, Information systems, Human-centered computing, Virtual reality - Abstract
An increasing number of companies, businesses and educational institutions are becoming familiar with the term gamification, which refers to integrating game elements into a non-game context. Gamification is becoming more important in various fields, such as e-learning, where a person needs to be motivated to be productive. The use of Virtual Reality (VR) is also being researched in various application areas. Authoring VR content is a complex task that traditionally requires programming or design skills; authoring applications exist that do not require such skills but remain complex to use. In this paper, we explore how gamification concepts can be applied to VR authoring to help authors create VR experiences. Using an existing authoring tool for the concept of VR nuggets as an example, we investigate appropriate gamification mechanics to familiarize authors with the tool and motivate them to use it. The proposed concepts were implemented in a prototype and evaluated in a user study. The study shows that our participants were able to use the gamified authoring prototype successfully and felt motivated by various gamification aspects, especially visual rewards and story elements. (Smart Tools and Apps for Graphics - Eurographics Italian Chapter Conference, pp. 131-141.)
- Published
- 2020
- Full Text
- View/download PDF
7. GAZED - Gaze-guided Cinematic Editing of Wide-Angle Monocular Video Recordings
- Author
-
Moorthy, K. L. Bhanu, Kumar, Moneish, Subramanian, Ramanathan, and Gandhi, Vineet
- Subjects
Combinatorial optimization, Multimedia content creation, Gaze potential, Stage performance, Shot selection, Dynamic programming, Mathematics of computing, Computing methodologies, Static wide-angle recording, Computational photography, Information systems, Human-centered computing, User studies, Eye gaze, Cinematic video editing - Abstract
We present GAZED, eye GAZe-guided EDiting for videos captured by a solitary, static, wide-angle and high-resolution camera. Eye gaze has been effectively employed in computational applications as a cue to capture interesting scene content; we employ gaze as a proxy to select shots for inclusion in the edited video. Given the original video, scene content and user eye-gaze tracks are combined to generate an edited video comprising cinematically valid actor shots and shot transitions, yielding an aesthetic and vivid representation of the original narrative. We model cinematic video editing as an energy minimization problem over shot selection, whose constraints capture cinematographic editing conventions. Gazed scene locations primarily determine the shots constituting the edited video. The effectiveness of GAZED against multiple competing methods is demonstrated via a psychophysical study involving 12 users and 12 performance videos.

Professional video recordings of stage performances are typically created by employing skilled camera operators, who record the performance from multiple viewpoints. These multi-camera feeds, termed rushes, are then edited together to portray an eloquent story intended to maximize viewer engagement. Generating professional edits of stage performances is both difficult and challenging. Firstly, maneuvering cameras during a live performance is difficult even for experts, as there is no option of a retake upon error, and camera viewpoints are limited because the use of large supporting equipment (trolley, crane, etc.) is infeasible. Secondly, manual video editing is an extremely slow and tedious process that leverages the experience of skilled editors. Overall, the need for (i) a professional camera crew, (ii) multiple cameras and supporting equipment, and (iii) expert editors escalates the process complexity and costs. Consequently, most production houses employ a large field-of-view static camera, placed far enough away to capture the entire stage. This approach is widespread as it is simple to implement and captures the entire scene. Such static visualizations are apt for archival purposes; however, they are often unsuccessful at captivating attention when presented to the target audience. While conveying the overall context, the distant camera feed fails to bring out vivid scene details like close-up faces, character emotions and actions, and ensuing interactions, which are critical for cinematic storytelling.

GAZED denotes an end-to-end pipeline to generate an aesthetically edited video from a single static, wide-angle stage recording. It is inspired by prior work [GRG14], which describes how a plural camera crew can be replaced by a single high-resolution static camera, with multiple virtual camera shots or rushes generated by simulating several virtual pan/tilt/zoom cameras focusing on actors and actions within the original recording. In this work, we demonstrate that the multiple rushes can be automatically edited by leveraging user eye-gaze information, modeling (virtual) shot selection as a discrete optimization problem. Eye gaze represents an inherent guiding factor for video editing, as eyes are sensitive to interesting scene events [RKH*09, SSSM14] that need to be vividly presented in the edited video. The objective critical for video editing, and the key contribution of our work, is to decide which shot (or rush) should be selected for each frame of the edited video. The shot-selection problem is modeled as an optimization that incorporates gaze information along with other cost terms that model cinematic editing principles. Gazed scene locations are utilized to define gaze potentials, which measure the importance of the different shots to choose from. Gaze potentials are then combined with other terms that model cinematic principles like avoiding jump cuts (which produce jarring shot transitions), rhythm (pace of shot transitioning) and avoiding transient shots. The optimization is solved using dynamic programming. [MKSG20] refers to the detailed full article. (Workshop on Intelligent Cinematography and Editing, pp. 35-36.)
- Published
- 2020
- Full Text
- View/download PDF
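The GAZED abstract above reduces editing to a shot-selection optimization: per-frame gaze potentials score each rush, pairwise terms penalize cinematically poor transitions, and dynamic programming finds the best sequence. The sketch below is our own toy Viterbi-style rendering of that idea, with a single flat cut penalty standing in for GAZED's full set of editing costs (jump cuts, rhythm, transient shots); function and parameter names are illustrative, not from the paper.

```python
# Toy shot selection: pick one rush per frame, maximizing gaze potential
# while paying a penalty for every cut, solved by dynamic programming.
def select_shots(gaze_potential: list[list[float]],
                 cut_penalty: float = 1.0) -> list[int]:
    """gaze_potential[t][s] = importance of shot s at frame t.
    Returns the shot index chosen for each frame."""
    T, S = len(gaze_potential), len(gaze_potential[0])
    dp = [row[:] for row in gaze_potential]   # dp[t][s]: best score ending in s
    back = [[0] * S for _ in range(T)]        # backpointers for the best path
    for t in range(1, T):
        for s in range(S):
            best_prev, best_val = 0, float("-inf")
            for p in range(S):
                # staying in the same shot is free; cutting costs cut_penalty,
                # discouraging the jarring, too-frequent transitions that the
                # abstract's jump-cut and rhythm terms guard against
                val = dp[t - 1][p] - (cut_penalty if p != s else 0.0)
                if val > best_val:
                    best_prev, best_val = p, val
            dp[t][s] = gaze_potential[t][s] + best_val
            back[t][s] = best_prev
    # backtrack from the best final shot to recover the edit
    s = max(range(S), key=lambda j: dp[-1][j])
    edit = [s]
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        edit.append(s)
    return edit[::-1]

if __name__ == "__main__":
    # two rushes, three frames; gaze moves from shot 0 to shot 1
    gaze = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]
    print(select_shots(gaze, cut_penalty=0.3))  # -> [0, 0, 1]
```

A real system would replace the flat penalty with richer cost terms, but the dynamic-programming backbone is the same.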
8. Procedural content generation
- Author
-
Korda, S. O. and Kovaleva, A. G.
- Abstract
This article presents a brief history of procedural content generation and clarifies the general meaning of the process. After a description of key events and developments in the field, several researchers' definitions of the term "procedural content generation" are given. Based on these definitions, a possible unified formulation of the term is proposed at the end of the paper.
- Published
- 2019
9. Procedural Content Generation (Процедурная генерация контента)
- Author
-
Korda, S. O. and Kovaleva, A. G.
- Subjects
Virtual worlds, Procedural content generation, Multimedia content creation - Abstract
This article presents a brief history of procedural content generation and clarifies the general meaning of the process. After a description of key events and developments in the field, several researchers' definitions of the term "procedural content generation" are given. Based on these definitions, a possible unified formulation of the term is proposed at the end of the paper.
- Published
- 2019
10. Visual-auditory Representation and Analysis of Molecular Scalar Fields
- Author
-
Malikova, Evgeniya, Adzhiev, Valery, Fryazinov, Oleg, and Pasko, Alexander
- Subjects
Scientific visualization, Continuous functions, Hardware, Multimedia content creation, Sound-based input/output, Volumetric models, Information systems, Human-centered computing, Mathematics of computing, Computing methodologies - Abstract
The work deals with a visual-auditory representation and analysis of static and dynamic continuous scalar fields. We propose a general approach and give examples of dynamic and static object representations related to molecular data simulations. We describe the practical application and demonstrate how the approach may help to track geometrical features. (Eurographics 2019 - Posters, pp. 25-26.)
- Published
- 2019
- Full Text
- View/download PDF
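To make the poster's idea of "hearing" a scalar field concrete, here is a small sonification sketch of our own devising (the toy Gaussian field, the probe path and the pitch mapping are all assumptions, not the authors' method): sample the field along a line and map values to tone frequencies, so a rising field value is heard as rising pitch.

```python
# Sonify a scalar field: sample along the x-axis, map value -> frequency.
import math

def field(x: float, y: float, z: float) -> float:
    """Toy stand-in for a molecular scalar field (e.g. electron density)."""
    return math.exp(-(x * x + y * y + z * z))

def sonify_along_x(n: int = 16, f_min: float = 220.0, f_max: float = 880.0):
    """Print a frequency per sample; a synthesizer would play these as tones."""
    for k in range(n):
        x = -2.0 + 4.0 * k / (n - 1)          # probe path: x in [-2, 2]
        v = field(x, 0.0, 0.0)                # toy field yields values in (0, 1]
        freq = f_min + (f_max - f_min) * v    # linear pitch mapping
        print(f"x={x:+.2f}  field={v:.3f}  tone={freq:6.1f} Hz")

if __name__ == "__main__":
    sonify_along_x()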
11. Holo Worlds Infinite: procedural spatial aware AR content
- Author
-
Lawrence, Louise M, Hart, Jonathon Derek, and Billinghurst, Mark
- Subjects
multimedia content creation, mixed reality, augmented reality - Abstract
We developed an Augmented Reality (AR) application that procedurally generates content and places it programmatically on the floor. It uses awareness of its spatial surroundings to generate and place virtual content. We created a prototype that can serve as the basis of a city-simulation game playable on the floor of any room, but the approach could also be used for many other applications. (Refereed/peer-reviewed; International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, Adelaide, Australia, 22-24 November 2017.)
- Published
- 2017
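As a rough illustration of the paper's spatially aware procedural placement idea (our own sketch; the grid layout, tile vocabulary and floor representation are assumptions, not the authors' implementation): once an AR platform reports the extents of a detected floor plane, a generator can tile it deterministically with city content.

```python
# Fill a detected floor rectangle with procedurally chosen city tiles.
import random

def generate_city(floor_w_m: float, floor_d_m: float, tile_m: float = 0.25,
                  seed: int = 7) -> list[tuple[float, float, str]]:
    """Return (x, z, tile_type) placements covering a floor_w_m x floor_d_m
    floor plane, with a road grid and randomized buildings in between."""
    rng = random.Random(seed)
    cols, rows = int(floor_w_m / tile_m), int(floor_d_m / tile_m)
    placements = []
    for i in range(cols):
        for j in range(rows):
            # roads every 4th row/column; buildings or parks elsewhere
            kind = ("road" if i % 4 == 0 or j % 4 == 0
                    else rng.choice(["house", "tower", "park"]))
            placements.append((i * tile_m, j * tile_m, kind))
    return placements

if __name__ == "__main__":
    city = generate_city(2.0, 1.5)            # a 2 m x 1.5 m patch of floor
    print(f"{len(city)} tiles, first: {city[0]}")
```

Seeding the generator keeps the layout reproducible, so the same room produces the same city across sessions.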
12. Subjective evaluation of an olfaction enhanced immersive virtual reality environment
- Author
-
Egan, Darragh, Keighery, Conor, Barrett, John, Qiao, Yuansong, Timmerer, Christian, and Murray, Niall
- Subjects
Multimedia content creation, Computer science - Methodology, Virtual reality, Software Research Institute AIT - Abstract
Recent research efforts have reported findings on user Quality of Experience (QoE) of immersive virtual reality (VR) experiences. Truly immersive multimedia experiences also include multisensory components such as olfaction and tactile stimuli in addition to audiovisual stimuli. In this context, this paper reports the results of a user QoE study of an olfaction-enhanced immersive VR environment. The results compare user QoE between two groups (VR vs. VR + olfaction) and consider how the addition of olfaction affected user QoE levels (in terms of sense of enjoyment, immersion and discomfort). Self-reported measures via a post-test questionnaire (10 questions) revealed only one statistically significant difference between the groups: how users felt with respect to their senses being stimulated. The presence of olfaction in the VR environment did not have a statistically significant effect on user levels of enjoyment, immersion and discomfort.
- Published
- 2017
13. Olfaction-enhanced multimedia: a survey application domains, displays and research challenges
- Author
-
Murray, Niall, Lee, Brian, Qiao, Yuansong, and Muntean, Gabriel-Miro
- Subjects
Multimedia content creation, Olfactory media, Mulsemedia, Olfaction, Software Research Institute AIT - Abstract
Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step towards further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, a knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education and training. A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area and strengths/weaknesses. State of the art research works involving olfaction are discussed, and in addition, associated research challenges are proposed.
- Published
- 2016
14. Subjective evaluation of olfactory and visual media synchronization
- Author
-
Brian Lee, Gabriel-Miro Muntean, A. K. Karunakar, Niall Murray, and Yuansong Qiao
- Subjects
Multimedia, Multimedia content creation, Computer science, Multimedia communications, Olfaction, Asynchronous communication, Perception, Synchronization (computer science), Quality of experience, Software Research Institute AIT - Abstract
As a step towards enhancing users' perceived multimedia quality beyond the level offered by classic audiovisual systems, the authors present the results of an experimental study of users' perception of inter-stream synchronization between olfactory data (scent) and video (without relevant audio). The impact on users' quality of experience (considering enjoyment, relevance and reality) of synchronous versus asynchronous presentation of olfactory and video media is analyzed and discussed. The aim is to empirically define the temporal boundaries within which users perceive olfactory data and video to be synchronized. The key analysis compares user detection and perception of synchronization error. State-of-the-art works have investigated temporal boundaries for olfactory data with audiovisual media, but no works document the integration of olfactory data and video with no related audio. The results show that the temporal boundaries for olfactory data and video only differ significantly from those for olfactory data, video and audio. The authors conclude that the absence of contextual audio considerably reduces the acceptable temporal boundary between scent and video. The results also indicate that olfaction before video is more noticeable to users than olfaction after video, and that users are more tolerant of olfactory data after video than of olfactory data before video. In addition, the results show the presence of two main synchronization regions. This work is a step towards the definition of synchronization specifications for multimedia applications based on olfactory and video media.
- Published
- 2013