Performing plays or creating films and animations is a complex creative, and thus expensive, process involving various professionals and media. This research project proposes to augment this process by automatically interpreting film and play scripts and automatically generating animated scenes from them. For this purpose a web-based software prototype, SceneMaker, will be developed. During the generation of the story content, special attention will be given to emotional aspects and their reflection in the execution of all types of modalities: fluency and manner of actions and behaviour, body language, facial expressions, speech, voice pitch, scene composition, timing, lighting, music and camera. The main objective of this work is to demonstrate how a scene and actor behaviour change when emotional states are taken into account, e.g. walking down a street in a happy versus a sad state. Consequently, more realistic and believable story visualisations are an expected outcome.

Literature on related research areas is reviewed, covering natural language, text, screenplay or play script and layout processing with regard to personality and emotion detection; the modelling of affective behaviour in embodied agents; the visualisation of 3D scenes with digital cinematography and genre-based presentation; intelligent multimedia selection; and mobile 3D technology. SceneMaker's architecture, comprising a language and text analysis module, a reasoning and decision-making module based on cognitive and emotional information, and a multimedia module for multimodal visualisation, is presented along with a development plan and test scenarios. Technologies and software relevant to the development of SceneMaker are analysed, and prospective tools for text and language processing, visualisation, media allocation and mobile user interfaces are suggested. The accuracy of content animation, the effectiveness of expression and the usability of the interface will be evaluated in empirical tests.

In relation to other work, this project will present a genre-specific text-to-animation methodology which combines all relevant expressive modalities. Emotional expressivity, inspired by the OCC (Ortony, Clore and Collins) emotion model, will influence all modalities to enhance the believability of virtual actors as well as the scene presentation. Compared to other scene production systems, SceneMaker will infer emotions from the story context rather than relying on explicit emotion keywords, will automatically detect the genre of a script and apply appropriate cinematic direction, and will enable 3D animation editing via web-based and mobile platforms. In conclusion, SceneMaker will reduce production time, save costs and enhance the communication of ideas through the development of an intelligent, multimodal animation generation system providing quick pre-visualisations of scenes from script text input.
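As a rough illustration of the three-module pipeline described above, the following Python sketch traces a single script line through text analysis, emotion inference and mapping onto expressive modalities. All class and function names, emotion labels and parameter mappings here are hypothetical assumptions for illustration only; they do not represent SceneMaker's actual API or design.

```python
# Hypothetical sketch of the three-module pipeline: text analysis ->
# emotional/cognitive reasoning -> multimodal visualisation parameters.
# Names and values are illustrative assumptions, not SceneMaker's implementation.

from dataclasses import dataclass, field


@dataclass
class SceneEvent:
    actor: str
    action: str
    emotion: str = "neutral"      # inferred from story context, not explicit keywords
    intensity: float = 0.0        # 0.0 (flat) .. 1.0 (extreme)


@dataclass
class ScenePlan:
    events: list = field(default_factory=list)
    genre: str = "drama"          # detected from the script, drives cinematic style


def analyse_script(script_text: str) -> ScenePlan:
    """Language/text analysis module: parse the script into actors, actions and genre."""
    plan = ScenePlan(genre="drama")
    plan.events.append(SceneEvent(actor="Anna", action="walk_down_street"))
    return plan


def infer_emotions(plan: ScenePlan) -> ScenePlan:
    """Reasoning module: attach OCC-style emotional states inferred from context."""
    for event in plan.events:
        event.emotion, event.intensity = "sadness", 0.7   # e.g. following a loss in the story
    return plan


def render_parameters(plan: ScenePlan) -> dict:
    """Multimedia module: map emotion and genre onto expressive modalities."""
    event = plan.events[0]
    speed = 1.0 - 0.5 * event.intensity if event.emotion == "sadness" else 1.0
    return {
        "walk_speed": speed,            # a sad character moves more slowly
        "posture": "slumped" if event.emotion == "sadness" else "upright",
        "lighting": "low_key" if plan.genre == "drama" else "high_key",
        "music": "minor_key" if event.emotion == "sadness" else "major_key",
        "camera": "slow_push_in",
    }


if __name__ == "__main__":
    plan = infer_emotions(analyse_script("ANNA walks down the street."))
    print(render_parameters(plan))
```

The point of the sketch is the data flow: the emotional state is attached during reasoning and then modulates every downstream modality (movement, posture, lighting, music, camera) rather than being handled by any single component.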
Abstract. Our proposed software system, SceneMaker, aims to facilitate the production of plays, films or animations by automatically interpreting natural language film scripts and generating multimodal, animated scenes from them. During the generation of the story content, SceneMaker will give particular attention to emotional aspects and their reflection in fluency and manner of actions, body posture, facial expressions, speech, scene composition, timing, lighting, music and camera work. Related literature and software on Natural Language Processing, in particular textual affect sensing, affective embodied agents, visualisation of 3D scenes and digital cinematography, are reviewed. In relation to other work, SceneMaker will present a genre-specific text-to-animation methodology which combines all relevant expressive modalities. In conclusion, SceneMaker will enhance the communication of creative ideas by providing quick pre-visualisations of scenes.