Performing plays or creating films and animations is a complex creative, and therefore expensive, process involving many professionals and media. This research project proposes to augment this process by automatically interpreting film and play scripts and generating animated scenes from them. For this purpose, a web-based software prototype, SceneMaker, will be developed. During the generation of story content, special attention will be given to emotional aspects and their reflection in the execution of all modalities: the fluency and manner of actions and behaviour, body language, facial expressions, speech, voice pitch, scene composition, timing, lighting, music and camera work. The main objective of this work is to demonstrate how a scene and actor behaviour change when emotional states are taken into account, e.g. walking down a street in a happy versus a sad state. Consequently, more realistic and believable story visualisations are an expected outcome.

Literature on related research areas is reviewed, covering natural language, text, screenplay, play-script and layout processing with regard to personality and emotion detection; modelling the affective behaviour of embodied agents; visualisation of 3D scenes with digital cinematography and genre-based presentation; intelligent multimedia selection; and mobile 3D technology. SceneMaker's architecture, comprising a language and text analysis module, a reasoning and decision-making module based on cognitive and emotional information, and a multimedia module for multimodal visualisation, is presented along with a development plan and test scenarios. Technologies and software relevant to the development of SceneMaker are analysed, and prospective tools for text and language processing, visualisation, media allocation and mobile user interfaces are suggested. The accuracy of content animation, effectiveness of expression and usability of the interface will be evaluated in empirical tests.
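The three-module architecture described above can be sketched as a simple pipeline. The following is a minimal illustrative sketch, not SceneMaker's actual implementation: all class, function and vocabulary names are assumptions, and the keyword-spotting analysis stands in for the far richer language processing the proposal envisages.

```python
# Hypothetical sketch of a three-stage script-to-scene pipeline:
# (1) language/text analysis, (2) emotion-aware reasoning and decision
# making, (3) multimodal visualisation. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class SceneSpec:
    actions: list = field(default_factory=list)
    emotions: dict = field(default_factory=dict)
    modalities: dict = field(default_factory=dict)

def analyse_script(script: str) -> SceneSpec:
    """Language/text analysis: extract actions and naive emotion cues."""
    spec = SceneSpec()
    for token in script.lower().split():
        if token in ("walks", "runs", "sits"):
            spec.actions.append(token)
        if token in ("happy", "sad", "angry"):
            spec.emotions[token] = spec.emotions.get(token, 0) + 1
    return spec

def decide_presentation(spec: SceneSpec) -> SceneSpec:
    """Reasoning/decision making: map the dominant emotion onto
    several modalities at once (gait, lighting, music)."""
    dominant = max(spec.emotions, key=spec.emotions.get) if spec.emotions else "neutral"
    spec.modalities = {
        "gait": "brisk" if dominant == "happy" else "slow",
        "lighting": "bright" if dominant == "happy" else "dim",
        "music": "upbeat" if dominant == "happy" else "melancholic",
    }
    return spec

def render(spec: SceneSpec) -> str:
    """Multimedia visualisation stub: describe the resulting scene."""
    return f"actions={spec.actions}, modalities={spec.modalities}"

# The same action ("walks") is staged differently depending on mood:
print(render(decide_presentation(analyse_script("Anna walks down the sad street"))))
```

The sketch illustrates the central claim of the proposal: a single action is rendered with different fluency, lighting and music once the detected emotional state propagates to all modalities.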
In relation to other work, this project will present a genre-specific text-to-animation methodology that combines all relevant expressive modalities. Emotional expressivity inspired by the OCC (Ortony, Clore and Collins) emotion model will influence every modality to enhance the believability of virtual actors and of the scene presentation. Compared with other scene-production systems, SceneMaker will infer emotions from the story context rather than relying on explicit emotion keywords, will automatically detect genre from the script and apply appropriate cinematic direction, and will enable 3D animation editing via web-based and mobile platforms.

In conclusion, SceneMaker will reduce production time, save costs and enhance the communication of ideas through the development of an intelligent, multimodal animation generation system that provides quick previsualisations of scenes from script text input.
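The distinction between keyword spotting and context-based inference can be made concrete with a small appraisal sketch in the spirit of the OCC model, where emotions are valenced reactions to events judged against a character's goals. This is an assumed simplification for illustration, not the proposed system's reasoning component.

```python
# Minimal OCC-style appraisal sketch (illustrative assumption): an
# emotion label is derived from an event's desirability relative to a
# character's goals, not from explicit emotion words in the script.

def appraise(event_outcome: str, goal_relevant: bool, prospect: str = "actual") -> str:
    """Return an OCC-style event-based emotion label.

    event_outcome: "desirable" or "undesirable" for the character.
    goal_relevant: whether the event matters to the character's goals.
    prospect:      "actual" for events that happened, "prospective"
                   for anticipated ones (yielding hope/fear).
    """
    if not goal_relevant:
        return "neutral"
    if prospect == "prospective":
        return "hope" if event_outcome == "desirable" else "fear"
    return "joy" if event_outcome == "desirable" else "distress"

# A script line such as "Anna misses the last bus home" contains no
# emotion keyword, yet the undesirable, goal-relevant outcome is
# appraised as distress from context alone.
print(appraise("undesirable", goal_relevant=True))
```

This is the kind of context-driven inference that distinguishes the proposed approach from systems keyed to explicit emotion words.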