Nowadays, vast amounts of multimedia content are being produced, archived and digitised, resulting in great troves of data of interest. Examples include user-generated content, such as images, videos, text and audio posted by users on social media and wikis, as well as content provided through official publishers and distributors, such as digital libraries, organisations and online museums. This digital content can serve as a valuable source of inspiration for the creative industries, such as architecture and gaming, to produce new, innovative assets or to enhance and (re-)use existing ones. However, in its current form, this content is difficult to reuse and repurpose due to the lack of appropriate solutions for its retrieval, analysis and integration into the design process. In this paper we present V4Design, a novel framework for the automatic analysis, linking and seamless transformation of heterogeneous multimedia content, which helps architects and virtual reality game designers establish innovative value chains and end-user applications. By integrating and intelligently combining state-of-the-art technologies in computer vision, 3D generation, text analysis and generation, and semantic integration and interlinking, V4Design provides architects and video game designers with innovative tools to draw inspiration from archive footage and documentaries, ultimately supporting the design process.