Visualization tools such as CAD help architects develop their projects, but they are not always successful in doing so. The tools are complex, and the digital drawings and models they produce remain bound to a 2D screen, which makes it difficult for collaborators and clients, as well as the architects themselves, to get an accurate sense of how a design will look, function, and occupy space in reality. As currently used for both visualization and prototyping, CAD has serious limitations. These limitations lead to a mismatch between the spatial understanding of the person viewing a visualization and the architect's own spatial intent. A 3D simulation on a 2D screen makes scale, contextual elements, and depth difficult to experience. It has not been proved, however, that CAD representation necessarily results in incorrect perception, since human spatial perception can intuitively compensate for scale and depth issues, and this compensation varies from person to person and may differ from the architect's. VR thus has the potential to avoid these pitfalls and provide natural, perception-friendly visualization. The epoch of CAD is ending: virtual reality (VR) is poised to become the next digital visualization standard [1; 2]. This article explores how architects will use new visualization methods based on a modern VR approach, and extends this into an investigation of VR's relationship to, and potential for integration into, the architectural workflow. The article evaluates how CAD and VR differently affect the final spatial design and demonstrates CAD's insufficiency for spatial visualization.
The article analyzes the main artificial-intelligence methods for the task of recognizing drawings and transforming a 2D model into a 3D model. With the rapid development of information technology, and especially in the pursuit of the most realistic digital reproduction of a future product, building, or other object, the question of drawing recognition and 2D-to-3D transformation has become pressing. As the number and complexity of tasks arising from the digitization of existing paper-based drawings and technical documentation grow, together with the parallel need to transform two-dimensional models into three-dimensional models for visualizing complex objects in three-dimensional space, researchers have turned their attention to applying artificial-intelligence technologies and systems to drawing recognition and 2D-to-3D transformation. The first studies on applying artificial intelligence to the recognition of images in drawings appeared in the early 1990s. Analysis of existing approaches to drawing recognition makes it possible to assess the potential of different artificial-intelligence methods for recognizing drawings and transforming two-dimensional models into three-dimensional models. One open question is how to improve the performance of a convolutional neural network (CNN), and its architecture, without resorting to extensive expansion of that architecture, while also addressing the logical vectorization of the primitives and/or conventional graphic markings on drawings that the CNN recognizes, as required to perform the 2D-to-3D transformation. This continues to motivate researchers to search for alternative methods and models for image recognition systems on drawings.
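The final step mentioned above — turning vectorized primitives recognized on a drawing into a 3D model — can be illustrated with a minimal sketch. The example below assumes that an earlier recognition stage has already produced wall segments as 2D line segments (a hypothetical intermediate format; the function name and data layout are illustrative, not from the article), and simply extrudes each segment to a fixed height to obtain 3D wall quads:

```python
# Hypothetical sketch of the 2D-to-3D step: extrude vectorized wall
# segments (assumed output of a drawing-recognition stage) into 3D quads.
# The data format and names here are illustrative assumptions.

def extrude_segments(segments, height):
    """Turn 2D segments [((x1, y1), (x2, y2)), ...] into 3D wall quads.

    Each quad is a list of four (x, y, z) vertices: the segment at
    floor level (z = 0) and the same segment lifted to z = height.
    """
    walls = []
    for (x1, y1), (x2, y2) in segments:
        walls.append([
            (x1, y1, 0.0),
            (x2, y2, 0.0),
            (x2, y2, height),
            (x1, y1, height),
        ])
    return walls


# Two walls of a room, coordinates in metres, extruded to 2.5 m height.
plan = [((0.0, 0.0), (4.0, 0.0)), ((4.0, 0.0), (4.0, 3.0))]
quads = extrude_segments(plan, height=2.5)
```

A real pipeline would of course have to handle openings, junctions, and non-wall markings, which is exactly where the logical vectorization of recognized primitives becomes the hard part.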