Building Information Modeling (BIM) employs data-rich 3D CAD models for large-scale facility design, construction, and operation. These complex datasets contain a large amount and variety of information, ranging from design specifications to real-time sensor data, and are used by architects and engineers for analyses and simulations throughout a facility's life cycle. Many techniques from different visualization fields could be used to analyze these data, yet the BIM domain remains largely unexplored by the visualization community. The goal of this article is to encourage visualization researchers to increase their involvement with BIM. To this end, we present the results of a systematic review of visualization in current BIM practice. We use a novel taxonomy to identify the main application areas and analyze commonly employed techniques. From this domain characterization, we highlight future research opportunities brought forth by the unique features of BIM, such as exploring the synergies between scientific and information visualization to integrate spatial and non-spatial data. We hope this article raises awareness of the interesting new challenges the BIM domain brings to the visualization community.
The number of connected devices and the amount of data traffic exchanged through mobile networks are expected to double in the near future. Long Term Evolution (LTE) and fifth-generation (5G) technologies are evolving to support the increased volume, variety, and velocity of data and the new interfaces the Internet of Things demands. 5G goes beyond increasing data throughput, providing broader coverage and reliable
The cloud data center is a complex system composed of power, cooling, and IT subsystems. The power subsystem is crucial for feeding the IT equipment, and power disruptions may result in service unavailability. This paper analyzes the impact of power subsystem failures on IT services across different architecture configurations based on the TIA-942 standard: non-redundant, redundant, concurrently maintainable, and fault tolerant. We model both the power and IT subsystems through Stochastic Petri Nets (SPNs). The availability results show that a fault-tolerant power and IT configuration reduces downtime from 54.1 to 34.5 hours/year when compared to a non-redundant architecture. The sensitivity analysis results show that the failure and repair rates of the server component in a fault-tolerant system have the highest impact on overall data center availability.
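The downtime figures above translate directly into steady-state availability. A minimal sketch of that conversion (the function name and the 8760-hour year are illustrative, not from the paper):

```python
HOURS_PER_YEAR = 24 * 365  # 8760 hours in a non-leap year

def availability(downtime_hours_per_year: float) -> float:
    """Steady-state availability: fraction of the year the service is up."""
    return 1.0 - downtime_hours_per_year / HOURS_PER_YEAR

# Downtime figures quoted in the abstract (hours/year).
print(f"non-redundant:  {availability(54.1):.5f}")   # ~0.99382
print(f"fault tolerant: {availability(34.5):.5f}")   # ~0.99606
```

So the fault-tolerant configuration corresponds to roughly 99.61% availability versus roughly 99.38% for the non-redundant one.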
The network function virtualization (NFV) paradigm is an emerging technology that provides network flexibility by allowing network functions to be allocated over commodity hardware, such as legacy servers in an IT infrastructure. Compared with traditional network functions implemented in dedicated hardware, NFV reduces operating and capital expenses and improves service deployment. In some scenarios, a complete network service is composed of several functions in a specific order, known as a service function chain (SFC). SFC placement is a complex task that has been proven NP-hard. Moreover, in highly distributed scenarios, network performance can also be affected by other factors, such as traffic oscillations and high delays. Therefore, a given SFC placement strategy must be carefully developed to meet the network operator's service constraints. In this paper, we present a systematic review of SFC placement advances in dis-
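To make the SFC placement problem concrete, here is a toy first-fit heuristic, not taken from any surveyed paper: each virtual network function (VNF) in the chain is placed, in order, on the first server with enough spare CPU capacity. The chain, demands, and server names are all hypothetical.

```python
from typing import Dict, List

def first_fit_sfc_placement(chain: List[str],
                            cpu_demand: Dict[str, int],
                            server_capacity: Dict[str, int]) -> Dict[str, str]:
    """Place each VNF of the chain, in order, on the first server that
    still has enough free CPU. Raises if the chain cannot be placed."""
    remaining = dict(server_capacity)  # free CPU per server
    placement: Dict[str, str] = {}
    for vnf in chain:
        for server, free in remaining.items():
            if free >= cpu_demand[vnf]:
                placement[vnf] = server
                remaining[server] -= cpu_demand[vnf]
                break
        else:
            raise ValueError(f"no server can host {vnf}")
    return placement

# Hypothetical chain: firewall -> NAT -> load balancer.
chain = ["firewall", "nat", "lb"]
demand = {"firewall": 4, "nat": 2, "lb": 3}
servers = {"s1": 5, "s2": 6}
print(first_fit_sfc_placement(chain, demand, servers))
# {'firewall': 's1', 'nat': 's2', 'lb': 's2'}
```

Real placement strategies must additionally account for link delays, traffic variation, and ordering constraints along the chain's data path, which is what makes the general problem NP-hard.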