Geospatial information has become indispensable for many application fields, including traffic planning, urban planning, and energy management. Geospatial data are mainly stored in relational databases that have been developed over several decades, and most geographic information applications are desktop applications. With the arrival of big data, geospatial information applications are also moving to, e.g., mobile platforms and Geospatial Web Services, which require more flexible data schemas, faster query response times, and better scalability than traditional spatial relational databases currently provide. To meet these new requirements, NoSQL (Not only SQL) databases are increasingly being adopted for geospatial data storage, management, and querying. This paper reviews state-of-the-art geospatial data processing in the 10 most popular NoSQL databases. We summarize the supported geometry objects, main geometry functions, spatial indexes, query languages, and data formats of these 10 NoSQL databases, and we analyze their pros and cons for geospatial data processing. The literature review and analysis show that current document databases may be more suitable for massive geospatial data processing than other NoSQL databases, owing to their comprehensive support for geometry objects and data formats, as well as their performance, geospatial functions, index methods, and academic development. However, depending on the application scenario, graph, key-value, and wide-column databases have their own advantages.
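As an illustration of the kind of geospatial support surveyed here, the hedged Python sketch below stores a GeoJSON point in MongoDB (a document database of the type such reviews cover), builds a 2dsphere spatial index, and runs a proximity query. The connection URI, database, and collection names are assumptions for the sketch, not taken from the paper.

```python
# Minimal sketch of document-database geospatial storage, indexing, and querying.
# Assumes a local MongoDB instance and a hypothetical "cities" collection.
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
collection = client["demo"]["cities"]

# Store a point feature as GeoJSON and build a 2dsphere spatial index.
collection.insert_one({
    "name": "Sample City",
    "location": {"type": "Point", "coordinates": [13.405, 52.52]},  # [lon, lat]
})
collection.create_index([("location", GEOSPHERE)])

# Spatial query: find documents within 50 km of a reference point.
nearby = collection.find({
    "location": {
        "$nearSphere": {
            "$geometry": {"type": "Point", "coordinates": [13.4, 52.5]},
            "$maxDistance": 50_000,  # metres
        }
    }
})
for doc in nearby:
    print(doc["name"])
```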
Construction and demolition waste (C&D waste) is widely recognized as the main form of municipal solid waste, and its recycling and reuse are important issues in sustainable city development. Material flow analysis (MFA) can quantify material flows and stocks and is a useful tool for analyzing construction and demolition waste management. In recent years, MFA has been continually applied to construction and demolition waste processing, considering both single waste materials and mixed wastes, and at regional, national, and global scales. Moreover, MFA has gained new research extensions and new combined methods that provide dynamic, robust, and multifaceted assessments of construction and demolition waste. In this paper, we summarize and discuss the state of the art of MFA research in the context of construction and demolition waste recycling and disposal. Furthermore, we identify current research gaps and future research directions that are expected to promote the development of MFA for construction and demolition waste processing in the field of sustainable city development.
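Material flow analysis rests on the mass-balance principle that, for each process, inflows equal outflows plus the change in stock. The short Python sketch below illustrates that bookkeeping with hypothetical C&D waste figures; the quantities and category names are invented for illustration and do not come from the paper.

```python
# Minimal sketch of the mass-balance principle behind MFA:
# inflows = outflows + stock change. All figures are hypothetical, in tonnes/year.
inflows = {"demolition_concrete": 120_000, "construction_offcuts": 15_000}
outflows = {"recycled_aggregate": 90_000, "landfill": 30_000}

stock_change = sum(inflows.values()) - sum(outflows.values())
recycling_rate = outflows["recycled_aggregate"] / sum(inflows.values())

print(f"Net stock change: {stock_change} t/yr")   # 15000 t/yr
print(f"Recycling rate:   {recycling_rate:.1%}")  # 66.7%
```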
Automated Compliance Checking (ACC) of building/construction projects is one of the important applications in the Architecture, Engineering and Construction (AEC) industry, because it checks and reports whether a building design complies with relevant laws, policies, and regulations. Currently, ACC still involves many manual operations and consumes considerable time and cost. In addition, some sub-tasks of ACC have been researched individually, while few studies automate the whole ACC process. To address these issues, we propose a semantic approach that implements the whole ACC process as automatically as possible, in which Natural Language Processing (NLP) is used to extract rule terms and the logical relationships among these terms from textual regulatory documents. Rule terms are mapped to keywords (concepts or properties) in BIM data through term matching and semantic similarity analysis. Then, according to the mapped keywords in BIM and the logical relationships among the keywords, a corresponding SPARQL query is automatically generated. Based on the generated SPARQL query and the requirements of stakeholders, the query results indicate compliance or non-compliance with the rules. The case studies show that the proposed approach provides flexible and effective rule checking for BIM data. In addition, based on the proposed approach, we further develop a semantic framework to implement automated rule compliance checking in the construction industry.

INDEX TERMS: Automated Compliance Checking, data extraction, ifcOWL, natural language processing, SPARQL generation.
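To make the final checking step concrete, the Python sketch below runs a rule rendered as SPARQL ("door width shall be at least 0.9 m") against an RDF graph of BIM data using rdflib. The file name, namespace, and property names are hypothetical placeholders, and the query is hand-written for illustration rather than produced by the authors' generation pipeline.

```python
# Minimal sketch of executing a (generated) SPARQL rule query over RDF BIM data.
# The graph file, namespace, and property names (doors.ttl, ex:, ex:overallWidth)
# are hypothetical placeholders, not the ifcOWL terms used in the paper.
from rdflib import Graph

g = Graph()
g.parse("doors.ttl", format="turtle")  # hypothetical RDF export of a BIM model

# Example rule: "door width shall be at least 0.9 m" rendered as SPARQL.
query = """
PREFIX ex: <http://example.org/bim#>
SELECT ?door ?width WHERE {
    ?door a ex:Door ;
          ex:overallWidth ?width .
    FILTER(?width < 0.9)   # return non-compliant doors only
}
"""
for door, width in g.query(query):
    print(f"Non-compliant: {door} (width = {width} m)")
```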
Generally, building information modelling (BIM) models contain multiple dimensions of building information, including building design data, construction information, and maintenance-related contents, which are relevant to different engineering stakeholders. Efficient extraction of BIM data is a necessary and vital step for various data analyses and applications, especially in large-scale BIM projects. To extract BIM data, multiple query languages have been developed. However, using these query languages for data extraction usually requires engineers to have good programming skills, a flexible command of the query language(s), and a full understanding of the Industry Foundation Classes (IFC) EXPRESS schema or its ontology expression (ifcOWL). These requirements increase the difficulty of using query languages and raise the bar on the knowledge engineers need for data extraction. In this paper, we develop a simple method for automatic SPARQL (SPARQL Protocol and RDF Query Language) query generation to implement effective data extraction. Based on users' data requirements, we match the requirements with ifcOWL ontology concepts or instances, search the connecting relationships among the query keywords in the semantic BIM data, and generate the user-desired SPARQL query. We demonstrate through several case studies that our approach is effective and the generated SPARQL queries are accurate.
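A heavily simplified sketch of the idea is shown below: user keywords are mapped to ontology terms through a toy lookup table and substituted into a SPARQL template. The mapping entries and the template are illustrative assumptions only; the paper's actual term matching and relationship search over semantic BIM data are considerably more involved.

```python
# Minimal sketch of keyword-driven SPARQL generation (not the authors' algorithm).
# The keyword-to-term mapping below is a hypothetical stand-in for the result of
# term matching against ifcOWL concepts and properties.
KEYWORD_TO_TERM = {
    "wall": "ifc:IfcWall",
    "name": "ifc:name_IfcRoot",
}

def generate_sparql(entity_keyword: str, property_keyword: str) -> str:
    """Build a SELECT query for one entity/property pair of user keywords."""
    entity = KEYWORD_TO_TERM[entity_keyword]
    prop = KEYWORD_TO_TERM[property_keyword]
    return (
        "PREFIX ifc: <http://standards.buildingsmart.org/IFC/DEV/IFC4/ADD1/OWL#>\n"
        "SELECT ?instance ?value WHERE {\n"
        f"    ?instance a {entity} ;\n"
        f"              {prop} ?value .\n"
        "}"
    )

print(generate_sparql("wall", "name"))
```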
The STEP product model is an effort by the International Organization for Standardization (ISO) to develop an international product modeling standard. The product model involves both working standards and information models. As product accuracy requirements become tighter, dimensioning and tolerancing (D&T) plays an increasingly important role in the product lifecycle, so developing a D&T data model based on STEP is an important issue. This paper describes the development of a STEP-based D&T data model. The information required to build a STEP-based D&T scheme is investigated. Implementation of the data model and the construction of product D&T specifications based on it are then described. Three examples, including a single part with an interface to a tolerance analysis application and an assembled product, are used to illustrate the usage of the data model.
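As a rough illustration of the information such a data model must carry, the Python sketch below represents a toleranced feature with a plus/minus tolerance as simple data classes. The class and field names are hypothetical and are not the STEP/EXPRESS entities defined in the paper.

```python
# Minimal sketch (hypothetical names) of the dimensioning-and-tolerancing
# information a STEP-based D&T data model needs to carry for one feature.
from dataclasses import dataclass

@dataclass
class PlusMinusTolerance:
    nominal: float  # nominal dimension, mm
    upper: float    # upper deviation, mm
    lower: float    # lower deviation, mm

@dataclass
class TolerancedFeature:
    part_id: str
    feature_name: str
    tolerance: PlusMinusTolerance

    def limits(self) -> tuple[float, float]:
        t = self.tolerance
        return (t.nominal + t.lower, t.nominal + t.upper)

hole = TolerancedFeature("PART-001", "mounting_hole_diameter",
                         PlusMinusTolerance(nominal=10.0, upper=0.02, lower=-0.01))
print(hole.limits())  # (9.99, 10.02)
```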