Data models provide the foundation for an organization's activities since they support the organization's systems and data. Therefore, the quality of data models is paramount. We describe a methodology for measuring the quality of conceptual data models created using a fact-oriented data modeling method called Fully Communication Oriented Information Modeling (FCO-IM). The measurement method is based on the framework for measuring conceptual model quality by Lindland et al. Four components are considered in the measurement: domain, model, language, and audience interpretation. Quality is measured on three aspects: syntactic quality (measured by syntax correctness), semantic quality (measured by feasible validity and feasible completeness), and pragmatic quality (measured by feasible comprehension). The method is then used to determine the quality of several FCO-IM conceptual data models that were created using a pattern language of conceptual data models, a new data modeling method that we are currently researching. The method contributes to the data modeling area by providing a quantitative and instructive way of measuring the quality of conceptual data models, especially in FCO-IM.

Keywords: conceptual model, conceptual data model, data modeling, FCO-IM, measurement, pattern, quality

Introduction

Information is one of the critical assets of a modern organization. Information is extracted from data stored in database systems. One aspect of data management is the definition of data structures. These structures are designed in an activity called data modeling, and the results are data models. Data models provide the foundation for an organization's activities since they support the organization's systems and data [20]. Therefore, the quality of data models is paramount. A data model is a collection of conceptual tools for describing data, data relationships, data semantics, and consistency constraints [16]. Three levels of data models are defined [19]: conceptual, logical, and physical. A conceptual data model is a relatively technology-independent specification of data structures and is close to business requirements [19]. We focus on the conceptual data model rather than the logical or physical data model because a conceptual data model can be viewed as the translation of business requirements into a technical form of data structures (thus, it serves as a "link" between human and machine), and deriving the logical and physical data models is a matter of transforming the conceptual data model using established algorithms. Thus, the challenge is how to provide high-quality conceptual data models.
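The three quality aspects above are each measured as a ratio over the statements of a model. The following is a minimal sketch, assuming each metric is a simple proportion of statements counted during a review session; the class name, field names, and formulas are illustrative assumptions, not the paper's actual definitions.

```python
from dataclasses import dataclass

# Hypothetical review counts for one FCO-IM conceptual data model.
# Each metric is sketched as a proportion, following the spirit of the
# Lindland et al. framework: syntactic quality relates model to language,
# semantic quality relates model to domain, and pragmatic quality relates
# model to audience interpretation.
@dataclass
class ModelReview:
    statements_total: int            # fact-type statements in the model
    statements_syntax_ok: int        # statements following FCO-IM syntax
    statements_valid: int            # statements confirmed valid for the domain
    domain_statements_covered: int   # required domain statements present
    domain_statements_total: int     # statements the domain requires
    statements_understood: int       # statements the audience interpreted correctly

    def syntactic_quality(self) -> float:
        return self.statements_syntax_ok / self.statements_total

    def semantic_validity(self) -> float:
        return self.statements_valid / self.statements_total

    def semantic_completeness(self) -> float:
        return self.domain_statements_covered / self.domain_statements_total

    def pragmatic_quality(self) -> float:
        return self.statements_understood / self.statements_total

review = ModelReview(
    statements_total=40,
    statements_syntax_ok=38,
    statements_valid=36,
    domain_statements_covered=30,
    domain_statements_total=32,
    statements_understood=34,
)
print(f"syntactic: {review.syntactic_quality():.2f}")   # → syntactic: 0.95
```

A review scoring close to 1.0 on all four ratios would indicate a model that is syntactically correct, feasibly valid and complete, and feasibly comprehensible to its audience.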
The use of primary source materials is recognized as key to supporting history and social studies education. The extensive digitization of library, museum, and other cultural heritage collections represents an important teaching resource. Yet, searching and selecting digital primary sources appropriate for classroom use can be difficult and time-consuming. This study investigates the design requirements and the potential usefulness of a domain-specific ontology to facilitate access to, and use of, a collection of digital primary source materials developed by the Library of the University of North Carolina at Chapel Hill. During a three-phase study, an ontology model was designed and evaluated with the involvement of social studies teachers. The findings revealed that the design of the ontology was appropriate to support the information needs of the teachers and was perceived as a potentially useful tool to enhance collection access. The primary contribution of this study is the introduction of an approach to ontology development that is user-centered and designed to facilitate access to digital cultural heritage materials. Such an approach should be considered on a case-by-case basis in relation to the size of the ontology being built, the nature of the knowledge domain, and the type of end users targeted.
The Standish Group reports that 83.9% of IT projects fail, and one of the top factors in failed projects is incomplete requirements or user stories. Therefore, it is essential to teach undergraduate students in computer science degree programs how to create complete user stories. Computer science programs include subjects or topics involving requirements or user story collection and writing, such as Requirements Engineering, Software Engineering, Project Management, or Software Quality Assurance. For that reason, we designed a web application called User Story Quality Analyzer (USQA) that uses Natural Language Processing modules to detect errors regarding usefulness, completeness, and polysemes in user story creation. The tool was evaluated from three perspectives: (1) a reliability test, where 35 user stories developed by experts were run through the app to prove the prototype's reliability; (2) a usability and utility analysis, where 48 students interacted with the tool and responded to a Satisfaction Usability Scale and an open-ended question, with the students reporting a high usability score; (3) finally, an error classification, where we gathered 159 user stories processed by the system and classified the students' common errors with respect to incompleteness and polysemes. After the evaluations, we concluded that USQA can evaluate user stories like an expert, which could help professors and instructors in their courses by providing feedback to students as they write user stories.
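To illustrate the kind of completeness check such a tool can perform, here is a minimal sketch assuming the common "As a ..., I want ..., so that ..." user story template. The pattern and function are illustrative assumptions; USQA's actual implementation relies on NLP modules rather than a regular expression.

```python
import re

# Hypothetical completeness check against the Connextra-style template:
# "As a <role>, I want <goal>, so that <benefit>".
STORY_PATTERN = re.compile(
    r"^As an? (?P<role>.+?),\s*"
    r"I want (?P<goal>.+?)"
    r"(?:,?\s*so that (?P<benefit>.+))?$",
    re.IGNORECASE,
)

def missing_parts(story: str) -> list[str]:
    """Return the names of template parts missing from a user story."""
    match = STORY_PATTERN.match(story.strip())
    if not match:
        # The story does not follow the template at all.
        return ["role", "goal", "benefit"]
    return [part for part in ("role", "goal", "benefit")
            if not match.group(part)]

print(missing_parts("As a student, I want to submit homework online"))
# → ['benefit']
```

A story missing its benefit clause, as above, would be flagged as incomplete; detecting polysemes (ambiguous terms) would require an additional lexical resource beyond this sketch.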