Our purpose in this research is to develop a method to automatically and efficiently classify web images as Unified Modeling Language (UML) static diagrams, and to produce a computer tool that implements this function. The tool receives a bitmap file (in various formats) as input and reports whether the image corresponds to a diagram. For pragmatic reasons, we restricted ourselves to the simplest kinds of diagrams, which are the most useful for automated software reuse: computer-edited 2D representations of static diagrams. The tool does not require that the images be explicitly or implicitly tagged as UML diagrams. The tool extracts graphical characteristics from each image (such as the grayscale histogram, the color histogram and elementary geometric forms) and uses a combination of rules to classify it. The rules are obtained with machine learning techniques (rule induction) from a sample of 19,000 web images manually classified by experts. In this work, we do not consider the textual contents of the images. Our tool reaches nearly 95% agreement with manually classified instances, improving on the effectiveness of related research. Moreover, despite using a training dataset 15 times larger, the time required to process each image and extract its graphical features (0.680 s) is seven times lower.
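The classification pipeline described above (extract a grayscale histogram, then apply induced rules) can be sketched as follows. This is an illustrative toy only: the actual features, induced rules and thresholds of the paper are not reproduced here, and `looks_like_uml_diagram` with its 0.6 threshold is a hypothetical single rule.

```python
def grayscale_histogram(pixels, bins=16):
    """Normalized grayscale histogram of a flat list of 0-255 intensities."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def looks_like_uml_diagram(pixels, bins=16):
    """Toy rule in the spirit of rule induction: computer-edited diagrams
    tend to concentrate intensity mass in the darkest bins (lines, text)
    and the brightest bins (background), unlike photographs."""
    hist = grayscale_histogram(pixels, bins)
    extremes = hist[0] + hist[-1]   # darkest + brightest bins
    return extremes > 0.6           # hypothetical induced threshold

# A synthetic "diagram-like" image: mostly white background, some black lines.
diagram = [255] * 900 + [0] * 100
print(looks_like_uml_diagram(diagram))  # True: 100% of mass in extreme bins
```

In the paper's approach such rules are not hand-written but learned from the manually labeled sample; several features (color histogram, detected geometric forms) would be combined rather than a single histogram test.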
Abstract. The use of different techniques and tools is common practice to cover all stages in the development lifecycle, generating a large number of work products under different formats and protocols. This situation implies that a collaborative environment has the potential to integrate all these data sources, creating a real knowledge graph of the development lifecycle and easing tasks such as traceability, verification and validation (V&V) or quality management. In this light, the OSLC (Open Services for Lifecycle Collaboration) initiative pursues the creation of specifications to unify, under a common and shared data model (the Resource Description Framework, RDF), all pieces of information and data involved in the development of a system. The present paper introduces the concept of knowledge management, outlines an OSLC-based specification in the context of Systems Engineering and presents a case study exposing product scheme classifications.
Quality management of requirements has seen a dramatic increase in the number of applications, management platforms, data sources, etc., gaining momentum in the Systems Engineering area and, more specifically, in the deployment of the next wave of critical systems. In this context, one of the next big things lies in the creation of quality functions that can automatically detect issues and make decisions according to natural-language requirements specifications and models. In this sense, a quality indicator of requirements seeks to provide an intelligent environment for measuring attributes such as correctness, consistency and completeness based on domain knowledge, in which both functional and non-functional properties of system components can be validated and verified, easing the transition to a smart system environment. Thus, the testing of critical systems based on requirements quality can be seen as a special kind of policy-making strategy that must compile several key indicators to summarize data and information and obtain an objective quantitative measure. Nevertheless, the quantitative analysis of several quality indicators is becoming a challenging task due to natural-language ambiguities and a tangled, heterogeneous environment of data, providers, etc. Existing tools and techniques based on traditional processes of quality assessment prevent a proper use of this new dynamic data environment, hindering more timely, adaptable and flexible (on-demand) quantitative index creation and, as a consequence, more accurate decisions. On the other hand, semantic-based technologies emerge to provide the adequate building blocks to represent domain knowledge and process data in a flexible fashion using a common and shared data model.
That is why the present paper introduces a Resource Description Framework (RDF) vocabulary to semantically represent and compute quantitative indexes as part of the implementation of the Open Services for Lifecycle Collaboration (OSLC) Quality Management specification. Finally, some discussion, conclusions and future work are also outlined.
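The idea of compiling several key indicators into one objective quantitative measure, and then representing that measure in RDF, can be sketched as below. The aggregation formula, the indicator names and the `qm:` vocabulary terms are all hypothetical placeholders, not the paper's actual vocabulary.

```python
def quality_index(indicators, weights):
    """Weighted aggregation of normalized [0, 1] quality indicators
    into a single quantitative index."""
    total_w = sum(weights[k] for k in indicators)
    return sum(indicators[k] * weights[k] for k in indicators) / total_w

def as_turtle(index_uri, value):
    """Serialize the computed index as RDF triples in Turtle syntax,
    using made-up qm: vocabulary terms for illustration."""
    return (f"<{index_uri}> a qm:QuantitativeIndex ;\n"
            f'    qm:value "{value:.2f}"^^xsd:decimal .')

indicators = {"correctness": 0.9, "consistency": 0.8, "completeness": 0.7}
weights = {"correctness": 0.5, "consistency": 0.3, "completeness": 0.2}
idx = quality_index(indicators, weights)
print(as_turtle("http://example.org/index/1", idx))  # qm:value "0.83"
```

Representing the result as RDF rather than a bare number is what allows the index to be linked to requirements, providers and domain knowledge in the shared OSLC data model.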
Abstract. The present paper discusses the concept of requirement in the contexts of software and systems engineering. Requirements are usually extracted from documents, stakeholders and existing systems, generating a natural-language specification. On the other hand, system models are designed to define system behavior and to validate and verify the system against a specification. That is why requirements, as the basic unit of specification, must be aligned and linked to a model with the aim of supporting the whole development lifecycle of any product or service. Although some Model-Based Systems Engineering (MBSE) methodologies have tried to bridge the gap between natural language and models, moving from document-centric to model- or requirement-based approaches, there is still a lack of interoperability and integration between requirements and models that prevents a proper (re)use of requirements across the whole lifecycle. Furthermore, this situation is becoming a major challenge in critical systems, since complete verification and validation must be ensured. In this light, the Open Services for Lifecycle Collaboration (OSLC) initiative pursues the creation of specifications to unify, under a common and shared data model (the Resource Description Framework, RDF), all pieces of information and data involved in the development of a system. This initiative applies the principles of the Semantic Web and the Linked Data initiative to provide a web-standards-based environment for systems development. More specifically, OSLC defines the Requirements Management specification to deliver a common environment for managing and exchanging requirements. In this sense, the ongoing work implementing this specification and an example of modeling a controlled vocabulary for requirements management under the principles of the Linked Data and OSLC specifications are provided to demonstrate the capabilities and intentions of both initiatives.
Finally, some discussion, conclusions and future directions are also presented.
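A requirement managed under the OSLC Requirements Management specification is exposed as an RDF resource. The sketch below emits such a resource in Turtle; the `oslc_rm:Requirement` type and `dcterms:` properties follow the OSLC RM specification, but the URI and requirement text are invented for illustration.

```python
def requirement_turtle(uri, title, description):
    """Render one requirement as an OSLC RM RDF resource in Turtle syntax.
    Assumes the oslc_rm: and dcterms: prefixes are declared elsewhere."""
    return (f"<{uri}> a oslc_rm:Requirement ;\n"
            f'    dcterms:title "{title}" ;\n'
            f'    dcterms:description "{description}" .')

print(requirement_turtle(
    "http://example.org/req/1",
    "Braking response time",
    "The braking subsystem shall respond within 50 ms."))
```

Because the requirement is a plain RDF resource with a dereferenceable URI, it can be linked to models, test cases and quality indexes held in other lifecycle tools, which is the interoperability gap the paper addresses.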