Requirements prioritization is considered one of the most important activities in the requirements engineering process. It is used to define the order or schedule in which requirements are implemented, based on their priority or importance from the stakeholders' viewpoints. Researchers have proposed many requirements prioritization techniques, but no single technique suits all project types. In this paper we give an overview of the requirements engineering process and the concept of requirements prioritization. We also present the most popular techniques used to prioritize software project requirements and a comparison between these techniques. We further highlight the importance of including non-functional requirements in prioritization, given their strong effect on project success and quality; some approaches used to prioritize non-functional requirements are discussed. In addition, a general model is proposed, based on the reviewed prioritization techniques, that suggests the best-suited technique for a specific project according to the decision makers' parameters.
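To make the core idea concrete, the following is a minimal, hypothetical sketch of one common family of prioritization techniques (stakeholder-weighted scoring); the requirement names, stakeholder roles, weights and scores are invented for illustration and are not taken from the paper, which surveys and compares several techniques rather than prescribing this one.

```python
# Hypothetical weighted-scoring prioritization sketch.
# Each stakeholder rates each requirement; requirements are ordered by the
# stakeholder-weighted sum of those ratings.

stakeholder_weights = {"customer": 0.5, "developer": 0.3, "tester": 0.2}

# scores[requirement][stakeholder] = importance on a 1-9 scale (invented data)
scores = {
    "R1 user login":                    {"customer": 9, "developer": 5, "tester": 6},
    "R2 export report":                 {"customer": 6, "developer": 4, "tester": 3},
    "R3 fast startup (non-functional)": {"customer": 7, "developer": 8, "tester": 7},
}

def priority(requirement):
    # Weighted sum of the stakeholders' scores for one requirement.
    return sum(stakeholder_weights[s] * v for s, v in scores[requirement].items())

ranked = sorted(scores, key=priority, reverse=True)
for r in ranked:
    print(f"{priority(r):.2f}  {r}")
```

Note how the non-functional requirement (R3) can outrank functional ones once it is scored alongside them, which is the motivation for including non-functional requirements in the prioritization step.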
Distributed Data Mining (DDM) has been proposed as a means to analyze distributed data: DDM discovers patterns and performs prediction over multiple distributed data sources. However, DDM faces several problems regarding autonomy, privacy, performance and implementation. DDM requires homogeneity of environment, control, administration and the classification algorithm(s), and such requirements are too strict and inflexible for many applications. In this paper, we propose combining a Multi-Agent System (MAS) with DDM (MAS-DDM). MAS is a mechanism for creating goal-oriented autonomous agents within shared environments, with communication and coordination facilities. We show that MAS-DDM is both desirable and beneficial. In MAS-DDM, agents communicate their beliefs (computed classifications) without exposing private and non-sharable data, and other agents decide whether to use these beliefs when classifying instances and adjusting their prior assumptions about each class of data. In MAS-DDM, we develop and use a modified Naive Bayesian algorithm because (1) Naive Bayesian has been shown to be among the most widely used algorithms for uncertain data, and (2) even if all agents in MAS-DDM use the same algorithm, MAS-DDM performs better than DDM approaches with non-communicating processes. Point (2) provides evidence that exchanging information between agents significantly increases the accuracy of the classification task.
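The sketch below illustrates the general belief-exchange idea under stated assumptions; it is not the paper's modified Naive Bayesian algorithm. Each agent trains a plain categorical Naive Bayes model on its own private partition and shares only per-class log-posterior "beliefs" for a query instance, never the raw records; the agent names and toy data are invented.

```python
# Minimal belief-sharing Naive Bayes sketch (assumed setup, not the paper's exact method).
from collections import defaultdict
import math

class NaiveBayesAgent:
    def __init__(self, name):
        self.name = name
        self.class_counts = defaultdict(int)                 # class -> count
        self.feature_counts = defaultdict(lambda: defaultdict(int))  # (class, feat idx) -> value -> count
        self.n = 0

    def fit(self, X, y):
        # X: categorical feature tuples, y: class labels (local, private data).
        for xi, yi in zip(X, y):
            self.class_counts[yi] += 1
            self.n += 1
            for j, v in enumerate(xi):
                self.feature_counts[(yi, j)][v] += 1

    def belief(self, x):
        # Log-posterior "beliefs" per class (up to a constant), with simple
        # Laplace-style smoothing; only these numbers are communicated.
        beliefs = {}
        for c, cc in self.class_counts.items():
            logp = math.log(cc / self.n)
            for j, v in enumerate(x):
                counts = self.feature_counts[(c, j)]
                logp += math.log((counts.get(v, 0) + 1) / (cc + len(counts) + 1))
            beliefs[c] = logp
        return beliefs

def combined_prediction(agents, x):
    # A receiving agent combines the shared beliefs by summing log-posteriors,
    # i.e. multiplying the agents' independent probability estimates.
    total = defaultdict(float)
    for agent in agents:
        for c, logp in agent.belief(x).items():
            total[c] += logp
    return max(total, key=total.get)

# Toy usage: two agents holding disjoint private partitions.
a1, a2 = NaiveBayesAgent("site-A"), NaiveBayesAgent("site-B")
a1.fit([("sunny", "hot"), ("sunny", "mild")], ["no", "yes"])
a2.fit([("rain", "mild"), ("rain", "cool")], ["yes", "no"])
print(combined_prediction([a1, a2], ("sunny", "mild")))
```

The point of the sketch is that the combined prediction draws on evidence from both partitions even though neither agent ever sees the other's data, which is the intuition behind the accuracy gain claimed for communicating agents over non-communicating DDM processes.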
Fog computing is a new network architecture and computing paradigm that uses user devices or near-user devices (the network edge) to carry out some processing tasks. It thus extends cloud computing with greater flexibility, similar to that found in ubiquitous networks. A smart city based on the concept of fog computing with a flexible hierarchy is proposed in this paper. The aim of the proposed design is to overcome the limitations of previous approaches, which depend on various network architectures such as cloud computing, autonomic network architecture and ubiquitous network architecture. Accordingly, the proposed approach reduces the latency of data processing and transmission, enabling real-time applications; distributes processing tasks over edge devices to reduce the cost of data processing; and allows collaborative data exchange among the applications of the smart city. The design consists of five major layers, which can be increased or merged according to the amount of data processing and transmission in each application: the connection layer, real-time processing layer, neighborhood linking layer, main-processing layer, and data server layer. A case study of a novel smart public car parking, traveling and direction advisor is implemented using iFogSim, and the results show that the design significantly reduces the delay of real-time applications and lowers cost and network usage compared to the cloud-computing paradigm. Moreover, although the proposed approach increases the scalability and reliability of users' access, it sacrifices little in time, cost or network usage compared to a fixed fog-computing design.
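The following toy sketch (plain Python, not iFogSim code) illustrates the placement idea behind the flexible five-layer hierarchy: a task is handled at the lowest layer whose capacity suffices, so latency-sensitive work stays near the edge and only heavy jobs travel toward the cloud. The layer names follow the design above; the capacities, latencies and task sizes are invented for illustration.

```python
# Assumed capacities (MIPS) and round-trip latencies (ms) per layer; invented numbers.
layers = [
    {"name": "connection",           "capacity_mips": 200,    "round_trip_ms": 2},
    {"name": "real-time processing", "capacity_mips": 1000,   "round_trip_ms": 5},
    {"name": "neighborhood linking", "capacity_mips": 3000,   "round_trip_ms": 15},
    {"name": "main processing",      "capacity_mips": 10000,  "round_trip_ms": 40},
    {"name": "data server (cloud)",  "capacity_mips": 50000,  "round_trip_ms": 120},
]

def place(task_mips):
    """Return the first (closest-to-edge) layer able to run the task."""
    for layer in layers:
        if task_mips <= layer["capacity_mips"]:
            return layer
    return layers[-1]  # fall back to the cloud layer

# Hypothetical smart-parking/travel-advisor workloads of increasing size.
for task, mips in [("parking-slot update", 150),
                   ("route advisor", 2500),
                   ("city-wide analytics", 40000)]:
    layer = place(mips)
    print(f"{task}: runs at the '{layer['name']}' layer, ~{layer['round_trip_ms']} ms round trip")
```

Under these assumed numbers, only the heaviest job reaches the data-server layer, which mirrors the reported reduction in delay, cost and network usage relative to sending everything to the cloud.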