Abstract-Advances in information technology and its widespread growth in several areas have increased the use of biometric authentication, making the protection of stored fingerprint templates essential. In this study, the authors propose a novel fingerprint template protection scheme built on a Delaunay triangulation net constructed from the fingerprint minutiae. Two methods, FS_INCIR and FS_AVGLO, are proposed to construct a feature set from the Delaunay triangles. The computed feature set is quantised and mapped to a 3D array to produce a fixed-length 1D bit string. A discrete Fourier transform (DFT) is applied to this bit string to generate a complex vector, which is finally multiplied by the user's key to produce a cancellable template. The proposed feature-set computation maintains a good balance between security and performance. The methods were tested on the FVC 2002 and FVC 2004 databases, and the experimental results show satisfactory performance. Further, the authors analyse the scheme against the four requirements for protecting biometric templates, namely diversity, revocability, irreversibility and accuracy, thus demonstrating the feasibility of the proposed scheme.
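The pipeline described above (triangulate the minutiae, derive per-triangle features, quantise into a 3D grid, flatten to a bit string, apply the DFT, and multiply by a user key) can be illustrated with a short sketch. This is a minimal illustration, not the paper's FS_INCIR or FS_AVGLO construction: the per-triangle feature used here (side lengths), the grid size, and the quantisation range are assumed placeholders.

```python
import numpy as np
from scipy.spatial import Delaunay

def cancellable_template(minutiae, user_key, grid=(8, 8, 8)):
    """Illustrative pipeline: Delaunay triangles -> quantised features
    -> fixed-length bit string -> DFT -> key-multiplied template."""
    tri = Delaunay(minutiae[:, :2])           # triangulate (x, y) positions
    bits = np.zeros(grid, dtype=np.uint8)     # 3D array, later flattened to 1D
    for simplex in tri.simplices:
        p = minutiae[simplex, :2]
        # Placeholder feature per triangle: its three side lengths
        # (the paper's FS_INCIR/FS_AVGLO features are computed differently).
        sides = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
        idx = tuple(np.minimum((sides / 50.0 * np.array(grid)).astype(int),
                               np.array(grid) - 1))
        bits[idx] = 1                         # quantise into a grid cell
    spectrum = np.fft.fft(bits.ravel())       # DFT -> complex vector
    return spectrum * user_key                # user key makes it cancellable

# Usage with synthetic data: 30 random minutiae and a unit-modulus key.
rng = np.random.default_rng(0)
minutiae = rng.uniform(0, 300, size=(30, 2))
key = np.exp(2j * np.pi * rng.uniform(size=8 * 8 * 8))
template = cancellable_template(minutiae, key)
```

Revoking a compromised template then amounts to issuing a new random key, which yields an entirely different complex vector from the same fingerprint.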
Summary Hadoop distributed file system (HDFS) and the MapReduce model have become popular technologies for large-scale data organization and analysis. The existing model of data organization and processing in Hadoop using HDFS and MapReduce is ideally tailored for search and data-parallel applications, in which there is no dependency between neighbouring/adjacent data. However, many scientific applications, such as image mining, data mining, knowledge mining, and satellite image processing, depend on adjacent data for processing and analysis. In this paper, we identify the requirements of overlapped data organization and propose a two-phase extension to HDFS and the MapReduce programming model, called XHAMI, to address them. The extended interfaces are presented as APIs and implemented in the context of the image processing application domain. We demonstrate the effectiveness of XHAMI through case studies of image processing functions, along with the results. Although XHAMI incurs a small overhead in data storage and input/output operations, it greatly enhances system performance and simplifies the application development process. XHAMI works without any changes to existing MapReduce models and can be used by many applications that require overlapped data.
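XHAMI's actual APIs are not reproduced in this abstract; the sketch below only illustrates the underlying idea of overlapped data organization: each split carries a few extra boundary rows (a halo) so that a neighbourhood operation can run on every block independently, with the halo trimmed before the per-block results are stitched together. The block size, overlap width, and choice of Sobel filter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

def overlapped_splits(image, block_rows, overlap):
    """Yield row blocks carrying `overlap` extra boundary rows (a halo),
    plus how many halo rows were prepended/appended, for later trimming."""
    h = image.shape[0]
    for start in range(0, h, block_rows):
        lo = max(0, start - overlap)
        hi = min(h, start + block_rows + overlap)
        yield image[lo:hi], start - lo, hi - min(h, start + block_rows)

def map_task(block, head, tail):
    """Per-block 'map': a 3x3 neighbourhood filter that would be wrong at
    block edges without the halo rows; trim the halo before emitting."""
    out = sobel(block.astype(float), axis=0)
    return out[head: out.shape[0] - tail] if tail else out[head:]

image = np.random.rand(1024, 512)
blocks = [map_task(*args) for args in overlapped_splits(image, 256, overlap=1)]
full = np.vstack(blocks)          # stitched result, same shape as the input
assert full.shape == image.shape
```

Because each block's halo supplies the true neighbouring rows, the stitched output matches a whole-image filter exactly, without any communication between map tasks.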
Abstract-Cloud computing is a promising, cost-efficient, service-oriented computing platform for delivering resources on demand in science, engineering, business and social networking. Big Data clouds are a new generation of data analytics platforms that use Cloud computing as a back-end technology for information mining, knowledge discovery and decision making based on statistical and empirical tools. MapReduce scheduling models for Big Data computing operate in cluster mode, where the data nodes are pre-configured with computing facilities for processing. These MapReduce models follow a compute-push model, pushing the logic to the data node for analysis, primarily to minimize or eliminate data-migration overheads between computing resources and data nodes. Such models perform well in cluster setups but are ill-suited to platforms in which data storage and computing resources are decoupled. In this paper, we propose a Genetic Algorithm based scheduler for such Big Data clouds, where computational and data services are offered as decoupled services. The approach uses evolutionary methods that account for data dependencies, computational resources and effective utilization of bandwidth, thus achieving higher throughput.
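A minimal sketch of the kind of Genetic Algorithm scheduler this abstract describes, assuming a decoupled setup in which each task's data lives on a storage node and must be transferred to the chosen compute node over a limited-bandwidth link. The problem sizes, cost model, and GA parameters here are illustrative assumptions, not the paper's.

```python
import random

# Illustrative problem: assign each task to a compute node. Data for task t
# lives on storage node data_loc[t]; moving it costs size / bandwidth.
TASKS, NODES = 20, 4
random.seed(1)
data_size = [random.uniform(1, 10) for _ in range(TASKS)]      # GB
compute_cost = [random.uniform(1, 5) for _ in range(TASKS)]    # seconds
bandwidth = [[random.uniform(0.5, 2.0) for _ in range(NODES)]  # GB/s, storage
             for _ in range(NODES)]                            # node -> compute node
data_loc = [random.randrange(NODES) for _ in range(TASKS)]

def makespan(assign):
    """Fitness: completion time of the busiest node, counting both the
    data-transfer time (size / bandwidth) and the compute time."""
    load = [0.0] * NODES
    for t, n in enumerate(assign):
        load[n] += data_size[t] / bandwidth[data_loc[t]][n] + compute_cost[t]
    return max(load)

def evolve(pop_size=40, generations=200, mutation=0.1):
    pop = [[random.randrange(NODES) for _ in range(TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, TASKS)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:            # point mutation
                child[random.randrange(TASKS)] = random.randrange(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("best makespan: %.2f s" % makespan(best))
```

Encoding the schedule as a task-to-node vector lets crossover and mutation explore trade-offs between bandwidth-aware data placement and load balancing, which is the core of the evolutionary approach described above.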
In data grids, fast and proper replica selection leads to better resource utilization by reducing the latency of accessing the best replicas and speeding up the execution of data grid jobs. In this paper, we propose a new strategy that improves replica selection in data grids using the reduct concept of Rough Set Theory (RST). The QuickReduct algorithm converts unsupervised clusters into supervised reducts. A rule-generation algorithm is then used to obtain optimal rules for deriving usage patterns from the data grid information system. The experiments are carried out using the Rough Set Exploration System (RSES) tool.
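A minimal sketch of the QuickReduct idea on a toy replica-selection table, using the standard rough-set dependency degree: greedily add the conditional attribute that most increases the dependency of the decision attribute until it matches that of the full attribute set. The attribute names and rows are invented for illustration; the paper itself uses the RSES tool rather than custom code.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices by their values on `attrs` (an equivalence relation)."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, attrs, decision):
    """Dependency degree: fraction of rows whose `attrs`-block is consistent
    (all rows in the block share the same decision value)."""
    if not attrs:
        return 0.0
    consistent = 0
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            consistent += len(block)
    return consistent / len(rows)

def quickreduct(rows, conditions, decision):
    """Greedy QuickReduct: add the attribute with the largest dependency
    gain until the reduct matches the full condition set."""
    reduct, target = [], dependency(rows, conditions, decision)
    while dependency(rows, reduct, decision) < target:
        best = max((a for a in conditions if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct

# Toy replica table: site attributes -> was this replica the best choice?
rows = [
    {"load": "low",  "speed": "fast", "dist": "near", "select": "yes"},
    {"load": "low",  "speed": "fast", "dist": "far",  "select": "yes"},
    {"load": "high", "speed": "slow", "dist": "near", "select": "no"},
    {"load": "high", "speed": "fast", "dist": "far",  "select": "no"},
    {"load": "low",  "speed": "slow", "dist": "far",  "select": "no"},
]
print(quickreduct(rows, ["load", "speed", "dist"], "select"))  # ['load', 'speed']
```

On this toy table the reduct drops the distance attribute entirely, which is exactly the kind of pruning that lets rule induction derive compact usage patterns for replica selection.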