Access control models are an important tool for securing today's data systems. Institutions use them to define who their users are, what they may do, which resources they may reach, and which operations they may perform, and to manage this whole process. For institutions running distributed database systems, this is a difficult and costly task: user requests for resources spread across mutually dependent servers, the verification and authorization of those requests, and the monitoring of user actions cannot always be configured efficiently, so access control models are often implemented poorly. The model proposed in this study automatically computes the permissions and access levels of every user defined in a distributed database system for each object, so that it can decide more reliably which objects a user may access while blocking access to information the user does not need. The proposed model was applied to real-life data sets from organizations providing health, education, and public services, with all components running on servers sharing resources over a private network, and its performance was compared with that of traditional access control models. The results confirm that the proposed model yields more accurate access-level decisions and scales to many distributed database systems.
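As a concrete illustration of automatic permission computation, the minimal Python sketch below shows one plausible scheme: each object's access-control entry maps roles to levels, and a user's effective level for an object is the highest level granted by any of their roles. The class and function names, and the 0-3 level scale, are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's code): computing effective
# access levels for objects spread across several database servers.
from dataclasses import dataclass, field

@dataclass
class ObjectACL:
    server: str                       # server hosting the object
    object_id: str                    # table, view, procedure, ...
    role_levels: dict = field(default_factory=dict)  # role -> level (0..3)

def effective_access(user_roles, acls):
    """For each object, keep the highest level granted by any of the
    user's roles; objects left at level 0 stay invisible to the user."""
    levels = {}
    for acl in acls:
        best = max((acl.role_levels.get(r, 0) for r in user_roles), default=0)
        if best > 0:
            levels[(acl.server, acl.object_id)] = best
    return levels

acls = [
    ObjectACL("db-health", "patients", {"doctor": 2, "clerk": 1}),
    ObjectACL("db-edu", "grades", {"teacher": 2}),
]
print(effective_access({"doctor"}, acls))  # {('db-health', 'patients'): 2}
```

With this "highest grant wins" rule, a user holding no role with a non-zero level for an object never sees that object at all, which matches the goal of preventing access to unneeded information.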
Emotion recognition from the speech signal has been a research topic in many applications for years, and numerous systems have been developed to infer emotions from speech. To address the speaker emotion recognition problem, a hybrid model is proposed that classifies five speech emotions: anger, sadness, fear, happiness, and neutral. The aim of this study was to realize an automatic voice and speech emotion recognition system using a hybrid model that takes the forms and properties of Turkish speech into consideration. Approximately 3000 Turkish voice samples of words and phrases of differing lengths were collected from 25 male and 25 female speakers; the study thus uses an authentic, unique Turkish database. Features of these voice samples were extracted using Mel Frequency Cepstral Coefficients (MFCC) and Mel Frequency Discrete Wavelet Coefficients (MFDWC). The resulting feature vectors were trained with such methods as Gaussian Mixture Models (GMM), Artificial Neural Networks (ANN), Dynamic Time Warping (DTW), Hidden Markov Models (HMM), and a hybrid model combining GMM with a Support Vector Machine (SVM). In the first stage of the hybrid model, the SVM operates on subsets of the spectral feature vectors; in the second stage, training and test sets are formed from these spectral features. In the test phase, the emotion of a given voice sample is identified with reference to the trained voice samples. The results and performance of the classification algorithms are also reported comparatively.
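The abstract does not give implementation details, but the sketch below shows one plausible GMM-with-SVM combination consistent with its two-stage description: one GMM per emotion is fit to utterance-level MFCC vectors, and the per-class log-likelihood scores then serve as input features for an SVM. The use of librosa and scikit-learn, and every function name here, are assumptions made for illustration.

```python
# Hedged sketch of a two-stage GMM + SVM hybrid emotion classifier.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

EMOTIONS = ["anger", "sadness", "fear", "happiness", "neutral"]

def mfcc_vector(path, sr=16000, n_mfcc=13):
    """Utterance-level feature: mean of the frame-wise MFCCs."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def gmm_scores(gmms, X):
    """Per-class log-likelihoods, stacked as the SVM's input features."""
    return np.column_stack([gmms[e].score_samples(X) for e in EMOTIONS])

def train_hybrid(X, y, n_components=4):
    # Stage 1: fit one GMM per emotion on that emotion's feature vectors.
    gmms = {e: GaussianMixture(n_components, random_state=0).fit(X[y == e])
            for e in EMOTIONS}
    # Stage 2: train an SVM on the stacked GMM log-likelihood scores.
    svm = SVC(kernel="rbf").fit(gmm_scores(gmms, X), y)
    return gmms, svm

def predict(gmms, svm, X):
    return svm.predict(gmm_scores(gmms, X))
```

Here `X` is a 2-D array of utterance-level MFCC vectors and `y` an array of emotion labels; the GMM likelihoods act as a learned, class-aware re-encoding that the SVM separates, which is one common way to combine generative and discriminative models.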
In our age, technological developments bring certain problems with them, security foremost among them. In particular, biometric systems such as speaker authentication constitute a significant fraction of these security concerns, since sound recordings connected with various crimes must be analyzed for forensic purposes, and authentication systems require biometric data to be transmitted, designed, and classified securely. In this study, German, a language widely used in the economy, industry, and trade, was analyzed. The aim was to realize an automatic voice and speech recognition system using Mel Frequency Cepstral Coefficients (MFCC), Mel Frequency Discrete Wavelet Coefficients (MFDWC), and Linear Prediction Cepstral Coefficients (LPCC), taking the forms and properties of German speech into consideration. Approximately 2658 German voice samples of words and phrases of differing lengths were collected from 50 male and 50 female speakers. Features of these voice samples were extracted using the wavelet transform, and the resulting feature vectors were trained with such methods as Boltzmann Machines and Deep Belief Networks. In the test phase, the owner of a given voice sample is identified with reference to the trained voice samples. The results and performance of the classification algorithms are also reported comparatively.
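Since the abstract names wavelet features and Deep Belief Networks without code, the sketch below approximates that pipeline under stated assumptions: pywt sub-band log-energies stand in for MFDWC-style features, and a stack of scikit-learn BernoulliRBMs topped by logistic regression serves as a DBN-style classifier (scikit-learn ships restricted Boltzmann machines but no full DBN). All names and parameters are illustrative, not the paper's method.

```python
# Hedged sketch: wavelet sub-band features feeding a DBN-style RBM stack.
import numpy as np
import pywt
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def wavelet_features(signal, wavelet="db4", level=5):
    """Log-energy of each wavelet sub-band, a crude stand-in for MFDWC."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-10) for c in coeffs])

# DBN approximation: stacked RBMs learn a representation layer by layer,
# and a logistic layer on top performs the speaker classification.
dbn = Pipeline([
    ("scale", MinMaxScaler()),  # RBMs expect inputs scaled into [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# Usage: dbn.fit(X_train, y_train); dbn.predict(X_test)
# where each row of X is a wavelet_features() vector and y is a speaker ID.
```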