The relevance of this type of network stems from the development and improvement of protocols, methods, and tools for verifying routing policies and algorithmic models that describe various aspects of SDN, which determined the purpose of this study. The main goal of this work is to develop specialized methods for estimating the maximum end-to-end delay of packet transmission over an SDN infrastructure. Methods from network calculus theory are used to build a model for estimating the maximum transmission delay of a data packet. This theory derives deterministic bounds by analyzing best- and worst-case scenarios for individual parts of the network and then optimally combining them. The developed method of theoretical evaluation was found to demonstrate high accuracy. Consequently, it is shown that the developed algorithm can estimate SDN performance. By comparing different possible configurations, conclusions can be drawn about the optimality of the configuration of network elements. Furthermore, the proposed algorithm for computing an upper bound on packet transmission delay can reduce network maintenance costs by detecting inconsistencies between network equipment settings and requirements. The scientific novelty of these results is that the achievable upper bound on data delay can now be computed in polynomial time even for arbitrary tree topologies, not only when the network handlers are arranged in tandem. Doi: 10.28991/ESJ-2022-06-05-010
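The flavor of the network-calculus bounds the abstract refers to can be illustrated with the textbook case (a sketch only; the paper's algorithm for tree topologies is more general): a token-bucket flow with arrival curve α(t) = burst + rate·t crossing rate-latency servers β(t) = R·max(0, t − T). Rate-latency servers in tandem concatenate into a single rate-latency server with R = min(Rᵢ) and T = ΣTᵢ, so the delay bound "pays the burst only once". All numbers below are made-up illustration values.

```python
# Illustrative network-calculus delay bound; NOT the paper's algorithm.
# Arrival curve: token bucket  alpha(t) = burst + rate * t.
# Service curve: rate-latency  beta(t)  = R * max(0, t - T).

def delay_bound(burst, rate, R, T):
    """Worst-case delay of a (burst, rate) flow through a
    rate-latency (R, T) server; requires rate <= R for stability."""
    assert rate <= R, "flow rate must not exceed the service rate"
    return T + burst / R

def tandem_service(servers):
    """Concatenate rate-latency servers: (min of rates, sum of latencies)."""
    return min(R for R, _ in servers), sum(T for _, T in servers)

# Three hypothetical switches on the path: (service rate, latency).
servers = [(10.0, 0.002), (8.0, 0.001), (12.0, 0.003)]
R, T = tandem_service(servers)
e2e = delay_bound(burst=2.0, rate=5.0, R=R, T=T)
print(e2e)  # 0.006 + 2.0/8.0 = 0.256
```

Summing per-hop bounds instead would charge the burst term at every hop, which is why concatenating the service curves first gives the tighter estimate.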
Active use of data collections by experts and decision makers tasked with preparing decision alternatives is an essential characteristic of the effectiveness of an anthropotechnic system. In many cases, such data analysis requires standalone visual analysis, which implies projecting a multidimensional data array onto a lower-dimensional space. The article presents the results of developing the theoretical foundations of such an algorithm, oriented towards an interactive analysis procedure.
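The kind of projection described here can be sketched with the simplest baseline, PCA via SVD (an assumption for illustration; the article's interactive algorithm is its own construction and may differ substantially):

```python
# Minimal baseline for projecting multidimensional data to 2-D for
# visual analysis: PCA via SVD. Illustrative only.
import numpy as np

def project_2d(X):
    """Project the rows of X onto the first two principal components."""
    Xc = X - X.mean(axis=0)                  # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                     # N x 2 coordinates for plotting

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                # 100 points in 6 dimensions
Y = project_2d(X)
print(Y.shape)  # (100, 2)
```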
In today's digital world, saturated with data flows, universal multifunctional systems are developing that are capable of solving various problems related to optimizing the use of available computing resources. A distinctive feature of such systems is the heterogeneity of incoming flows of user requests, caused by the multifunctionality of modern information systems and expressed in the support of various multimedia services on a single platform. Data heterogeneity and large data volumes create many problems related to the speed of digital systems and the security of data storage. Solutions can be found in artificial intelligence (AI) technologies, particularly machine learning. Therefore, the development and implementation of digital telecommunication complexes for storing, processing, and forming a dynamic flow of multiformat data using AI technologies are becoming more relevant. This paper aims to identify trends and prospects for developing such complexes and to formulate proposals on their prospective characteristics. The authors focused on reviewing the experience of Russian organizations developing multi-object analytics systems and on analyzing the technical and functional characteristics of existing systems. The result of the review and analysis is a table comparing the technical characteristics of existing complexes, together with proposals for characteristics that are promising for further implementation.
Nowadays, machine learning methods are actively used to process big data. A promising direction is neural networks whose structure is optimized on the principles of self-configuration. Genetic algorithms are applied to solve this nontrivial problem. Most multicriteria evolutionary algorithms use a procedure known as non-dominated sorting to rank solutions. However, the efficiency of procedures for adding points and updating rank values in non-dominated sorting (incremental non-dominated sorting) remains low. In this regard, this research improves the performance of these algorithms, including under conditions of asynchronous evaluation of the fitness of individuals. The relevance of the research is determined by the fact that, although many scholars and specialists have studied the self-tuning of neural networks, they have not yet proposed a comprehensive solution to this problem. In particular, algorithms for efficient non-dominated sorting under incremental and asynchronous updates in evolutionary multicriteria optimization have not been fully developed to date. To achieve this goal, a hybrid co-evolutionary algorithm was developed that significantly outperforms all of its constituent algorithms, including error back-propagation and genetic algorithms operating separately. The novelty of the obtained results lies in the fact that the developed algorithms have minimal asymptotic complexity. The practical value of the developed algorithms is that they make it possible to solve applied problems of increased complexity in practically acceptable time. Doi: 10.28991/HIJ-2023-04-01-011
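For readers unfamiliar with the ranking step being optimized, a naive (non-incremental) non-dominated sort looks like the following sketch. This is the O(M·N²)-per-front baseline for minimization problems; the paper's contribution is precisely the incremental/asynchronous variants that avoid recomputing it from scratch.

```python
# Naive non-dominated sorting (minimisation). Baseline illustration only;
# incremental variants update ranks when a single point is added.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Return the front index of each point (0 = Pareto-optimal front)."""
    n = len(points)
    ranks = [None] * n
    remaining = set(range(n))
    front = 0
    while remaining:
        # Points not dominated by any other remaining point form a front.
        current = {i for i in remaining
                   if not any(dominates(points[j], points[i])
                              for j in remaining if j != i)}
        for i in current:
            ranks[i] = front
        remaining -= current
        front += 1
    return ranks

pts = [(1, 5), (2, 2), (5, 1), (4, 4), (6, 6)]
print(non_dominated_sort(pts))  # [0, 0, 0, 1, 2]
```

Re-running this whole procedure after every evaluated individual is what makes the incremental setting costly, and asynchronous fitness evaluation makes the insertions arrive one at a time in arbitrary order.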
The purpose of this investigation is to develop a method for quantitative assessment of the uniqueness of personal medical data (PMD) in order to improve their protection in medical information systems (MIS). The relevance of this goal stems from the fact that depersonalized PMD can form unique combinations that are potentially of interest to intruders and threaten to reveal the patient's identity and medical confidentiality. Existing approaches were analyzed, and a new method for quantifying the degree of uniqueness of PMD was proposed. A weakness of existing approaches is the assumption that an attacker will use exact matching to identify people. The novelty of the method proposed in this paper is that it is not limited to this hypothesis, although it has its own limitation: it is not applicable to small samples. The developed method for determining the PMD uniqueness coefficient rests on the assumption that the features follow a multivariate normal distribution characterized by a covariance matrix, which most reliably reflects the existing relationships between features when analyzing large data samples. The results obtained in computational experiments show that the method's efficiency is no worse than that of focus groups of specialized experts. Doi: 10.28991/HIJ-2023-04-01-09
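Under a multivariate-normal assumption with a covariance matrix, a record's rarity is naturally measured by its Mahalanobis distance from the sample centre. The sketch below illustrates that idea only; the paper's PMD uniqueness coefficient is its own construction, and the data here are synthetic.

```python
# Illustrative rarity score under a multivariate-normal assumption:
# squared Mahalanobis distance d^2 = (x - mu)^T Sigma^{-1} (x - mu).
# NOT the paper's uniqueness coefficient, just the underlying intuition.
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of record x from the sample centre."""
    d = np.asarray(x, dtype=float) - mean
    return float(d @ np.linalg.inv(cov) @ d)

rng = np.random.default_rng(1)
sample = rng.normal(size=(500, 3))           # 500 synthetic records, 3 features
mean = sample.mean(axis=0)
cov = np.cov(sample, rowvar=False)

typical = mahalanobis_sq(mean, mean, cov)    # 0.0: a perfectly average record
outlier = mahalanobis_sq(mean + 4.0, mean, cov)
print(typical < outlier)  # True: rarer feature combinations score higher
```

Because the score uses the full covariance matrix rather than exact attribute matching, correlated feature combinations that are jointly rare stand out even when each attribute value is individually common, which is the failure mode of exact-matching approaches noted in the abstract.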