The article discusses the solution of the spatial traveling salesman problem (the TSP 3D variation) using Ant Colony Optimization (ACO). The traveling salesman problem considers n cities and a matrix of pairwise distances between them. It is necessary to find an order of visiting the cities such that the total distance traveled is minimal, each city is visited exactly once, and the salesman returns to the city from which he began his route. In the TSP 3D variation, each "city" has three coordinates: x, y, z. An analysis of the main solution methods is performed, in particular of the metaheuristic algorithms to which ACO belongs. At each iteration of these methods, a new solution of the problem is constructed based not on one but on several solutions of the population. ACO uses an idea based on collecting statistical information about the best solutions. The program code is implemented in MATLAB. During computational experiments, various network topologies were randomly generated, and the number of iterations at which the optimal cycle was achieved was recorded. The execution time of the code for the TSP 3D task is almost the same as for TSP 2D. The results can be used for spatial traveling salesman problems (the TSP 3D variation) that arise in 3D printing, in planning unmanned aerial vehicle (UAV) trajectories in mountainous terrain or multi-story urban development, and in route planning inside multi-story buildings.
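The abstract does not reproduce the authors' MATLAB code; a minimal sketch of an elitist ACO loop over 3D city coordinates might look as follows (all parameter values and variable names are illustrative assumptions, not the article's implementation):

% Minimal ACO sketch for TSP with 3D cities (illustrative, not the authors' code)
n = 20; rng(1);
C = 100*rand(n,3);                               % random [x y z] city coordinates
dx = C(:,1)-C(:,1)'; dy = C(:,2)-C(:,2)'; dz = C(:,3)-C(:,3)';
D = sqrt(dx.^2 + dy.^2 + dz.^2);                 % pairwise Euclidean distances
tau = ones(n); eta = 1./(D + eye(n));            % pheromone and heuristic visibility
alpha = 1; beta = 2; rho = 0.5;                  % assumed ACO parameters
nAnts = 25; nIter = 200; best = inf;
for it = 1:nIter
    lens = zeros(nAnts,1); tours = zeros(nAnts,n);
    for a = 1:nAnts
        tour = zeros(1,n); tour(1) = randi(n);
        visited = false(1,n); visited(tour(1)) = true;
        for k = 2:n                              % build a cycle city by city
            p = (tau(tour(k-1),:).^alpha).*(eta(tour(k-1),:).^beta);
            p(visited) = 0; p = p/sum(p);
            tour(k) = find(rand <= cumsum(p), 1);  % roulette-wheel selection
            visited(tour(k)) = true;
        end
        idx = sub2ind([n n], tour, [tour(2:end) tour(1)]);
        lens(a) = sum(D(idx)); tours(a,:) = tour;
    end
    [m, ia] = min(lens);
    if m < best, best = m; bestTour = tours(ia,:); end
    tau = (1-rho)*tau;                           % pheromone evaporation
    idx = sub2ind([n n], bestTour, [bestTour(2:end) bestTour(1)]);
    tau(idx) = tau(idx) + 1/best;                % reinforce the best cycle found
end
fprintf('best cycle length: %.2f\n', best);

Note that only the distance matrix depends on the dimensionality, which is consistent with the reported observation that TSP 3D runs in almost the same time as TSP 2D.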
In the process of self-assessment and accreditation examination, assessment is carried out on a scale that covers four levels of compliance with the quality criteria of the educational program and educational activities. Assessing the quality of education is complicated by the fact that the values of the quality criteria depend on a large number of factors, possibly with an unknown nature of influence, and by the fact that pedagogical measurements require working with non-numerical information. To solve these problems, the authors proposed a method for assessing the quality of educational programs and educational activities based on an adaptive neuro-fuzzy inference system (ANFIS), implemented in the Fuzzy Logic Toolbox of MATLAB, and a feedforward artificial neural network with multiple inputs and one output. The criteria for evaluating the educational program were used as the input variables of the ANFIS. The output variable of the system formed a total indicator of the quality of the curriculum and educational activities according to a certain criterion or group of criteria. The article considers a neural network that can forecast expert assessments of the quality of educational programs and educational activities. The artificial neural network was trained on survey data from students and graduates of higher education institutions. Before the accreditation examination, students were offered questionnaires asking them to assess the quality of the educational program and educational activities of the specialty on an assessment scale covering four levels. The student assessments were used to form the vector of the artificial neural network's inputs. It was assumed that if the assessments of students and graduates are sorted in ascending order of rating based on the average grade, an artificial neural network trained on this ordered data set can provide effective forecasts of accreditation examinations. Comparing the output of the neural network with the experts' estimates showed that the neural network does make predictions quite close to reality.
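As a rough illustration of the described pipeline (survey scores in, a total quality indicator out), a hedged MATLAB sketch using the Fuzzy Logic Toolbox could be structured as follows; the synthetic data, the choice of three membership functions, and the number of criteria are assumptions for illustration only:

% Hedged ANFIS sketch (Fuzzy Logic Toolbox; evalfis argument order as in R2018b+).
% X: one row per respondent, columns = criterion scores on the 4-level scale;
% y: total quality indicator (synthesized here, not the article's survey data).
rng(2); nResp = 120;
X = randi(4, nResp, 3);                      % three assumed input criteria
y = mean(X,2) + 0.1*randn(nResp,1);          % synthetic target indicator
data = sortrows([X y], 4);                   % sort by rating, as the article suggests
opt = anfisOptions('InitialFIS',3, 'EpochNumber',50, ...
                   'DisplayANFISInformation',0);
fis  = anfis(data, opt);                     % hybrid LSE/backpropagation training
yhat = evalfis(fis, [3 4 2]);                % forecast for one new questionnaire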
The purpose of the article is to develop, and to verify by mathematical modeling, a software method of deploying a fault-tolerant computing cluster with a virtual machine, consisting of two physical servers (main and backup) on which a distributed data storage system with synchronous data replication from the source server to the backup server is deployed. For this purpose, the task is to conduct a computational experiment, using the mathematical application Mathcad, on a model of a fault-tolerant cluster that neglects the costs of virtual machine migration during recovery. Combining computing resources into clusters is a way to ensure high reliability, fault tolerance, and continuity of the computing process of computer systems. This is achieved through virtualization, which enables the movement of virtual resources, services, or applications between physical servers while maintaining the continuity of computing processes. The focus of this study is a failover cluster composed of two physical servers (primary and backup) connected through a switch, each with a local hard disk. A distributed storage system with synchronous data replication from the source server to the backup server is deployed on the local disks of the servers, and a virtual machine runs on the cluster. The mathematical tools of the failover cluster model are Markov processes, event flows, and Kolmogorov systems of differential equations. To ensure the continuity of the computing process in case of a failure of the main server, a shadow copy of the virtual machine is launched on the backup server. The reliability of the failover cluster is measured by the non-stationary availability coefficient. A Markov model is proposed to assess the reliability of the failover cluster, taking into account the costs of migrating virtual machines and the mechanisms that ensure the continuity of the computing process in the cluster in case of a failure of one physical server. The memory migration process maintains two copies of the virtual machine on different physical servers, so that in the event of a failure of one server the work continues on the other. A simplified model of the failover cluster neglects the cost of migrating virtual machines and provides an upper estimate of reliability. The study shows that the reliability of a failover cluster, as measured by the non-stationary availability coefficient, is significantly impacted by the virtual machine migration process, so accounting for migration matters in reliability calculations. The findings can inform decisions about the technology chosen to ensure the fault tolerance and continuity of the computing process of computer systems with cluster architecture. The calculation was performed under the following failure rates of the server, disk, and switch: λ0 = 1.115×10⁻⁵ 1/h, λ1 = 3.425×10⁻⁶ 1/h, λ2 = 2.3×10⁻⁶ 1/h, with recovery rates, respectively: μ0 = 0.33 1/h, μ1 = 0.17 1/h, μ2 = 0.33 1/h. The intensities of synchronization of the distributed storage system: μ3 = 1 1/h, μ4 = 2 1/h. The difference of the non-stationary cluster availability coefficients is d = K2(t) − K1(t) = 2.7×10⁻¹⁰.
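The full model is built in Mathcad; the same idea can be sketched in MATLAB for a deliberately simplified three-state cluster (both servers up / one failed, VM on backup / cluster down), using only the server rates quoted in the abstract. This toy model and its state space are assumptions, not the paper's Markov model:

% Non-stationary availability K(t) from the Kolmogorov forward equations
% for a simplified 3-state failover-cluster model (illustrative only).
lam0 = 1.115e-5;                 % server failure rate, 1/h (from the abstract)
mu0  = 0.33;                     % server recovery rate, 1/h (from the abstract)
Q = [-2*lam0        2*lam0       0;
      mu0     -(mu0+lam0)    lam0;
      0              mu0     -mu0];          % infinitesimal generator, rows sum to 0
odefun = @(t,p) Q' * p;          % dp/dt = Q' p, p = state-probability vector
[t, P] = ode45(odefun, [0 2000], [1 0 0]');  % start with both servers up
K = P(:,1) + P(:,2);             % availability = probability of an "up" state
plot(t, K); xlabel('t, h'); ylabel('K(t)');

Here K(t) plays the role of the non-stationary availability coefficient for this toy model; the paper's K1(t) and K2(t) come from its richer state space with disk, switch, synchronization, and migration transitions.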
The development and efficient application of Fog Computing technologies give rise to complex tasks associated with the management and processing of large data sets, including the creation of low-level networks that guarantee the functioning of end devices within the Internet of Things (IoT) concept. This article presents the use of graph theory techniques to address these issues. The proposed graph model enables the determination of fundamental characteristics of systems, networks, and network devices in Fog Computing, including optimal features and methods to maintain them in a functioning condition. The work demonstrates how to create and personalize graph displays by adding labels or highlighting to the nodes and edges of pseudo-random task graphs. The task graphs, described and visualized in MATLAB code, represent the computational work to be performed and the data transfer between tasks, expressed in megacycles per second and in kilobits/kilobytes of data, respectively. The task graphs can be applied both in single-user systems, where one mobile device accesses a remote server, and in multi-user systems, where many users access a remote server through a wireless channel. This set of task graphs can be used by researchers to evaluate cloud/fog/edge computing systems and computational offloading algorithms.
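A hedged MATLAB sketch of such a labelled pseudo-random task graph follows; the graph size, weight ranges, and label formats are illustrative assumptions rather than the article's generator:

% Pseudo-random task graph with labelled nodes and edges (illustrative sketch)
rng(3); n = 8;
A = triu(rand(n) < 0.3, 1);                     % random DAG: edges only i -> j, i < j
A(sub2ind([n n], 1:n-1, 2:n)) = true;           % chain edges keep the graph connected
[s, t] = find(A);
G = digraph(s, t);
work = randi([100 1000], n, 1);                 % per-task load, assumed megacycles
data = randi([10 500], numedges(G), 1);         % per-edge transfer, assumed kilobits
p = plot(G, 'Layout', 'layered');
labelnode(p, 1:n, compose('T%d: %d Mc', [(1:n)' work]));
labeledge(p, 1:numedges(G), compose('%d kb', data));
highlight(p, shortestpath(G, 1, n), 'EdgeColor', 'r');   % emphasize one task chain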
The rapid development of information technology, robotics, nanotechnology, and biotechnology requires modern education to train highly qualified specialists who can support it, preparing pupils and students for creative work. The need to reform education in response to modern challenges is an urgent problem today. It is predicted that the most in-demand professions in the near future will be programmers, engineers, roboticists, nanotechnologists, biotechnologists, IT specialists, etc. STEM education can combine these areas into a single complex that can be implemented in different age groups. One example of the use of STEM technologies is the development and implementation of scientific and technical projects using the Arduino hardware and software complex. Using this approach, a method for calibrating an NTC thermistor in its operating temperature range is proposed, and a working model of an electronic thermometer built around an NTC thermistor and an Arduino microcontroller is presented.
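The abstract does not give the calibration procedure itself; one standard approach consistent with it is a B-parameter fit, R(T) = R0·exp(B·(1/T − 1/T0)), sketched below in MATLAB with illustrative measurement pairs (not the article's data):

% NTC thermistor calibration via the B-parameter model (illustrative sketch)
Tc = [0 10 20 25 30 40 50];                        % reference temperatures, deg C
R  = [32.6 19.9 12.5 10.0 8.05 5.32 3.60] * 1e3;   % measured resistances, ohm
T  = Tc + 273.15;                                  % kelvin
c  = polyfit(1./T, log(R), 1);                     % ln R is linear in 1/T, slope = B
B  = c(1);                                         % B coefficient, K
T0 = 298.15; R0 = 10e3;                            % assumed nominal point: 10 kOhm at 25 C
Tmeas = @(Rm) 1 ./ (1/T0 + log(Rm/R0)/B) - 273.15; % thermometer conversion formula
fprintf('B = %.0f K, T(8.05 kOhm) = %.1f C\n', B, Tmeas(8.05e3));

The same conversion formula, with the fitted B, is what an Arduino-based thermometer would apply to the resistance inferred from its analog reading.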