We describe a new out-of-core sparse Cholesky factorization method. The new method uses the elimination tree to partition the matrix, an advanced subtree-scheduling algorithm, and both right-looking and left-looking updates. The implementation of the new method is efficient and robust. On a 2 GHz personal computer with 768 MB of main memory, the code can easily factor matrices with factors of up to 48 GB, usually at rates above 1 Gflop/s. For example, the code can factor AUDIKW, currently the largest matrix in any matrix collection (factor size over 10 GB), in a little over an hour, and can factor a matrix whose graph is a 140-by-140-by-140 mesh in about 12 hours (factor size around 27 GB).
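To make the update directions concrete: in a left-looking factorization, each column panel receives all updates from previously factored panels just before it is itself factored, whereas a right-looking factorization pushes updates outward immediately after a panel is factored. The following minimal dense, in-core NumPy sketch (illustrative only; the method above is sparse, supernodal, and out-of-core, and the function name is ours) shows the left-looking pattern:

```python
import numpy as np

def left_looking_cholesky(A, block=2):
    """Left-looking blocked Cholesky: each panel is updated by all
    previously factored panels immediately before it is factored."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(0, n, block):
        kb = min(block, n - k)
        # Left-looking step: pull in contributions of columns 0..k-1.
        if k > 0:
            L[k:, k:k+kb] -= L[k:, :k] @ L[k:k+kb, :k].T
        # Factor the diagonal block (NumPy uses only the lower triangle).
        L[k:k+kb, k:k+kb] = np.linalg.cholesky(L[k:k+kb, k:k+kb])
        # Triangular solve for the subdiagonal panel: L21 = A21 * L11^{-T}.
        if k + kb < n:
            L[k+kb:, k:k+kb] = np.linalg.solve(
                L[k:k+kb, k:k+kb], L[k+kb:, k:k+kb].T).T
    return L

# Quick check on a random symmetric positive definite matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
L = left_looking_cholesky(A, block=2)
print(np.allclose(L @ L.T, A))  # True
```

In the out-of-core setting, the left-looking direction is attractive because only the panel being factored must be resident in main memory while previously written panels stream in; the sketch above ignores that I/O layer entirely.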
An important feature of current artificial intelligence systems is their anthropomorphism. The tool of inductive empirical systems is the neural network, which imitates the human brain and operates in "black box" mode. Deductive analytical systems, by contrast, represent knowledge with transparent formalized models and algorithms, such as algorithms of logical inference. They solve many intellectual problems whose solution does not require "deep" anthropomorphic AI. Solving such problems, in turn, leads to the formation of alternative artificial intelligence systems. We propose building artificial intelligence systems on the following principles: exclusion of black-box technologies; dominance of data-conversion systems; and the use of direct mathematical modeling. The core of the system is a simulator: a module that models a given object. The ontological module selectively extracts structured sets of functional links from the simulator and fills them with the corresponding data sets. The final (custom) representation of knowledge is produced through special interfaces. The concept of simulation-ontological artificial intelligence, based on the principles outlined above, is implemented as parametric analysis in the configuration space and forms the methodological basis of an AI platform for e-learning.
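As an illustration only (the abstract gives no code, so every name here is hypothetical, and a trivial RC low-pass filter stands in for a real studied object), the simulator/ontological-module split might look like this: a transparent mathematical model is swept through a configuration space, and the ontological module extracts an explicit set of functional links from it:

```python
import math

class Simulator:
    """Direct mathematical model of the studied object (no black box);
    a toy RC low-pass filter stands in for a real object."""
    def __init__(self, r_ohm, c_farad):
        self.r, self.c = r_ohm, c_farad

    def cutoff_hz(self):
        return 1.0 / (2 * math.pi * self.r * self.c)

class OntologyModule:
    """Selectively extracts a structured functional link from the
    simulator via a parametric sweep through configuration space."""
    def extract_link(self, param_values, make_sim, observe):
        # Each entry is an explicit (parameter -> observable) pair.
        return [(p, observe(make_sim(p))) for p in param_values]

# Custom (user-facing) representation through a simple interface: a table.
link = OntologyModule().extract_link(
    param_values=[1e3, 2e3, 5e3],
    make_sim=lambda r: Simulator(r_ohm=r, c_farad=1e-6),
    observe=lambda s: s.cutoff_hz(),
)
for r, f in link:
    print(f"R = {r:.0f} ohm -> cutoff = {f:.1f} Hz")
```

The point of the split is that the functional links are computed transparently from the model rather than learned opaquely, so every entry in the extracted set can be traced back to the simulator that produced it.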
We consider the influence of automated content creation on development trends in e-education based on artificial intelligence. Widely used content generators do not actually create new content; they modify finished content accumulated in databases. The proposed concept of generating primary content is instead based on simulation models of the studied objects. The methodology for initial content generation demonstrates the possibility of incorporating AI-based content management systems into e-education.
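A hedged sketch of the distinction, reusing the toy RC model from the previous sketch (hypothetical names throughout): primary content is produced by running the simulation model, so each exercise and its answer are computed fresh rather than retrieved and reworded from a database:

```python
import math
import random

def generate_exercise(rng):
    """Produce a fresh exercise by running the model, not by retrieval."""
    r = rng.choice([1e3, 2e3, 4.7e3, 10e3])   # resistance, ohms
    c = rng.choice([100e-9, 470e-9, 1e-6])    # capacitance, farads
    answer = 1.0 / (2 * math.pi * r * c)      # model-computed answer
    question = (f"An RC low-pass filter has R = {r:.0f} ohm and "
                f"C = {c * 1e9:.0f} nF. What is its cutoff frequency in Hz?")
    return question, round(answer, 1)

q, a = generate_exercise(random.Random(42))
print(q)
print(f"Expected answer: {a} Hz")
```

Because the answer is derived from the model rather than stored, the generated content is correct by construction for any parameter combination the simulator admits.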
The deterministic AI system under review is an alternative to neural-network-based machine learning. In its application fields (science, technology, engineering, and business), rule-based AI systems offer benefits such as accuracy and correctness of design and personalization of both the process and its results. An algorithmic AI suite is based on design and logical imitation models alone, without creating or using Big Data and knowledge bases. The high configuration complexity and design resource demands inherent in deterministic systems are offset by a special methodology. A hierarchical modeling approach yields a quasi-dynamic network effect, symmetric to the analogous effect in neural networks. System performance is improved by deterministic reference training, which modifies imitation models through online interaction with users. Such training, an alternative to neural machine learning, is implemented with partially empirical algorithms based on experimental design methods, together with system-user dialogues, to build libraries (portfolios) of reference models. Resource requirements can be reduced by modified optimization techniques and by controlling the computational complexity of the algorithms. Since the proposed system in the considered layout has no analogues, and the relevant research and practical knowledge are extremely limited, special methods are required to implement the project. A gradual, phased implementation process forms the sets of algorithms step by step, with verification tests at each stage; each test is iterative and includes test, tweak, and modification cycles. Final testing should yield an AI algorithm package, including the related methodological and working documents.
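As a toy illustration of the test, tweak, and modification cycle (a minimal sketch; the one-parameter imitation model, the calibrate routine, and the portfolio structure are all our hypothetical stand-ins, not the paper's), deterministic reference training might calibrate a model against user-reported observations and store the accepted result in a portfolio:

```python
def imitation_model(x, gain):
    """One-parameter imitation model standing in for a design model."""
    return gain * x

def calibrate(samples, gain=1.0, step=0.5, tol=0.02):
    """Deterministic test/tweak/modification loop: no training data set,
    just a coordinate search driven by the worst-case test error."""
    def worst(g):
        return max(abs(imitation_model(x, g) - y) for x, y in samples)
    while step > 1e-6 and worst(gain) > tol:
        if worst(gain + step) < worst(gain):
            gain += step              # tweak: move the parameter up
        elif worst(gain - step) < worst(gain):
            gain -= step              # tweak: move the parameter down
        else:
            step *= 0.5               # modification cycle: refine the step
    return gain

# Reference model library (portfolio) filled through a system-user dialogue.
portfolio = {}
observed = [(1.0, 2.49), (2.0, 5.01), (3.0, 7.52)]  # user-reported pairs
portfolio["linear_gain"] = calibrate(observed)
print(f"stored reference gain: {portfolio['linear_gain']:.3f}")  # ~2.500
```

Unlike gradient-based neural training, every step of this loop is reproducible and inspectable, which is the property the deterministic approach trades raw flexibility for.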