A neural-net-based implementation of propositional [0, 1]-valued multi-adjoint logic programming is presented, which extends earlier work on representing logic programs in neural networks carried out in [A.S. d'Avila Garcez et al., Neural-Symbolic Learning Systems: Foundations and Applications, Springer, 2002; S. Hölldobler et al., Appl. Intelligence 11 (1) (1999) 45-58]. Proofs that the semantics is preserved are given, which places the extension on a well-founded footing. The implementation requires some preprocessing of the initial program to transform it into a homogeneous program; transformation rules then carry programs into neural networks, where truth values of rules relate to the outputs of neurons, truth values of facts represent the inputs, and the network functions are determined by a set of general operators; the net outputs the truth values of the propositional variables under the program's minimal model.
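The abstract's idea of a network settling on the minimal model can be illustrated by iterating an immediate-consequence-style operator over a small [0, 1]-valued program until a fixpoint is reached. The rule encoding, the choice of conjunctors, and the use of min as the aggregation with the rule's confidence are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: each rule is (head, confidence, conjunctor, body atoms);
# facts have empty bodies. Neurons would compute the same update in the
# network implementation described in the abstract.

def godel_and(*xs):
    """Goedel t-norm: minimum of the arguments."""
    return min(xs)

def product_and(*xs):
    """Product t-norm: product of the arguments."""
    r = 1.0
    for x in xs:
        r *= x
    return r

# Illustrative homogeneous program (values chosen arbitrarily).
rules = [
    ("p", 0.9, godel_and,   ["q", "r"]),
    ("q", 0.8, product_and, ["r"]),
    ("r", 0.7, godel_and,   []),       # a fact with truth value 0.7
]

def minimal_model(rules, iters=100):
    """Iterate the consequence operator from the everywhere-0 valuation."""
    val = {head: 0.0 for head, _, _, _ in rules}
    for _ in range(iters):
        new = dict(val)
        for head, conf, conj, body in rules:
            body_val = conj(*(val[b] for b in body)) if body else 1.0
            # Goedel-style "weighted modus ponens" as the propagation step.
            new[head] = max(new[head], min(conf, body_val))
        if new == val:      # fixpoint reached: this is the minimal model
            break
        val = new
    return val
```

Running `minimal_model(rules)` propagates the fact `r = 0.7` upward through `q` and then `p`, stabilising after a few iterations, which mirrors how the net's outputs converge to the minimal model.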
Many methods have been proposed for the kinematic chain isomorphism problem; however, effective tools are still needed for building intelligent systems for product design and manufacturing. In this paper, we design a novel multivalued neural network that enables a simplified formulation of the graph isomorphism problem. In order to improve the performance of the model, an additional constraint on the degree of paired vertices is imposed. The resulting discrete neural algorithm converges rapidly under any set of initial conditions and does not need parameter tuning. Simulation results show that the proposed multivalued neural network performs better than other recently presented approaches.
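The degree constraint on paired vertices mentioned above can be sketched as a pruning step: before any isomorphism search, vertex i of one graph may only be paired with vertex j of the other if their degrees agree (here strengthened, purely for illustration, to also compare the sorted degrees of their neighbours). The adjacency-list encoding and function names are assumptions for this sketch, not the paper's formulation.

```python
# Hedged sketch: degree-compatibility pruning of candidate vertex pairs
# for graph isomorphism. Graphs are adjacency lists over vertices 0..n-1.

def degree(adj, v):
    return len(adj[v])

def neighbour_degrees(adj, v):
    """Sorted multiset of the degrees of v's neighbours."""
    return sorted(degree(adj, u) for u in adj[v])

def candidate_pairs(adj1, adj2):
    """Pairs (i, j) surviving the degree and neighbour-degree checks."""
    pairs = []
    for i in range(len(adj1)):
        for j in range(len(adj2)):
            if (degree(adj1, i) == degree(adj2, j)
                    and neighbour_degrees(adj1, i) == neighbour_degrees(adj2, j)):
                pairs.append((i, j))
    return pairs

# Two isomorphic 3-vertex paths with different labellings:
# g1 is 0-1-2, g2 is 1-0-2.
g1 = {0: [1], 1: [0, 2], 2: [1]}
g2 = {0: [1, 2], 1: [0], 2: [0]}
```

On this example the pruning leaves only five of the nine possible pairs, since the degree-2 centre of each path can only map to the other path's centre; shrinking the candidate set this way is what lets a discrete neural search converge quickly.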