In this paper, we apply incremental answer set solving to product configuration. Incremental answer set solving is a stepwise approach to Answer Set Programming (ASP). We demonstrate how to use this technique to solve product configuration problems incrementally. Every step of the incremental solving process corresponds to a predefined configuration action. Using complex domain-specific configuration actions makes it possible to tightly control both the level of non-determinism and the performance of the solving process. We show applications of this technique for reasoning about product configuration, such as simulating the behavior of a deterministic configuration algorithm and describing user actions.
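To make the stepwise idea concrete, here is a minimal sketch using the clingo Python API for multi-shot grounding and solving. The toy encoding below is our own illustration and not the paper's encoding: each step grounds one more program part standing for a single configuration action, and an external `query` atom marks the current horizon.

```python
import clingo

# Toy configuration domain (hypothetical, not the paper's encoding).
BASE = """
component(cpu; ram; gpu).
requires(gpu, cpu).
goal(gpu).
"""

# Step part: each incremental step t performs at most one configuration action.
STEP = """
{ add(C, t) : component(C) } 1.
in(C, t) :- add(C, t).
in(C, t) :- in(C, t-1).
"""

# Check part: constraints are only active while query(t) marks horizon t.
CHECK = """
#external query(t).
:- query(t), goal(G), not in(G, t).
:- query(t), requires(C, D), in(C, t), not in(D, t).
"""

ctl = clingo.Control()
ctl.add("base", [], BASE)
ctl.add("step", ["t"], STEP)
ctl.add("check", ["t"], CHECK)
ctl.ground([("base", [])])

step, done = 0, False
while not done:
    if step > 0:  # retract the query atom of the previous horizon
        ctl.release_external(clingo.Function("query", [clingo.Number(step - 1)]))
    ctl.ground([("step", [clingo.Number(step)]),
                ("check", [clingo.Number(step)])])
    ctl.assign_external(clingo.Function("query", [clingo.Number(step)]), True)
    done = ctl.solve(
        on_model=lambda m: print("solution after", step + 1, "actions:", m)
    ).satisfiable
    step += 1
```

Each pass through the loop extends the program by one step and re-solves; in the paper's setting, the step part would encode a domain-specific configuration action rather than this toy component choice.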
Many complex activities in production cycles, such as quality control or fault analysis, require highly experienced specialists to perform various operations on (semi-)finished products using different tools. In practical scenarios, the selection of the next operation is complicated, since each expert has only a local view of the total set of operations to be performed. As a result, decisions made by the specialists are suboptimal and may incur significant costs. In this paper, we consider a Multi-resource Partial-ordering Flexible Job-shop Scheduling (MPF-JSS) problem, where partially ordered sequences of operations must be scheduled on multiple required resources, such as tools and specialists. The resources are flexible and can perform one or more operations depending on their properties. The problem is modeled using Answer Set Programming (ASP), in which time assignments are handled efficiently using Difference Logic. Moreover, we suggest two multi-shot solving strategies aimed at identifying time bounds that allow a solution of the schedule optimization problem to be found. Experiments conducted on a set of instances extracted from a medium-sized semiconductor fault analysis lab indicate that our approach can find schedules for 87 out of 91 considered real-world instances.
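The abstract mentions multi-shot strategies for identifying suitable time bounds. The sketch below shows one generic way such a bound search can be organized, assuming a hypothetical helper `is_schedulable(bound)` that stands for a single solver call on the scheduling encoding with the given upper time bound; the paper's actual strategies may differ.

```python
from typing import Callable

def find_time_bound(is_schedulable: Callable[[int], bool], start: int = 1) -> int:
    """Bound-search sketch (not the paper's exact strategy):
    grow the candidate time bound exponentially until a schedule exists,
    then binary-search the smallest feasible bound in the bracketed range.
    Assumes monotonicity: if a bound admits a schedule, so does any larger one."""
    hi = start
    while not is_schedulable(hi):      # exponential probing phase
        hi *= 2
    lo = hi // 2 + 1 if hi > start else start
    while lo < hi:                     # binary refinement phase
        mid = (lo + hi) // 2
        if is_schedulable(mid):
            hi = mid
        else:
            lo = mid + 1
    return hi
```

Each call to `is_schedulable` corresponds to one solver shot; reusing a single solver object across shots, as in multi-shot ASP solving, avoids re-grounding the static part of the encoding.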
Recent advances in neural-symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural-symbolic framework based on stochastic definite clause grammars, a kind of stochastic logic program. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than in neural probabilistic logic programming. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural-symbolic learning tasks.
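For intuition on why distributions over derivations can be easier to handle than distributions over possible worlds, the following toy sketch (our own illustration, not DeepStochLog's syntax or machinery) computes the probability that a small stochastic grammar derives a given string by summing, over all parse trees, the product of rule probabilities; in DeepStochLog, some of these rule probabilities would be supplied by neural predicates instead of fixed numbers.

```python
from functools import lru_cache

# Toy stochastic grammar: each nonterminal maps to (probability, right-hand side).
RULES = {
    "S": [(0.5, ("S", "S")),   # S -> S S
          (0.5, ("a",))],      # S -> a
}

@lru_cache(maxsize=None)
def string_prob(symbol: str, s: str) -> float:
    """Inside probability: P(symbol derives exactly the string s)."""
    if symbol not in RULES:                 # terminal symbol
        return 1.0 if s == symbol else 0.0
    total = 0.0
    for p, rhs in RULES[symbol]:
        if len(rhs) == 1:                   # unary rule
            total += p * string_prob(rhs[0], s)
        else:                               # binary rule: sum over split points
            left, right = rhs
            for i in range(1, len(s)):
                total += p * string_prob(left, s[:i]) * string_prob(right, s[i:])
    return total

print(string_prob("S", "aaa"))  # sums the two parse trees of "aaa": 0.0625
```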