Our proposal relates a component's main functions to each process characteristic (mass, minimum section thickness, draft angle, surface finish, dimensional tolerances, minimum lot size, and lead time) through a correlation matrix, yielding importance indices for these characteristics. These importance indices are then related to the capability of each casting process considered, producing a process rating. A checklist based on design-for-manufacturing (DFM) principles is also provided to guide the designer when a need for improvement is observed or when no process is suited to producing the desired part. For validation, two ferrous and two nonferrous cast parts were analyzed. The results were compared with other selectors described in the literature and with the processes actually used in industry, showing good agreement with the other methods, especially in the quantitative ranking produced by the proposed selector.
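The scoring scheme described above can be sketched in a few lines. This is a minimal illustration under assumed data: the function names, the QFD-style 0/1/3/9 correlation scale, and the capability values below are all hypothetical placeholders, not the authors' actual matrix.

```python
# Illustrative sketch of a correlation-matrix process selector.
# All weights, correlations, and capabilities below are assumed values.

FUNCTION_WEIGHTS = {"structural": 0.5, "sealing": 0.3, "appearance": 0.2}

# correlation[characteristic][function], on an assumed QFD-style 0/1/3/9 scale.
CORRELATION = {
    "min_section_thickness": {"structural": 9, "sealing": 3, "appearance": 1},
    "surface_finish":        {"structural": 1, "sealing": 3, "appearance": 9},
    "dimensional_tolerance": {"structural": 3, "sealing": 9, "appearance": 3},
}

def importance_indices(weights, correlation):
    """Importance index of each characteristic, normalized to sum to 1."""
    raw = {c: sum(weights[f] * r for f, r in row.items())
           for c, row in correlation.items()}
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}

def process_rating(capability, importance):
    """Rate one process: its capability per characteristic (0-1),
    weighted by the importance indices."""
    return sum(importance[c] * capability.get(c, 0.0) for c in importance)

imp = importance_indices(FUNCTION_WEIGHTS, CORRELATION)
sand = {"min_section_thickness": 0.4, "surface_finish": 0.3, "dimensional_tolerance": 0.4}
die  = {"min_section_thickness": 0.9, "surface_finish": 0.9, "dimensional_tolerance": 0.8}
print(process_rating(die, imp), process_rating(sand, imp))
```

Under these assumed numbers the die-casting row rates higher, and a real selector would compare such ratings across all candidate processes before consulting the DFM checklist.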
We give conditions under which convolutional neural networks (CNNs) define valid sum-product networks (SPNs). One subclass, called convolutional SPNs (CSPNs), can be implemented using tensors but can suffer from being too shallow. Fortunately, tensors can be augmented while maintaining valid SPNs. This yields a larger subclass of CNNs, which we call deep convolutional SPNs (DCSPNs), in which the convolutional and sum-pooling layers form rich directed acyclic graph structures. One salient feature of DCSPNs is that they are a rigorous probabilistic model. As such, they can exploit multiple kinds of probabilistic reasoning, including marginal inference and most probable explanation (MPE) inference. This enables an alternative method for learning DCSPNs using vectorized differentiable MPE, which plays a role similar to that of the generator in generative adversarial networks (GANs). Image sampling is yet another application demonstrating the robustness of DCSPNs. Our preliminary results on image sampling are encouraging, since the DCSPN-sampled images exhibit variability. Experiments on image completion show that DCSPNs significantly outperform competing methods, achieving several state-of-the-art mean squared error (MSE) scores in both left-completion and bottom-completion on benchmark datasets.
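The two inference modes the abstract names can be illustrated with a toy SPN. This is a minimal sketch, not the paper's DCSPN architecture: a mixture of two fully factorized products over two binary variables, where marginal inference amounts to setting a leaf's value to 1 during the bottom-up pass.

```python
# Toy sum-product network over binary X1, X2 (illustrative, not a DCSPN).

def leaf(p_true, x):
    """Bernoulli leaf distribution; x=None marginalizes the variable out,
    which for a normalized leaf means returning 1."""
    if x is None:
        return 1.0
    return p_true if x == 1 else 1.0 - p_true

def spn(x1, x2):
    """A sum node (weights summing to 1) over two product nodes,
    each a fully factorized distribution -- a minimal valid SPN."""
    prod1 = leaf(0.8, x1) * leaf(0.3, x2)
    prod2 = leaf(0.2, x1) * leaf(0.9, x2)
    return 0.6 * prod1 + 0.4 * prod2

joint = spn(1, 0)        # exact joint probability P(X1=1, X2=0)
marginal = spn(1, None)  # marginal inference: P(X1=1)
print(joint, marginal)
```

Validity (completeness and decomposability of the sum and product nodes) is what makes this single bottom-up pass compute exact probabilities; the paper's conditions identify when CNN-style layers satisfy the same properties at scale.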
Testing independencies is a fundamental task in reasoning with Bayesian networks (BNs). In practice, d‐separation is often used for this task, since it has linear‐time complexity. However, many have had difficulties understanding d‐separation in BNs. An equivalent method that is easier to understand, called m‐separation, transforms the problem from directed separation in BNs into classical separation in undirected graphs. Two main steps of this transformation are pruning the BN and adding undirected edges.
In this paper, we propose u‐separation as an even simpler method for testing independencies in a BN. Our approach also converts the problem into classical separation in an undirected graph. However, our method is based upon the novel concepts of inaugural variables and rationalization. The primary advantage of u‐separation over m‐separation is that m‐separation can prune unnecessarily and add superfluous edges. Our experimental results show that u‐separation performs 73% fewer modifications on average than m‐separation.
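The transformation that both abstracts build on can be sketched concretely. The following is an illustrative implementation of the classical m-separation route (not the paper's u-separation): restrict the BN to the ancestral subgraph of the query variables, moralize it (marry co-parents, drop edge directions), and test classical separation in the resulting undirected graph.

```python
# Sketch of the moralization-based independence test (m-separation style).
from collections import deque

def ancestors(parents, nodes):
    """All ancestors of `nodes`, including the nodes themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        v = stack.pop()
        for p in parents.get(v, ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def moral_separation(parents, xs, ys, zs):
    """True iff xs is independent of ys given zs, by classical
    separation in the moralized ancestral graph."""
    keep = ancestors(parents, set(xs) | set(ys) | set(zs))
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:                      # directed edges become undirected
            adj[v].add(p); adj[p].add(v)
        for i in range(len(ps)):          # "marry" co-parents of v
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # BFS from xs, blocked at the conditioning set zs.
    targets, blocked = set(ys), set(zs)
    seen = set(xs)
    frontier = deque(set(xs) - blocked)
    while frontier:
        v = frontier.popleft()
        if v in targets:
            return False                  # an active path reaches ys
        for w in adj[v]:
            if w not in seen and w not in blocked:
                seen.add(w)
                frontier.append(w)
    return True

# Classic collider A -> C <- B: A and B are marginally independent,
# but become dependent once C is observed.
parents = {"C": ["A", "B"]}
print(moral_separation(parents, ["A"], ["B"], []))     # True
print(moral_separation(parents, ["A"], ["B"], ["C"]))  # False
```

The pruning and edge additions happen in the `ancestors` and marriage steps; u-separation's claimed gain is performing fewer of exactly these modifications.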