This paper presents our research on an ontology-based framework for representing the morphological characteristics of assembly joints. Joints within the physical structure of an assembly are inevitable because of limitations on component geometries and the engineering properties required of them. Consequently, a framework is needed that can capture and propagate assembly design and joint information in a robust assembly model throughout the entire product development process. The framework and model are based on an understanding of the morphological characteristics of an assembly and of the physical effects acting on it. These characteristics are consequences of the principal physical processes and of the design intentions, so they must be represented carefully, with due consideration of the geometry and topology of the assembly joints. In this research, assembly joint topology is defined using mereotopology, a region-based theory of parthood and connection. This formal ontology can differentiate assembly and joining relations that are often ambiguous. The mereotopological definitions of assembly joints are then implemented as Semantic Web Rule Language (SWRL) rules and Web Ontology Language (OWL) triples, which gives the definitions universality. Two geometrically and topologically similar joint pairs are presented to show how assembly joints can be defined in mereotopology and transformed into SWRL rules. Web3D is also employed to support network-enabled sharing of assembly geometry. Finally, the proposed modeling framework is demonstrated on a real fixture assembly; this case study shows the usability of the framework for network-based design collaboration.
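As a rough illustration of how a mereotopological joint condition can be expressed as OWL classes and a SWRL rule, the sketch below uses the owlready2 library. The ontology IRI, class names (Component, Joint, ContactJoint), and properties (connects, externallyConnectedTo) are hypothetical stand-ins, not the vocabulary used in the paper.

```python
# Minimal sketch (not the paper's ontology): classify a joint as a contact
# joint when the two components it connects are externally connected in the
# mereotopological sense (shared boundary, no shared interior).
# All names below are hypothetical placeholders.
from owlready2 import get_ontology, Thing, ObjectProperty, Imp

onto = get_ontology("http://example.org/assembly.owl")

with onto:
    class Component(Thing): pass
    class Joint(Thing): pass
    class ContactJoint(Joint): pass

    class connects(ObjectProperty):               # joint -> component it joins
        domain = [Joint]
        range  = [Component]

    class externallyConnectedTo(ObjectProperty):  # mereotopological EC relation
        domain = [Component]
        range  = [Component]

    # SWRL rule: EC(a, b) together with connects(j, a) and connects(j, b)
    # implies that j is a ContactJoint.
    rule = Imp()
    rule.set_as_rule(
        "Joint(?j), connects(?j, ?a), connects(?j, ?b), "
        "externallyConnectedTo(?a, ?b) -> ContactJoint(?j)"
    )

onto.save(file="assembly.owl", format="rdfxml")
```

Running a description-logic reasoner over such an ontology (e.g., owlready2's sync_reasoner_pellet()) would then infer the joint classification from the asserted mereotopological facts.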
In this paper, we propose multi-layer structural wound synthesis on a 3D face. Our understanding of the facial skin is grounded in the structure of its tissue, which is composed of the epidermis, dermis, and subcutis. The approach first defines a facial tissue depth map to measure tissue depth at various locations on the face. Each skin layer in a wound image is then identified by hue-based segmentation. In addition, we employ disparity parameters to realise 3D depth so that the wound model becomes volumetric. Finally, we validate our methods using 3D wound simulation experiments.
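As a minimal sketch of what hue-based segmentation of a wound photograph into layer masks might look like (the hue bands below are illustrative placeholders, not the thresholds used in the paper), one could proceed along these lines with OpenCV:

```python
# Sketch of hue-based segmentation of a wound image into per-layer masks.
# The hue/saturation/value bands are illustrative only; real thresholds
# would be tuned to the imagery at hand.
import cv2
import numpy as np

img = cv2.imread("wound.png")                  # BGR image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)     # OpenCV hue range is [0, 179]

# Hypothetical hue bands for the exposed skin layers.
layer_bands = {
    "epidermis": ((0, 40, 60),   (15, 255, 255)),
    "dermis":    ((160, 40, 60), (179, 255, 255)),
    "subcutis":  ((16, 40, 60),  (30, 255, 255)),
}

masks = {}
for name, (lo, hi) in layer_bands.items():
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    # Remove small speckles before using the mask for wound synthesis.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    masks[name] = mask
```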
We propose a wound recovery synthesis model that illustrates the appearance of a wound healing on a 3-dimensional (3D) face. The H3 model is used to determine the size of the recovering wound. Furthermore, we present a subsurface scattering model that takes the multilayered skin structure of the wound into account to represent its color transformation. We also propose a novel real-time rendering method based on an analysis of the characteristics of translucent materials. Finally, we validate the proposed methods with 3D wound-simulation experiments using shading models.
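For orientation only, the sketch below shows one common real-time approximation of subsurface scattering in skin, a sum-of-Gaussians diffusion profile; it is not the paper's scattering model, and the variances and weights are commonly cited illustrative fits rather than values from this work.

```python
# Generic sum-of-Gaussians diffusion profile, a common real-time
# approximation of subsurface scattering in skin.  NOT the paper's model;
# the variances (mm^2) and RGB weights below are illustrative fits only.
import numpy as np

GAUSSIANS = [
    (0.0064, np.array([0.233, 0.455, 0.649])),
    (0.0484, np.array([0.100, 0.336, 0.344])),
    (0.1870, np.array([0.118, 0.198, 0.000])),
    (0.5670, np.array([0.113, 0.007, 0.007])),
    (1.9900, np.array([0.358, 0.004, 0.000])),
    (7.4100, np.array([0.078, 0.000, 0.000])),
]

def diffusion_profile(r_mm: float) -> np.ndarray:
    """RGB diffuse response at distance r (mm) from the point of incidence."""
    rgb = np.zeros(3)
    for var, weight in GAUSSIANS:
        rgb += weight * np.exp(-r_mm * r_mm / (2.0 * var)) / (2.0 * np.pi * var)
    return rgb
```

In a real-time shader, such a profile is typically applied by blurring the irradiance in texture space with the corresponding Gaussian kernels and summing the weighted results.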