Semantic-based 3D model retrieval systems have become necessary as 3D model databases continue to grow. In this paper, we propose a new method for the mapping problem between 3D model data and semantic data involved in semantic-based retrieval of 3D models given as polygonal meshes. First, we focus on extracting invariant descriptors from the 3D models and analyzing them to enable efficient semantic annotation and improve retrieval accuracy. The selected shape descriptors provide a set of linguistic terms commonly used to describe objects visually, and these terms serve as semantic concepts for labeling 3D models. Second, spatial relationships representing directional, topological, and distance relations are used to derive further high-level semantic features and to address the problem of automatic 3D model annotation. Based on the resulting semantic annotations and spatial concepts, an ontology for 3D model retrieval is constructed, from which additional concepts can be inferred. This ontology is then used to find 3D models similar to a given query model. We adopt a query-by-semantic-example approach, in which annotation is performed largely automatically. The proposed method is implemented in our 3D search engine (SB3DMR) and tested on the Princeton Shape Benchmark database.
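As a rough illustration of the kind of low-level shape descriptors and spatial relations the abstract refers to, the sketch below is a minimal, hypothetical example in Python with NumPy. The function names, thresholds, and the PCA-extent descriptor are our own assumptions for illustration only, not the descriptors or the annotation pipeline actually used in SB3DMR; it merely shows how geometric measurements over a mesh's vertices could be mapped to linguistic terms and a simple directional relation.

```python
import numpy as np

def pca_extents(vertices):
    """Extents of the vertex cloud along its principal axes: a crude
    translation- and rotation-invariant shape descriptor (illustrative only)."""
    centered = vertices - vertices.mean(axis=0)
    cov = np.cov(centered.T)                  # 3x3 covariance of the cloud
    _, eigvecs = np.linalg.eigh(cov)          # principal axes
    projected = centered @ eigvecs
    extents = projected.max(axis=0) - projected.min(axis=0)
    return np.sort(extents)[::-1]             # longest extent first

def linguistic_shape_terms(vertices, elong_thresh=2.0, flat_thresh=2.0):
    """Map extent ratios to hypothetical linguistic labels used as semantic concepts."""
    a, b, c = pca_extents(vertices)
    terms = []
    if a / max(b, 1e-9) > elong_thresh:
        terms.append("elongated")
    if b / max(c, 1e-9) > flat_thresh:
        terms.append("flat")
    if not terms:
        terms.append("compact")
    return terms

def directional_relation(centroid_a, centroid_b):
    """Crude directional relation between two objects based on their centroids."""
    offset = centroid_a - centroid_b
    if offset[2] > 0.5 * np.linalg.norm(offset):
        return "above"
    if offset[2] < -0.5 * np.linalg.norm(offset):
        return "below"
    return "beside"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rod = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 1.0])   # elongated cloud
    print(linguistic_shape_terms(rod))                            # e.g. ['elongated']
    print(directional_relation(np.array([0.0, 0.0, 3.0]), np.zeros(3)))  # 'above'
```

In a retrieval setting of the kind described above, such terms and relations would be attached to models as concept instances in the ontology, so that a query model annotated with the same concepts can be matched against the database.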