Abstract: This work, inspired by the idea of "Computing with Words and Perceptions" proposed by Zadeh in [57,59], focuses on how to transform measurements into perceptions [22] for the problem of map building by autonomous mobile robots. We propose to model the perceptions obtained from sonar sensors as two grid maps: one for obstacles and another for empty spaces. The rules used to build and integrate these maps are expressed as linguistic descriptions and modeled by fuzzy rules. The main difference between this approach and other studies reported in the literature is that the method presented here rests on the hypothesis that the concepts "occupied" and "empty" are antonyms, rather than complementary (as in probabilistic approaches) or independent (as in previous fuzzy models). Controlled experiments with a real robot in three representative indoor environments have been performed and the results are presented. We offer a qualitative and quantitative comparison of the maps estimated by the probabilistic approach, the previous fuzzy method, and the new antonyms-based fuzzy approach. The maps obtained with the antonyms-based approach are shown to be better defined, to capture the shape of walls and empty spaces more faithfully, and to contain fewer errors due to rebounds and short echoes. Furthermore, despite the noise and low resolution inherent to the sonar sensors used, the maps obtained are accurate and tolerant to imprecision.
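The core idea of keeping two separate grid maps can be sketched as follows. This is an illustrative sketch only: the class and method names, the grid layout, and the max t-conorm used to integrate evidence are assumptions, not the paper's exact formulation. The point shown is that "occupied" and "empty" are maintained as two independent fuzzy maps, so a cell may have low membership in both (unexplored) or high membership in both (conflicting evidence, e.g. from a rebound), states that a complementary model cannot represent.

```python
class AntonymGridMap:
    """Hypothetical two-map fuzzy grid: one membership grid per concept."""

    def __init__(self, size):
        # one fuzzy membership grid per linguistic concept, both start at 0
        self.occupied = [[0.0] * size for _ in range(size)]
        self.empty = [[0.0] * size for _ in range(size)]

    def update_from_sonar(self, cells_before_echo, echo_cell, confidence):
        """Integrate one sonar reading with membership `confidence` in [0, 1].

        Cells crossed by the beam before the echo gain "empty" evidence;
        the cell at the echo distance gains "occupied" evidence. Each map
        is updated on its own (max t-conorm), not as the complement of
        the other.
        """
        for i, j in cells_before_echo:
            self.empty[i][j] = max(self.empty[i][j], confidence)
        i, j = echo_cell
        self.occupied[i][j] = max(self.occupied[i][j], confidence)

    def conflict(self, i, j):
        """Degree to which a cell is claimed by both concepts (min t-norm)."""
        return min(self.occupied[i][j], self.empty[i][j])
```

Because the two maps are decoupled, rebound and short-echo artifacts show up as cells with high membership in both maps, which can be detected via `conflict` and filtered, rather than silently averaged away.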
This paper focuses on the problem of how to measure, in a reproducible way, the localization precision of a mobile robot. In particular, localization algorithms that follow the classic prediction-correction model are considered. We propose a performance metric based on a formalization of the error sources that affect the pose estimation error. Performance results of a localization algorithm for a real mobile robot are presented. The metric simultaneously satisfies the following properties: (1) it effectively measures the estimation error of a pose estimation algorithm, (2) it is reproducible, (3) it clearly separates the contribution of the correction part of the algorithm from that of the prediction part, and (4) it simplifies the analysis of algorithm performance with respect to the many influencing factors. The proposed metric allows the validation and evaluation of a localization algorithm in a systematic and standard way, reducing workload and design time.
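One way to separate the prediction and correction contributions, as the abstract describes, is to evaluate the pose error twice per cycle: once after the prediction step and once after the correction step. The sketch below is a hedged illustration, not the paper's metric: the function names are hypothetical, a simple Euclidean position error stands in for the formalized error sources, and ground-truth poses are assumed to be available at each cycle.

```python
import math

def position_error(pose, truth):
    """Euclidean distance between estimated and true (x, y) positions.

    Poses are (x, y, theta) tuples; heading error is ignored in this sketch.
    """
    return math.hypot(pose[0] - truth[0], pose[1] - truth[1])

def decompose_cycle_errors(predicted, corrected, truth):
    """Per prediction-correction cycle, measure the error after each step.

    The difference between the two errors attributes the improvement
    (or degradation) to the correction part of the algorithm, keeping
    its contribution separate from the prediction part.
    """
    report = []
    for p, c, t in zip(predicted, corrected, truth):
        e_pred = position_error(p, t)
        e_corr = position_error(c, t)
        report.append({
            "prediction_error": e_pred,
            "correction_error": e_corr,
            "correction_gain": e_pred - e_corr,
        })
    return report
```

Logging both error series over a repeatable trajectory makes runs comparable across algorithms and parameter settings, which is the reproducibility property the abstract emphasizes.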