Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https:
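The abstract mentions a transparent entity-labelling metric without detailing it. As a rough, generic illustration of span-level entity scoring (not the actual SLURP metric, whose matching rule is not given here), one might compute precision, recall, and F1 over (type, value) pairs per utterance:

```python
# Generic sketch of span-level entity precision/recall/F1. The exact-match rule
# over (entity_type, entity_value) pairs is an assumption for illustration only;
# it is not necessarily the metric released with SLURP.
from collections import Counter

def entity_prf(gold, pred):
    """gold/pred: one list of (entity_type, entity_value) tuples per utterance."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g_count, p_count = Counter(g), Counter(p)
        overlap = sum((g_count & p_count).values())  # matched entities
        tp += overlap
        fp += sum(p_count.values()) - overlap
        fn += sum(g_count.values()) - overlap
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: the date is recovered correctly, the person name is not.
gold = [[("date", "tomorrow"), ("person", "john")]]
pred = [[("date", "tomorrow"), ("person", "jon")]]
print(entity_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

Scoring at the entity level like this makes error analysis transparent, since false positives and false negatives can be broken down per entity type.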
Scientific experiments and robotic competitions share some common traits that can put the debate about developing better experimental methodologies and the replicability of results in robotics research on more solid ground. In this context, the Robot Competitions Kick Innovation in Cognitive Systems and Robotics (RoCKIn) project aims to develop competitions that come close to scientific experiments, providing an objective performance evaluation of robot systems under controlled and replicable conditions. In this article, by further articulating replicability into reproducibility and repeatability, and by considering some results from the first RoCKIn competition in 2014, we show that the RoCKIn approach offers tools that enable the replicability of experimental results. Within the debate about the development of rigorous experimental methodologies in robotics research, robotic competitions have emerged as a way to promote the comparison of different algorithms, to benchmark complete tasks and individual functionalities, and to assess overall system performance.
The goal of this work is to describe how robots interact with complex city environments, and to identify the main characteristics of an emerging field that we call Robot-City Interaction (RCI). Given the central role recently gained by modern cities as use cases for the deployment of advanced technologies, and the advancements achieved in robotics in recent years, we assume that there is an increasing interest both in integrating robots into urban ecosystems and in studying how the two can interact and benefit from each other. Our challenge therefore becomes to verify the emergence of such an area, to assess its current state, and to identify the main characteristics, core themes, and research challenges associated with it. This is achieved by reviewing a preliminary body of work contributing to this area, which we classify and analyze according to an analytical framework comprising a set of key dimensions for RCI. Such a review not only serves as a preliminary state of the art in the area, but also allows us to identify the main characteristics of RCI and its research landscape.
Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations. To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three subtasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset, CompGuessWhat?!, as an instance of this framework for evaluating the quality of learned neural representations, in particular with respect to attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with abstract and situated attributes. By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44.27). In addition, they do not learn strategies or representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (best zero-shot accuracy of 50.06%).
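The diagnostic-classifier idea referenced above can be illustrated with a minimal probing setup: train a simple classifier per attribute on frozen representations and report average F1. The sketch below assumes generic arrays X (object representations) and Y (binary attribute labels) and uses linear probes; it shows the general probing recipe, not the exact CompGuessWhat?! evaluation pipeline.

```python
# Minimal probing sketch: one logistic-regression probe per attribute over frozen
# representations. X has shape (n_objects, d); Y has shape (n_objects, n_attributes)
# with binary labels. Illustrative assumption, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def probe_attributes(X, Y):
    """Train one linear probe per attribute and return the average F1."""
    scores = []
    for a in range(Y.shape[1]):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, Y[:, a], test_size=0.2, random_state=0)
        if len(np.unique(y_tr)) < 2:  # skip attributes missing from the training split
            continue
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        scores.append(f1_score(y_te, clf.predict(X_te), zero_division=0))
    return float(np.mean(scores)) if scores else 0.0

# Toy usage with random data standing in for real learned representations.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
Y = (rng.random(size=(500, 10)) > 0.7).astype(int)
print(f"average attribute F1: {probe_attributes(X, Y):.3f}")
```

Low probe F1 is then read as evidence that the frozen representations do not encode the attributes well, which is the kind of conclusion the abstract draws.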
Human-Robot Interaction is a key enabling feature to support the introduction of robots in everyday environments. However, robots are currently incapable of building representations of the environment that allow both for the execution of complex tasks and for an easy interaction with the user requesting them. In this paper, we focus on semantic mapping, namely the problem of building a representation of the environment that combines metric and symbolic information about the elements of the environment and the objects therein. Specifically, we extend previous approaches by enabling on-line semantic mapping, which allows elements acquired through long-term interaction with the user to be added to the representation. The proposed approach has been experimentally validated on different kinds of environments, with several users, and on multiple robotic platforms.
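To make "combining metric and symbolic information" concrete, here is a minimal sketch of what an on-line semantic map entry could look like: each symbolic element (for example, an object named by the user) is anchored to metric coordinates in the map frame. The field names and structure are illustrative assumptions, not the data structures used in the paper.

```python
# Hypothetical semantic-map sketch: symbolic labels anchored to metric positions,
# with an on-line add() so entries acquired during user interaction can be appended.
from dataclasses import dataclass, field

@dataclass
class SemanticEntry:
    label: str                     # symbolic tag, e.g. "fridge"
    category: str                  # symbolic type, e.g. "appliance"
    position: tuple[float, float]  # metric (x, y) in the map frame, in metres
    source: str = "user"           # how the entry was acquired (dialogue, perception, ...)

@dataclass
class SemanticMap:
    entries: list[SemanticEntry] = field(default_factory=list)

    def add(self, entry: SemanticEntry) -> None:
        """On-line update: new elements are appended as they are acquired."""
        self.entries.append(entry)

    def find(self, label: str) -> list[SemanticEntry]:
        return [e for e in self.entries if e.label == label]

# Example: the user tells the robot where the fridge is; the map is updated on-line.
smap = SemanticMap()
smap.add(SemanticEntry("fridge", "appliance", (3.2, -1.5)))
print(smap.find("fridge"))
```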