Keywords: questions, query responses, corpus study, KoS
In this article we consider the phenomenon of answering a query with a query. Although such answers are common, no large-scale, corpus-based characterization exists, with the exception of clarification requests. After briefly reviewing different theoretical approaches to this subject, we present a corpus study of query responses in the British National Corpus and develop a taxonomy for query responses. We identify a variety of response categories that have not been formalized in previous dialogue work, particularly those relevant to adversarial interaction. We show that different response categories have significantly different rates of subsequent answer provision. We provide a formal analysis of the response categories within the framework of KoS.
Our aim is to model the behaviour of a cognitive agent trying to solve a complex problem by dividing it into sub-problems, but failing to solve some of these sub-problems. We use the powerful framework of erotetic search scenarios (ESS) combined with Kleene's strong three-valued logic. ESS, defined on the grounds of Inferential Erotetic Logic, has proved to be a useful logical tool for modelling cognitive goal-directed processes. Using the logical tools of ESS and the three-valued logic, we show how an agent could solve the initial problem despite the fact that the sub-problems remain unsolved. Thus our model not only indicates missing information but also specifies the contexts in which the problem-solving process may end in success despite the lack of information. We also show that this model of problem solving may find use in the analysis of natural language dialogues.
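To make the underlying mechanism concrete, here is a minimal sketch (not taken from the paper) of Kleene's strong three-valued connectives, with Python's None standing in for the "unknown" value; it illustrates one context in which the initial problem can be settled even though a sub-problem remains unsolved.

```python
# Sketch of Kleene's strong three-valued connectives.
# Truth values: True, False, and None (standing in for "unknown").

def k3_and(p, q):
    """Strong Kleene conjunction: a False conjunct dominates; otherwise unknown propagates."""
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

def k3_or(p, q):
    """Strong Kleene disjunction: a True disjunct dominates; otherwise unknown propagates."""
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

def k3_not(p):
    """Strong Kleene negation: unknown stays unknown."""
    return None if p is None else not p

# Example: the initial problem is "A or B"; sub-problem B remains unsolved,
# yet the agent can answer the initial question once A is established.
print(k3_or(True, None))   # True  -> initial problem solved despite missing information
print(k3_and(True, None))  # None  -> here the unsolved sub-problem blocks a verdict
```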
The Uncanny Valley Hypothesis (UVH, proposed in the 1970s) suggests that looking at or interacting with almost human-like artificial characters triggers eeriness or discomfort. We studied how well subjects can assess degrees of human likeness in computer-generated characters. We conducted two studies, in which subjects were asked to assess the human likeness of given computer-generated models (Study 1) and to point out the most typical model for a given category (Study 2). The results suggest that evaluating the way human likeness is assessed should be an integral part of UVH research.