Machine learning algorithms often need to be trained with sample datasets prepared by humans who label the content manually. However, data labeling is tedious and repetitive: humans often replicate trivial decisions that could have been automated, while not getting sufficient time to devote to cases where their judgment would have added more value. Much recent research has therefore focused on approaches to labeling that optimize information gain by assisting the user with trivial cases, or directing them toward more complex ones, so that the user can devote time and attention to subtle cues and ambiguous cases, potential bias (e.g., Binns et al., 2017), or inventing new label categories (e.g., Kulesza et al., 2014).

The consequence of this increasingly common strategy is that labeling interfaces become more conversational. The system provides implicit feedback to the user about the model under development, while the user's decisions on more difficult cases implicitly challenge the system to update the model it holds. This perspective shifts from the relatively predictable view of machine learning, in which facts are simply repeated until the machine retains them, to one that more closely resembles teaching and learning between two humans, where the knowledge being acquired is dialogically shared, probed, and questioned, as in a conversation.

We report investigations into a characteristic of human conversation that has not previously been explored for its relevance to interactive machine learning: rhythmic timing in conversation. This can be contrasted, on the one hand, with earlier models of "dialog" in HCI where the user takes the initiative, issuing commands while the system responds with information as soon as it can in order to be seen as "smooth" (Miller, 1968; Nielsen, 1993), and, on the other hand, with task-optimization models where the system takes the initiative, prompting for information that the user supplies as quickly and efficiently as possible, with minimum latency (Bernstein et al., 2011). In mixed-initiative interaction (Horvitz, 1999), neither of these existing design models is appropriate, and we suggest that attention to rhythm and timing becomes far more important. In design approaches where the system has the potential to complete the user's actions, or even to take the initiative and make decisions, there will be a back-and-forth flow of initiative between the user and the system, resembling participatory turn-taking in human conversation. In human conversation, poorly timed participation is associated with negative or inappropriate social effects (see, e.g., Benus et al., 2011; Richardson et al., 2006). We therefore ask: during mixed-initiative interaction such as interactive labeling, how do the rhythmic timing characteristics of the interaction influence users' experience, and how should they be designed?
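For readers unfamiliar with the labeling strategies described at the start of this section, the following is a minimal sketch of one common way of "optimizing information gain": the current model auto-handles items it is confident about and routes its most uncertain items to the human first. The classifier, synthetic data, threshold, and batch size are illustrative assumptions, not the systems studied here.

```python
# Minimal sketch: uncertainty-based routing of items for human labeling.
# Confident items are treated as trivial (pre-filled or auto-accepted);
# the most uncertain items are queued for the user, so their attention
# goes where their judgment adds the most value.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small labeled seed set and a larger unlabeled pool (synthetic, for illustration).
X_seed = rng.normal(size=(40, 5))
y_seed = (X_seed[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)
X_pool = rng.normal(size=(200, 5))

model = LogisticRegression().fit(X_seed, y_seed)

# Confidence of the current model on each unlabeled item.
confidence = model.predict_proba(X_pool).max(axis=1)

CONFIDENT = 0.90  # illustrative threshold, not a recommended value

auto_label_idx = np.flatnonzero(confidence >= CONFIDENT)  # trivial cases
ask_human_idx = np.argsort(confidence)[:10]               # most ambiguous cases, shown first

print(f"{len(auto_label_idx)} items auto-labeled, "
      f"{len(ask_human_idx)} ambiguous items queued for the user")
```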