Learning product representations that reflect complementary relationships plays a central role in e-commerce recommender systems. In the absence of a product-relationship graph, which existing methods rely on, complementary relationships must be detected directly from noisy and sparse customer purchase activities. Furthermore, unlike simpler relationships such as similarity, complementariness is asymmetric and non-transitive. Standard representation learning uses only a single set of embeddings, which is ill-suited to modelling these properties of complementariness. We propose knowledge-aware learning with dual product embeddings to address these challenges. We encode contextual knowledge into the product representations via multi-task learning to alleviate the sparsity issue. By explicitly modelling user bias terms, we separate the noise of customer-specific preferences from complementariness. Furthermore, we adopt a dual-embedding framework to capture the intrinsic properties of complementariness and provide a geometric interpretation motivated by classic separating-hyperplane theory. Finally, we propose a Bayesian network structure that unifies all the components and subsumes several popular models as special cases. The proposed method compares favourably to state-of-the-art methods in downstream classification and recommendation tasks. We also develop an implementation that scales efficiently to a dataset with millions of items and customers.
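The abstract does not include code, but the dual-embedding idea can be illustrated with a minimal sketch: each product receives a separate "source" and "target" embedding, so the complementarity score of A→B need not equal that of B→A, and a user bias term absorbs customer-specific preference noise. The class and parameter names below are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DualEmbeddingComplementarity(nn.Module):
    """Minimal sketch (assumed, not the paper's model): dual embeddings make the
    score asymmetric, so score(a -> b) can differ from score(b -> a)."""

    def __init__(self, num_products: int, num_users: int, dim: int = 64):
        super().__init__()
        self.source = nn.Embedding(num_products, dim)  # product in the "query" role
        self.target = nn.Embedding(num_products, dim)  # product in the "complement" role
        self.user_bias = nn.Embedding(num_users, 1)    # separates user-specific preference noise

    def forward(self, query_ids, complement_ids, user_ids):
        s = self.source(query_ids)                 # (batch, dim)
        t = self.target(complement_ids)            # (batch, dim)
        b = self.user_bias(user_ids).squeeze(-1)   # (batch,)
        # Swapping query and complement roles changes the score: asymmetry by construction.
        return (s * t).sum(dim=-1) + b
```

Such a model would typically be trained with a binary cross-entropy loss on observed co-purchase pairs against sampled negatives; that training loop is omitted here.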
One key technique people use in conversation and collaboration is conversational repair. Self-repair is the recognition and attempted correction of one's own mistakes. We investigate how the self-repair of errors by intelligent voice assistants affects user interaction. In a controlled human-participant study (N = 101), participants asked Amazon Alexa to perform four tasks, and we manipulated whether Alexa would "make a mistake" understanding the participant (for example, playing heavy metal in response to a request for relaxing music) and whether Alexa would perform a correction (for example, stating, "You don't seem pleased. Did I get that wrong?"). We measured the impact of self-repair on the participant's perception of the interaction in four conditions: correction (mistakes made and repair performed), undercorrection (mistakes made, no repair performed), overcorrection (no mistakes made, but repair performed), and control (no mistakes made and no repair performed). Subsequently, we conducted free-response interviews with each participant about their interactions. This study finds that self-repair greatly improves people's assessment of an intelligent voice assistant when a mistake has been made, but can degrade assessment when no correction is needed. However, we find that the positive impact of self-repair in the wake of an error outweighs the negative impact of overcorrection. In addition, participants who had recently experienced an error saw increased value in self-repair as a feature, regardless of whether they experienced a repair themselves.
People interacting with voice assistants are often frustrated by voice assistants' frequent errors and inability to respond to backchannel cues. We introduce an open-source video dataset of 21 participants' interactions with a voice assistant, and explore the possibility of using this dataset to enable automatic error recognition to inform self-repair. The dataset includes clipped and labeled videos of participants' faces during free-form interactions with the voice assistant, recorded from the smart speaker's perspective. To validate our dataset, we emulated a machine learning classifier by asking crowdsourced workers to recognize voice assistant errors from watching soundless video clips of participants' reactions. We found trends suggesting it is possible to determine the voice assistant's performance from a participant's facial reaction alone. This work posits elicited datasets of interactive responses as a key step toward improving error recognition for repair in voice assistants across a wide variety of applications.
Harvester-mounted yield monitor sensors are expensive and require calibration and data cleaning. Therefore, we evaluated six vegetation indices (VI) from unmanned aerial system (Quantix™ Mapper) imagery for corn (Zea mays L.) yield prediction. A field trial was conducted with N sidedress treatments applied at one of four growth stages (V4, V6, V8, or V10), compared against zero-N and N-rich controls. Normalized difference vegetation index (NDVI) and enhanced vegetation index 2 (EVI2), based on flights at R4, resulted in the most accurate yield estimations, as long as sidedressing was performed before V6. Yield estimations based on earlier flights were less accurate. Estimations were most accurate when imagery from both the N-rich and zero-N control plots was included, but eliminating the zero-N data only slightly reduced the accuracy. Use of a ratio approach (VI_Trt/VI_N-rich and Yield_Trt/Yield_N-rich) enabled the extension of findings across fields and only slightly reduced model performance. Finally, a smaller plot size (9 or 75 m² compared to 150 m²) resulted in slightly reduced model performance. We concluded that accurate yield estimates can be obtained using NDVI and EVI2, as long as there is an N-rich strip in the field, sidedressing is performed prior to V6, and sensing takes place at R3 or R4.
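As background, NDVI and EVI2 are standard combinations of near-infrared (NIR) and red reflectance; the abstract does not define them. The sketch below shows the index formulas and the ratio normalization against an N-rich reference strip described above, using hypothetical reflectance values; it is not the paper's processing pipeline.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band enhanced vegetation index: 2.5 * (NIR - Red) / (NIR + 2.4 * Red + 1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Hypothetical per-plot mean reflectance (0-1 scale) extracted from an R4 flight.
nir_trt, red_trt = 0.48, 0.06   # sidedressed treatment plot
nir_ref, red_ref = 0.52, 0.05   # N-rich reference strip in the same field

# Ratio approach from the abstract: normalise each plot's VI by the N-rich reference
# so that yield models can be transferred across fields.
ndvi_ratio = ndvi(nir_trt, red_trt) / ndvi(nir_ref, red_ref)
evi2_ratio = evi2(nir_trt, red_trt) / evi2(nir_ref, red_ref)
print(f"NDVI ratio: {ndvi_ratio:.3f}, EVI2 ratio: {evi2_ratio:.3f}")
```

The same functions apply elementwise if the reflectance values are per-pixel arrays rather than plot means.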