We present a novel bimodal optoelectronic sensor based on Fresnel lenses, together with an associated stereo-recording device that records the wingbeat event of an insect in flight as backscattered and extinction light. We investigate the complementary information of these two sources of biometric evidence and then embed part of this technology in an electronic e-trap for fruit flies. The e-trap analyzes the spectral content of the wingbeat of an insect flying in and wirelessly reports counts and species identity. We design our devices to be optimized for detection accuracy and power consumption but, above all, to be affordable. Our aim is to broaden the use of electronic insect traps that report pest population levels from the field to a human-controlled agency in virtually real time. Our vision is to establish remote, automated monitoring of all insects of economic and hygienic importance at large spatial scales, using their wingbeat as biometric evidence. To this end, we provide open access to the implementation details, recordings, and classification code we developed.
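As a rough illustration of the spectral analysis described above (not the authors' released classification code), the following Python sketch extracts the dominant wingbeat frequency from a short optical recording and assigns the nearest species from a hypothetical reference table; the sampling rate, frequency band, and reference fundamentals are assumptions.

```python
# Minimal sketch (not the authors' released code): estimate the wingbeat
# fundamental from a short optical recording and classify the species by
# nearest reference frequency. Sample rate, band limits, and the reference
# table below are illustrative assumptions.
import numpy as np

FS = 8000          # assumed sampling rate of the optical sensor, Hz
BAND = (80, 1000)  # assumed band containing insect wingbeat harmonics, Hz

# Hypothetical reference wingbeat fundamentals (Hz) per species
REFERENCE = {"Ceratitis capitata": 350.0, "Bactrocera oleae": 210.0}

def wingbeat_fundamental(signal, fs=FS, band=BAND):
    """Return the dominant frequency of the wingbeat event within `band`."""
    windowed = (signal - signal.mean()) * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

def classify(signal):
    """Assign the species whose reference fundamental is closest."""
    f0 = wingbeat_fundamental(signal)
    return min(REFERENCE, key=lambda sp: abs(REFERENCE[sp] - f0)), f0

# Example with a synthetic 350 Hz "wingbeat" burst
t = np.arange(0, 0.1, 1.0 / FS)
species, f0 = classify(np.sin(2 * np.pi * 350 * t))
print(species, round(f0, 1))
```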
This paper examines the emotion and tone of language used by e-negotiation participants. Eight hundred e-negotiations of varying lengths were studied, and significant differences between successful and unsuccessful e-negotiations were uncovered. Participants in successful e-negotiations expressed significantly more positive emotion and agreeable language, and significantly less negative language, in their textual exchanges than participants in failed e-negotiations. Further, successful e-negotiations were shorter in elapsed time than unsuccessful ones. Logistic regression results indicate that the use of agreeable language throughout the e-negotiation process is a significant predictor of e-negotiation success, while the use of negative language is a significant predictor of failure only in the last half of the e-negotiation.
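For illustration only, the sketch below mirrors the shape of the reported analysis on synthetic data: a logistic regression predicting e-negotiation success from counts of positive, negative, and agreeable language. The feature definitions, coefficients, and data are assumptions, not the paper's.

```python
# Illustrative sketch (synthetic data, not the study's): logistic regression
# predicting e-negotiation success from language-use features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 800  # the study covered 800 e-negotiations; the data here is synthetic

# Columns: positive-emotion words, negative words, agreeable words (per 100 words)
X = rng.normal(loc=[5.0, 2.0, 3.0], scale=1.0, size=(n, 3))

# Synthetic outcome loosely following the reported pattern:
# more agreeable/positive and less negative language -> higher odds of agreement.
logits = 0.6 * X[:, 0] - 0.8 * X[:, 1] + 1.0 * X[:, 2] - 3.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
print(dict(zip(["positive", "negative", "agreeable"], model.coef_[0].round(2))))
```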
Recent advances have shown that clothing appearance provides important features for person re-identification and retrieval in surveillance and multimedia data. However, the regions from which such features are extracted are usually only very crudely segmented, due to the difficulty of segmenting highly articulated entities such as persons. In order to overcome the problem of unconstrained poses, we propose a segmentation approach based on a large number of part detectors. Our approach is able to separately segment a person's upper and lower clothing regions, taking into account the person's body pose. We evaluate our approach on the task of character retrieval on a new challenging data set and present promising results.
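As a purely conceptual sketch (the paper's part detectors are not reproduced here), the snippet below shows one way pose-aware part detections, assumed to arrive as bounding boxes, could be combined into separate upper- and lower-clothing masks; all part names and coordinates are hypothetical.

```python
# Conceptual sketch only: union hypothetical body-part bounding boxes
# (x0, y0, x1, y1) into separate upper- and lower-clothing binary masks.
import numpy as np

UPPER_PARTS = {"torso", "left_arm", "right_arm"}
LOWER_PARTS = {"hips", "left_leg", "right_leg"}

def union_mask(detections, parts, shape):
    """Union the bounding boxes of the selected parts into a binary mask."""
    mask = np.zeros(shape, dtype=bool)
    for name, (x0, y0, x1, y1) in detections.items():
        if name in parts:
            mask[y0:y1, x0:x1] = True
    return mask

# Hypothetical detections on a 200x100 image, e.g. from a pose-aware detector
detections = {
    "torso": (30, 40, 70, 110),
    "left_arm": (15, 45, 35, 100),
    "right_arm": (65, 45, 85, 100),
    "hips": (32, 105, 68, 130),
    "left_leg": (32, 125, 50, 190),
    "right_leg": (50, 125, 68, 190),
}
shape = (200, 100)
upper = union_mask(detections, UPPER_PARTS, shape)
lower = union_mask(detections, LOWER_PARTS, shape)
print(upper.sum(), lower.sum())
```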
Electronic negotiations are supported by a number of technologies, including e-mail, web-enabled decision support systems, and e-negotiation systems (ENSs). The features of the ENS used by a negotiator can affect the negotiation outcome because of the type and scope of support provided and its presentation. ENSs usually interface with users via a natural language system and/or a graphical display. This paper reports the effect of graphical representation on reaching agreement in bilateral negotiations conducted using the Inspire ENS, compared with negotiations conducted using the same system without graphical representation. No difference was observed in the proportion of dyads that reached agreement with graphical representation compared to the system without graphical support. Among dyads that reached agreement, participants using the system without graphical support submitted fewer offers. The average message size per dyad was 334 words greater for successful negotiations without graphical support, although the number of messages exchanged by the negotiators was not significantly different. The incongruence between the information presentation format and the negotiation task is thought to require more extensive textual explanation of positions and offer rationale to compensate for the lack of graphical support.