Many recognition algorithms depend on careful positioning of an object into a canonical pose, so that the positions of features relative to a fixed coordinate system can be examined. Currently, this positioning is done either manually or by training a class-specialized learning algorithm with samples of the class that have been hand-labeled with parts or poses. In this paper, we describe a novel method to achieve this positioning using poorly aligned examples of a class with no additional labeling. Given a set of unaligned exemplars of a class, such as faces, we automatically build an alignment mechanism, without any additional labeling of parts or poses in the data set. Using this alignment mechanism, new members of the class, such as faces resulting from a face detector, can be precisely aligned for the recognition process. Our alignment method improves performance on a face recognition task, both over unaligned images and over images aligned with a face alignment algorithm specifically developed for and trained on hand-labeled face images. We also demonstrate its use on an entirely different class of objects (cars), again without providing any information about parts or pose to the learning algorithm.
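The abstract does not spell out the alignment procedure, so the sketch below illustrates one well-known way to jointly align a set of images without part or pose labels: congealing, i.e., greedy minimization of per-pixel stack entropy over per-image transforms. The choice of congealing, the translation-only transforms, the binary-image assumption, and all function names are illustrative assumptions, not a statement of the authors' exact method.

```python
# Minimal congealing-style joint alignment sketch (assumed technique, for illustration).
# Input: a stack of roughly binarized images of the same object class, shape (N, H, W).
import numpy as np

def stack_entropy(images):
    """Sum of per-pixel entropies across the image stack (binary pixels assumed)."""
    p = np.clip(images.mean(axis=0), 1e-6, 1 - 1e-6)  # per-pixel probability of "on"
    return float(np.sum(-p * np.log(p) - (1 - p) * np.log(1 - p)))

def shift(image, dx, dy):
    """Translate an image by integer offsets (translation-only transforms for simplicity)."""
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

def congeal(images, n_iters=10, offsets=(-1, 0, 1)):
    """Greedy coordinate descent: nudge each image's transform to lower the stack entropy."""
    images = images.copy()
    for _ in range(n_iters):
        for i in range(len(images)):
            best_entropy, best_image = stack_entropy(images), images[i]
            for dx in offsets:
                for dy in offsets:
                    trial = images.copy()
                    trial[i] = shift(images[i], dx, dy)
                    e = stack_entropy(trial)
                    if e < best_entropy:
                        best_entropy, best_image = e, trial[i]
            images[i] = best_image
    return images  # aligned stack; the accumulated per-image shifts define the alignment
```

In a full system, the learned per-image transforms (rather than the pixels themselves) would be stored so that new detections can be brought into the same canonical frame before recognition.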
Object identification is the task of distinguishing specific objects that belong to the same class, such as cars. We often need to recognize an object that we have only seen a few times. In fact, we often observe only one example of a particular object before we need to recognize it again. Thus we are interested in building a system which can learn to extract distinctive markers from a single example and which can then be used to identify the object in another image as "same" or "different". Previous work by Ferencz et al. introduced the notion of hyper-features, which are properties of an image patch that can be used to estimate the utility of the patch in subsequent matching tasks. In this work, we show that hyper-feature based models can be more efficiently estimated using discriminative training techniques. In particular, we describe a new hyper-feature model based upon logistic regression that shows improved performance over previously published techniques. Our approach significantly outperforms Bayesian face recognition, which is considered a standard benchmark in the field.
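As a concrete illustration of discriminatively trained "same"/"different" patch matching with logistic regression, the sketch below fits a classifier on features derived from patch-descriptor pairs. The feature construction (elementwise absolute difference of descriptors) and all names are assumptions made for this example; they are not the authors' exact hyper-feature model.

```python
# Illustrative logistic-regression matcher for "same"/"different" patch pairs
# (assumed feature construction, not the published hyper-feature model).
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(desc_a, desc_b):
    """Turn a pair of patch descriptors into one feature vector."""
    return np.abs(desc_a - desc_b)  # per-dimension distance between descriptors

def train_matcher(pairs, labels):
    """pairs: list of (desc_a, desc_b) arrays; labels: 1 = same object, 0 = different."""
    X = np.array([pair_features(a, b) for a, b in pairs])
    y = np.array(labels)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

def match_probability(model, desc_a, desc_b):
    """Estimated probability that two patches come from the same object."""
    return float(model.predict_proba(pair_features(desc_a, desc_b)[None, :])[0, 1])
```

The appeal of a discriminative model here is that the weights are trained directly on the same/different decision, rather than on modeling the appearance of each class separately.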
Postoperative nausea and vomiting (PONV) are frequent and distressing complications after neurosurgical procedures. We evaluated the efficacy of ondansetron and granisetron in preventing PONV after supratentorial craniotomy. In a randomized, double-blind, placebo-controlled trial, 90 adult American Society of Anesthesiologists physical status I-II patients were included. A standard anesthesia technique was followed. Patients were divided into 3 groups to receive either placebo (saline), ondansetron 4 mg, or granisetron 1 mg intravenously at the time of dural closure. After extubation, episodes of nausea and vomiting were noted for 24 hours postoperatively. Statistical analysis was performed using the chi-square test and 1-way analysis of variance. Demographic data, duration of surgery, intraoperative fluid and analgesic requirements, and postoperative pain (visual analog scale) scores were comparable in all 3 groups. The incidence of vomiting in 24 hours, severe emetic episodes, and the requirement for rescue antiemetics were lower in the ondansetron and granisetron groups than in the placebo group (P<0.001). Both study drugs had a comparable effect on vomiting. However, the incidence of nausea was comparable in all 3 groups (P=0.46). A favorable influence on patient satisfaction scores and the number needed to treat to prevent emesis was seen in the 2 drug groups. No significant correlation was found between neurosurgical factors (presence of midline shift, mass effect, pathologic diagnosis of tumor, site of tumor) and the occurrence of PONV. We conclude that ondansetron 4 mg and granisetron 1 mg are comparably effective at preventing emesis after supratentorial craniotomy. However, neither drug prevented nausea effectively.