Training a robust classifier and an accurate box regressor is difficult for occluded pedestrian detection. The traditionally adopted Intersection over Union (IoU) measure does not account for the occluded region of the object and therefore yields improper training samples. To address this issue, we propose a modification called visible IoU that explicitly incorporates the visible ratio when selecting samples. We then place a newly designed box sign predictor in parallel with the box regressor to separately predict the moving direction of training samples, which leads to higher localization accuracy by introducing a sign prediction loss during training and sign refining during testing. With these novelties, we obtain state-of-the-art performance on the CityPersons benchmark for occluded pedestrian detection.
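A minimal sketch of the idea behind visible IoU. The abstract only says that the visible ratio is incorporated when selecting samples, so the combination rule below (weighting the standard IoU by the ground truth's visible-area ratio) is an illustrative assumption, not necessarily the paper's exact formula; box coordinates are assumed to be [x1, y1, x2, y2].

```python
def iou(box_a, box_b):
    """Standard Intersection over Union between two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def visible_iou(proposal, full_box, visible_box):
    """One plausible instantiation (assumption): down-weight the usual IoU
    by the visible ratio, so matches against heavily occluded ground truths
    count for less when selecting positive training samples."""
    full_area = (full_box[2] - full_box[0]) * (full_box[3] - full_box[1])
    vis_area = (visible_box[2] - visible_box[0]) * (visible_box[3] - visible_box[1])
    visible_ratio = vis_area / full_area
    return iou(proposal, full_box) * visible_ratio
```

A proposal perfectly covering a pedestrian whose lower half is occluded would then score 0.5 instead of 1.0, pushing sample selection toward proposals matched to well-visible pedestrians.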
Pedestrian detection in crowds is a challenging task because of intra-class occlusion, so the detector needs more prior information to be robust against it. The human head area is a naturally strong cue because of its stable appearance, high visibility, and fixed relative location to the body. Inspired by this, we adopt an extra branch that conducts semantic head detection in parallel with the traditional body branch. Instead of manually labeling head regions, we use weak annotations inferred directly from body boxes, which we name 'semantic heads'. In this way, head detection is formulated as using a specific part of the labeled box to detect the corresponding part of the human body, which surprisingly improves both performance and robustness to occlusion. Moreover, the head-body alignment structure is explicitly exploited by introducing an Alignment Loss, which functions in a self-supervised manner. Based on these components, we propose the head-body alignment net (HBAN), which aims to enhance pedestrian detection by fully utilizing the human head prior. Comprehensive evaluations on the CityPersons dataset demonstrate the effectiveness of HBAN.
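The weak 'semantic head' annotation can be sketched as a fixed sub-region of the full-body box. The abstract does not give the geometry, so the ratios below (head occupies the top quarter of the body height and half of its width, centered horizontally) are illustrative assumptions:

```python
def semantic_head_box(body_box, head_ratio=0.25, width_ratio=0.5):
    """Infer a weak 'semantic head' box from a full-body box [x1, y1, x2, y2].
    head_ratio and width_ratio are hypothetical values for illustration,
    not the paper's settings."""
    x1, y1, x2, y2 = body_box
    w, h = x2 - x1, y2 - y1
    head_h = h * head_ratio          # head height as a fraction of body height
    head_w = w * width_ratio         # head width as a fraction of body width
    cx = (x1 + x2) / 2.0             # horizontally center the head on the body
    return [cx - head_w / 2.0, y1, cx + head_w / 2.0, y1 + head_h]
```

Because the head box is a deterministic function of the body box, no extra human labeling is needed, which is exactly what makes the annotation "weak".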
Objective: Poor experience with Invisalign treatment affects patient compliance and, thus, treatment outcome. Knowing the potential discomfort level in advance can help orthodontists better prepare the patient to overcome the difficult stage. This study aimed to construct artificial neural networks (ANNs) to predict patient experience in the early stages of Invisalign treatment. Methods: In total, 196 patients were enrolled. Data collection included questionnaires on pain, anxiety, and quality of life (QoL). Three four-layer fully connected multilayer perceptrons trained with backpropagation were constructed to predict patient experience of the treatment. The input data comprised 17 clinical features. The partial derivative method was used to calculate the relative contribution of each input in the ANNs. Results: The predictive success rates for pain, anxiety, and QoL were 87.7%, 93.4%, and 92.4%, respectively. The ANNs for predicting pain, anxiety, and QoL yielded areas under the curve of 0.963, 0.992, and 0.982, respectively. The number of teeth with lingual attachments was the most important factor affecting the outcome of negative experience, followed by the number of lingual buttons and upper incisors with attachments. Conclusions: The ANNs constructed in this preliminary study show good accuracy in predicting patient experience (i.e., pain, anxiety, and QoL) of Invisalign treatment. An artificial intelligence system developed for predicting patient comfort has potential clinical application to enhance patient compliance.
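The network architecture can be sketched as a four-layer fully connected forward pass from the 17 clinical features to a probability of a negative experience. The hidden-layer sizes below are illustrative assumptions (the abstract does not report them), and the weights are random placeholders rather than the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four fully connected layers: 17 inputs -> hidden layers -> 1 output.
# Hidden sizes 32/16/8 are hypothetical; the study does not report them.
sizes = [17, 32, 16, 8, 1]
params = [(rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(x):
    """Forward pass: (batch, 17) clinical features -> probability in (0, 1)."""
    h = x
    for W, b in params[:-1]:
        h = relu(h @ W + b)          # hidden layers use ReLU activations
    W, b = params[-1]
    return sigmoid(h @ W + b)        # sigmoid output for a binary outcome

x = rng.normal(size=(1, 17))         # one patient's 17 clinical features
p = predict(x)                       # shape (1, 1), a probability in (0, 1)
```

In the study, separate networks of this kind were trained for pain, anxiety, and QoL, and the partial derivative of the output with respect to each input was used to rank feature contributions.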