Background: Quantitative measurement of wound area is of great significance in clinical trials, wound pathological analysis, and daily patient care. 2D methods cannot cope with the problems caused by human body curvature and differing camera shooting angles. Our objective is to collect wound images simply, measure wound areas accurately, and overcome the shortcomings of 2D methods.

Results: We propose a 3D-transformation-based method to measure wound area on the human body surface, which combines structure from motion (SFM), least squares conformal mapping (LSCM), and image segmentation. The method captures 2D images of the wound, with an adhesive tape scale placed next to it, using a smartphone, and performs 3D reconstruction from the images based on SFM. It then uses LSCM to unwrap the UV map of the 3D model. Finally, it applies interactive image segmentation to extract and measure the wound. Our system yields state-of-the-art results on a dataset of 118 wounds on 54 patients and performs with an accuracy of 0.97. The Pearson correlation, standardized regression coefficient, and adjusted R-squared of our method are 0.999, 0.895, and 0.998, respectively.

Conclusions: A smartphone is used to capture wound images, which lowers costs, lessens dependence on hardware, and avoids the risk of infection. The quantitative calculation of 3D wound area is realized, solving challenges that 2D methods cannot and achieving good accuracy.
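A minimal sketch of the final measurement step in such a pipeline is given below: it converts a segmented wound region on the unwrapped UV map into physical area, using the adhesive-tape marker as the scale reference. It assumes the SFM reconstruction, LSCM unwrapping, and segmentation have already produced binary masks; the function and variable names (wound_area_cm2, tape_mask, tape_area_cm2) are illustrative, not taken from the paper.

import numpy as np

def wound_area_cm2(wound_mask: np.ndarray,
                   tape_mask: np.ndarray,
                   tape_area_cm2: float) -> float:
    """Convert a segmented wound region on the unwrapped UV map into
    physical area, using the adhesive-tape marker as the local scale
    reference (illustrative sketch, not the paper's implementation)."""
    tape_px = np.count_nonzero(tape_mask)
    wound_px = np.count_nonzero(wound_mask)
    if tape_px == 0:
        raise ValueError("tape scale not found in the segmentation")
    cm2_per_px = tape_area_cm2 / tape_px   # local scale derived from the marker
    return wound_px * cm2_per_px

# Illustrative usage with synthetic masks: a 2 cm x 2 cm tape patch
# covering 400 px implies 0.01 cm^2 per pixel.
uv = np.zeros((200, 200), dtype=bool)
tape = uv.copy(); tape[10:30, 10:30] = True        # 400-px marker
wound = uv.copy(); wound[80:140, 80:150] = True    # 4200-px wound region
print(wound_area_cm2(wound, tape, tape_area_cm2=4.0))  # -> 42.0 cm^2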
Although many 3D head pose estimation methods based on monocular vision can achieve an accuracy of 5°, reducing the number of required training samples and avoiding the use of any hardware parameters as input features remain among the biggest challenges in the field of head pose estimation. To address these challenges, the authors propose an accurate head pose estimation method which can act as an extension to facial key point detection systems. The basic idea is to use the normalised distances between key points as input features, and to use ℓ1‐minimisation to select a sparse set of training samples that reflects the mapping between the feature vector space and the head pose space. The linear combination of the head poses corresponding to these samples represents the head pose of the test sample. The experimental results show that the authors' method can achieve an accuracy of 2.6° without any extra hardware parameters or information about the subject. In addition, in the case of large head movement and varying illumination, the authors' method is still able to estimate the head pose.
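The sparse-representation idea can be sketched in a few lines: the features are normalised pairwise key-point distances, and an ℓ1-penalised fit selects a small set of training samples whose poses are blended into the estimate. This is a minimal illustration of the general approach rather than the authors' exact formulation; the feature construction, the use of scikit-learn's Lasso as the ℓ1 solver, and all parameter values (alpha, the 68-point layout) are assumptions.

import numpy as np
from sklearn.linear_model import Lasso

def keypoint_features(points):
    """Pairwise distances between 2D facial key points, normalised by their
    mean so the feature vector is scale-invariant (an illustrative choice)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    f = d[iu]
    return f / f.mean()

def estimate_pose(train_feats, train_poses, test_feat, alpha=1e-3):
    """Estimate yaw/pitch/roll as a sparse linear combination of training poses.

    train_feats : (n_train, n_features) feature vectors of training faces
    train_poses : (n_train, 3) yaw, pitch, roll in degrees
    test_feat   : (n_features,) feature vector of the test face
    """
    # l1-regularised least squares selects a sparse set of training samples
    # whose feature vectors approximate the test feature vector.
    lasso = Lasso(alpha=alpha, fit_intercept=False, positive=True, max_iter=10000)
    lasso.fit(train_feats.T, test_feat)   # columns act as dictionary atoms
    w = lasso.coef_
    if w.sum() == 0:                      # nothing selected: fall back to the mean pose
        return train_poses.mean(axis=0)
    w = w / w.sum()                       # normalise the combination weights
    return w @ train_poses

# Illustrative usage with random data (replace with key point detector output).
rng = np.random.default_rng(0)
train_pts = rng.normal(size=(50, 68, 2))           # 50 faces, 68 key points each
train_feats = np.stack([keypoint_features(p) for p in train_pts])
train_poses = rng.uniform(-45, 45, size=(50, 3))   # yaw, pitch, roll in degrees
print(estimate_pose(train_feats, train_poses, train_feats[0]))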
The rapid progress of photorealistic synthesis techniques has reached a critical point where the boundary between real and manipulated images starts to blur. Recently, a mega-scale deep face forgery dataset, ForgeryNet, comprising 2.9 million images and 221,247 videos, has been released. It is by far the largest publicly available dataset in terms of data scale, manipulations (7 image-level approaches, 8 video-level approaches), perturbations (36 independent and more mixed perturbations), and annotations (6.3 million classification labels, 2.9 million manipulated-area annotations, and 221,247 temporal forgery segment labels). This paper reports the methods and results of the ForgeryNet: Face Forgery Analysis Challenge 2021, which employs the ForgeryNet benchmark. Model evaluation is conducted offline on a private test set. A total of 186 participants registered for the competition, and 11 teams made valid submissions. We analyze the top-ranked solutions and present a discussion of future work directions.