Several researchers have criticized the standards of performing and reporting empirical studies in software engineering. To address this problem, Jedlitschka and Pfahl produced reporting guidelines for controlled experiments in software engineering, and pointed out that their guidelines needed evaluation. We agree that guidelines need to be evaluated before they can be widely adopted. The aim of this paper is to present the method we used to evaluate the guidelines and to report the results of our evaluation exercise. We suggest our evaluation process may be of more general use if reporting guidelines for other types of empirical study are developed. We used a reading method inspired by perspective-based and checklist-based reviews to perform a theoretical evaluation of the guidelines. The perspectives used were: Researcher, Practitioner/Consultant, Meta-analyst, Replicator, Reviewer and Author. Apart from the Author perspective, the reviews were based on a set of questions derived by brainstorming. A separate review was performed for each perspective. The review using the Author perspective considered each section of the guidelines sequentially. The reviews detected 44 issues where the guidelines would benefit from amendment or clarification, and 8 defects. Reporting guidelines need to specify what information goes into what section and avoid excessive duplication. The current guidelines need to be revised and then subjected to further theoretical and empirical validation. Perspective-based checklists are a useful validation method, but the practitioner/consultant perspective presents difficulties.
Summary: The difficulty of obtaining reliable individual identification of animals has limited researchers' ability to obtain quantitative data to address important ecological, behavioral, and conservation questions. Traditional marking methods placed animals at undue risk. Machine learning approaches for identifying species through analysis of animal images have proved successful, but many questions require a tool that identifies not only species but also individuals. Here, we introduce a system developed specifically for automated face detection and individual identification using deep learning methods, which works with both videos and still images and can be reliably applied to multiple species. The system was trained and tested with a dataset containing 102,399 images of 1,040 individuals across 41 primate species whose individual identity was known and 6,562 images of 91 individuals across four carnivore species. For primates, the system correctly identified individuals 94.1% of the time and could process 31 facial images per second.
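The abstract does not describe the network architecture, so the following is only a minimal, hypothetical sketch of the general detect-then-classify pipeline such systems use: a separate face detector produces face crops, and a CNN classifier maps each crop to one of the known individual identities. The ResNet-18 backbone, input size, and function names below are assumptions for illustration, not details taken from the authors' system.

    # Illustrative sketch only, NOT the authors' implementation:
    # classify a detected face crop into one of the known individuals.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    NUM_INDIVIDUALS = 1040  # known primate individuals in the dataset described above

    # Backbone CNN with a classification head over individual identities
    # (ResNet-18 is an assumption, chosen only for the sketch).
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_INDIVIDUALS)
    model.eval()

    # Typical preprocessing for face crops produced by a separate detection stage.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def identify(face_crop_pil):
        """Return the predicted individual ID for a single detected face crop (PIL image)."""
        x = preprocess(face_crop_pil).unsqueeze(0)   # shape: (1, 3, 224, 224)
        with torch.no_grad():
            logits = model(x)                        # shape: (1, NUM_INDIVIDUALS)
        return int(logits.argmax(dim=1))

In practice such a classifier would be trained on the labelled face crops (e.g. the 102,399 primate images mentioned above) with a standard cross-entropy loss; the reported 94.1% accuracy and 31 images per second figures come from the authors' system, not from this sketch.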