Introduction: The development of reporting guidelines over the past 20 years represents a major advance in scholarly publishing, with recent evidence showing positive impacts. Whilst over 350 reporting guidelines exist, few are specific to surgery. Here we describe the development of the STROCSS guideline (Strengthening the Reporting of Cohort Studies in Surgery).
Methods and analysis: We published our protocol a priori. Current guidelines for case series (PROCESS), cohort studies (STROBE) and randomised controlled trials (CONSORT) were analysed to compile a list of items, which served as baseline material for developing a checklist for surgical cohort studies. These items were then put to an expert panel of 74 surgeons and academics in a Delphi consensus exercise conducted via Google Forms.
Results: The Delphi exercise was completed by 62% (46/74) of participants. All items passed in a single round, yielding a STROCSS guideline consisting of 17 items.
Conclusion: We present the STROCSS guideline, a 17-item checklist for surgical cohort, cross-sectional and case-control studies. We hope its use will increase the transparency and reporting quality of such studies, and we encourage authors, reviewers, journal editors and publishers to adopt it.
To evaluate all simulation models for ophthalmology technical and non-technical skills training, and the strength of evidence to support their validity and effectiveness. A systematic search was performed using PubMed and Embase for studies published from inception to 01/07/2019. Studies were analysed according to training modality: virtual reality, wet-lab models, dry-lab models and e-learning. The educational impact of studies was evaluated using Messick's validity framework and McGaghie's model of translational outcomes for evaluating effectiveness. One hundred and thirty-one studies were included in this review, describing 93 different simulators. Fifty-three studies were based on virtual reality tools, 47 on wet-lab models, 26 on dry-lab models and 5 on e-learning. Only two studies provided evidence for all five sources of validity assessment. The models with the strongest validity evidence were the Eyesi Surgical, the Eyesi Direct Ophthalmoscope and the Eye Surgical Skills Assessment Test. Effectiveness ratings for simulator models were mostly limited to level 2 (contained effects), with the exception of the Sophocle vitreoretinal surgery simulator, which was demonstrated at level 3 (downstream effects), and the Eyesi, demonstrated at level 5 (target effects) for cataract surgery. A wide range of models has been described, but only the Eyesi has undergone comprehensive investigation. The main weakness is the poor quality of study design, with a predominance of descriptive reports showing limited validity evidence and few studies investigating the effects of simulation training on patient outcomes. More robust research is needed to enable effective implementation of simulation tools into current training curricula.
With the world's academic output currently standing at 2.5 million articles per year and doubling every 9 years, sifting the relevant from the irrelevant is vital for researchers, publishers, and funding bodies. Until recently, the influence of a published article was primarily measured by its citations, a slow process resulting in a long wait before the importance of an article is truly recognized. Views of the article (including PDF and HTML) are another measure of importance, but views can also accumulate slowly. Altmetrics are increasingly recognized tools that aim to measure the real-time reach and influence of an academic article. Altmetric scores quantify the digital attention an article receives across a multitude of online sources. Social media, Wikipedia, public policy documents, blogs, and mainstream news are tracked and screened by the Altmetric database, and references to research outputs are traced back to their unique identifier codes. The Altmetric algorithm produces a weighted score to reflect the relative reach of each source: a blog post, for instance, is weighted differently from a mainstream news report. This allows the attention an individual article receives to be measured from the moment it is published. Altmetric scores enable potential readers to quickly filter the wealth of published scientific literature and identify articles that are generating interest.
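The weighting mechanism described above can be illustrated with a toy calculation. Note that the source weights and mention counts below are invented purely for illustration; Altmetric's actual weighting scheme differs and is maintained by Altmetric, not reproduced here.

```python
# Toy sketch of a weighted attention score: each mention source
# contributes its count multiplied by a per-source weight.
# These weights are ILLUSTRATIVE ONLY, not Altmetric's real values.
SOURCE_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "policy_document": 3.0,
    "wikipedia": 3.0,
    "twitter": 0.25,
}

def attention_score(mentions):
    """Return the weighted sum of mentions across tracked sources.

    `mentions` maps a source name to the number of times the article's
    unique identifier was mentioned in that source. Unknown sources
    contribute nothing.
    """
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Example: one news story, two blog posts, and forty tweets.
score = attention_score({"news": 1, "blog": 2, "twitter": 40})
print(score)  # 8 + 10 + 10 = 28.0
```

The key design point is that the score reflects *reach*, not volume alone: forty tweets here count the same as two blog posts, because each source type carries a different weight.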