Objective: Gut dysbiosis has been implicated in the pathogenesis of chronic kidney disease (CKD). Restoring the gut microbiota with prebiotic, probiotic, and synbiotic supplementation has emerged as a potential therapeutic intervention but has not been systematically evaluated in the CKD population. Design and Methods: This is a systematic review. A structured search of MEDLINE, CINAHL, EMBASE, the Cochrane Central Register of Controlled Trials, and the International Clinical Trials Registry Search Portal was conducted for articles published from inception to July 2017. Included studies were randomized controlled trials investigating the effects of prebiotic, probiotic, and/or synbiotic supplementation (≥1 week) on uremic toxins, microbiota profile, and clinical and patient-centered outcomes in adults and children with CKD. Results: Sixteen studies investigating 645 adults met the inclusion criteria; 5 investigated prebiotics, 6 probiotics, and 5 synbiotics. The quality of the studies (Grades of Recommendation, Assessment, Development and Evaluation) ranged from moderate to very low. Prebiotic, probiotic, and synbiotic supplementation may have led to little or no difference in serum urea (9 studies, 345 participants:
Background
Systematic reviews (SRs) are considered the highest level of evidence for answering research questions; however, they are time- and resource-intensive.
Objective
When SR tasks are done manually using standard methods versus with automated tools, (1) what is the difference in the time to complete each SR task and (2) what is the impact on its error rate?
Methods
A case study compared specific tasks done during the conduct of an SR on prebiotic, probiotic, and synbiotic supplementation in chronic kidney disease. Two participants (the manual team) conducted the SR using current methods, comprising a total of 16 tasks. Another two participants (the automation team) conducted those tasks for which a systematic review automation (SRA) tool was available, comprising a total of six tasks. The time taken and error rate of the six tasks completed by both teams were compared.
Results
The approximate time for the manual team to produce a draft of the background, methods, and results sections of the SR was 126 hours. For the six tasks in which times were compared, the manual team spent 2493 minutes (42 hours), compared with 708 minutes (12 hours) for the automation team. The manual team had a higher error rate in two of the six tasks: for Task 5 (run the systematic search), the manual team made eight errors versus three by the automation team; for Task 12 (assess the risk of bias), 25 of the manual team's assessments differed from a reference standard, compared with 20 for the automation team. The manual team had a lower error rate in one of the six tasks: for Task 6 (deduplicate search results), the manual team removed one unique study and missed zero duplicates, whereas the automation team removed two unique studies and missed seven duplicates. Error rates were similar for the two remaining compared tasks, Task 7 (screen the titles and abstracts) and Task 9 (screen the full text), in which neither team excluded any relevant studies. One task, Task 8 (find the full text), could not be compared between groups.
Conclusions
For the majority of SR tasks where an SRA tool was used, the time required to complete that task was reduced for novice researchers while methodological quality was maintained.