Aim: The need for predictive and prognostic biomarkers in colorectal carcinoma (CRC) has ushered in an era of increasing use of artificial intelligence (AI) models. We investigated the expression of Claudin-7, a tight junction component that plays a crucial role in maintaining the integrity of normal epithelial mucosa, and its potential prognostic role in advanced CRC, by comparing statistical and AI algorithms. Methods: Claudin-7 immunohistochemical expression was evaluated in the tumor core and invasion front of CRCs from 84 patients and correlated with clinicopathological parameters and survival. The results were compared with those obtained using various AI algorithms. Results: Kaplan–Meier univariate survival analysis showed a significant correlation between survival and Claudin-7 intensity in the invasive front (p = 0.00), with higher expression associated with worse prognosis, whereas Claudin-7 intensity in the tumor core had no impact on survival. In contrast, the AI models could not predict the same outcome on survival. Conclusion: The study showed through statistical means that immunohistochemical overexpression of Claudin-7 in the tumor invasive front may represent a poor prognostic factor in advanced-stage CRC, in contrast to the AI models, which could not predict the same outcome, probably because of the small number of patients in our cohort.
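For readers who want to reproduce this kind of analysis, the following is a minimal sketch of a Kaplan–Meier comparison with a log-rank test, using the Python lifelines library. The file name, the column names (`months`, `event`, `front_intensity`), and the two-level low/high intensity grouping are illustrative assumptions; the study's exact scoring scheme is not given in the abstract.

```python
# Minimal sketch of a Kaplan-Meier comparison between two Claudin-7 intensity
# groups. Column names and the low/high grouping are illustrative assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("claudin7_cohort.csv")  # hypothetical input file

low = df[df["front_intensity"] == "low"]
high = df[df["front_intensity"] == "high"]

# Fit and summarize a Kaplan-Meier curve per group.
kmf = KaplanMeierFitter()
for label, group in [("low Claudin-7", low), ("high Claudin-7", high)]:
    kmf.fit(group["months"], event_observed=group["event"], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

# Log-rank test for a survival difference between the two groups.
result = logrank_test(
    low["months"], high["months"],
    event_observed_A=low["event"], event_observed_B=high["event"],
)
print(f"log-rank p-value: {result.p_value:.4f}")
```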
Background. Retraction of problematic scientific articles after publication is one of the mechanisms available to publishers for correcting the literature. The market volume and the business model justify publishers' ethical involvement in the post-publication quality control (PPQC) of human-health-related articles. The limited information about this subject led us to analyze PubMed-retracted articles and the main retraction reasons, grouped by publisher. We propose a score to appraise publishers' PPQC results. The dataset used for this article consists of 4844 PubMed-retracted papers published between 1 January 2009 and 31 December 2020. Methods. An SDTP score was constructed from the dataset. The calculation formula includes several parameters: speed (article exposure time (ET)), detection rate (percentage of articles whose retraction is initiated by the editor/publisher/institution without the authors' participation), transparency (percentage of retracted articles available online and the clarity of the retraction notes), and precision (mention of authors' responsibility and percentage of retractions for reasons other than editorial errors). Results. The 4844 retracted articles were published in 1767 journals by 366 publishers, an average of 2.74 retracted articles per journal. Forty-five publishers have more than 10 retracted articles each, accounting for 88% of all retracted papers and 79% of the journals. Combining our data with data from another study shows that fewer than 7% of PubMed journals retracted at least one article. Only 10.5% of the retraction notes mentioned the individual responsibility of the authors. For nine of the top 11 publishers, 2020 was the year with the largest number of retracted articles. Retraction-reason analysis shows considerable differences between publishers in the articles' ET: median values between 9 and 43 months (mistakes), 9 and 73 months (images), and 10 and 42 months (plagiarism and overlap). From 2018 to 2020, the SDTP score shows an improvement in the PPQC of four of the top 11 publishers and a narrowing of the gap between first and eleventh place. The group of the remaining 355 publishers also shows a positive trend in its SDTP score. Conclusions. Publishers need to be actively and measurably involved in the post-publication evaluation of scientific products. The introduction of reporting standards for retraction notes, and of replicable indicators for quantifying publishing QC, can help increase the overall quality of the scientific literature.
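The abstract names the four SDTP components but not their weighting or normalization. The sketch below is a minimal, hypothetical Python rendering of such a composite score: the equal weights, the 0-1 normalization, and the `PublisherStats` fields are assumptions made for illustration, not the paper's actual formula.

```python
# Illustrative SDTP-style composite score. The abstract names the four
# components (speed, detection, transparency, precision) but not the exact
# formula, so the normalization and equal weighting below are assumptions.
from dataclasses import dataclass

@dataclass
class PublisherStats:
    median_exposure_months: float     # speed: lower exposure time is better
    editor_initiated_pct: float       # detection: retractions initiated without authors, 0-100
    notes_available_pct: float        # transparency: retracted articles/notes online, 0-100
    author_responsibility_pct: float  # precision: notes naming author responsibility, 0-100

def sdtp_score(s: PublisherStats, max_exposure_months: float = 73.0) -> float:
    """Equal-weight composite on a 0-1 scale (illustrative only)."""
    speed = 1.0 - min(s.median_exposure_months / max_exposure_months, 1.0)
    detection = s.editor_initiated_pct / 100.0
    transparency = s.notes_available_pct / 100.0
    precision = s.author_responsibility_pct / 100.0
    return (speed + detection + transparency + precision) / 4.0

# Example with made-up numbers for a hypothetical publisher:
print(round(sdtp_score(PublisherStats(24.0, 55.0, 90.0, 10.5)), 3))
```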