Purpose: To reassess the diagnostic value of the "draft" guidelines for the clinical diagnosis of acute uncomplicated cystitis (AC) recently proposed by the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA). Methods: Data from 517 female respondents (patients with acute cystitis and controls) derived from the e-USQOLAT database were analyzed and used to validate the proposed FDA and EMA "draft" guidelines against the Acute Cystitis Symptom Score (ACSS) questionnaire. The diagnostic value of the proposals concerning signs, symptoms, and their severity was assessed and compared. Results: The six "typical" symptoms of the ACSS were strongly associated with the diagnosis of AC. The number of positive "typical" symptoms differed significantly between patients and controls: median 5 (IQR 4-6) vs 1 (IQR 0-3), respectively. The scored severity of "typical" symptoms also differed significantly between patients and controls: median (IQR) 10 (7-13) vs 1 (0-4), respectively. The best balance between sensitivity and specificity was achieved with an ACSS "Typical" domain cutoff of 6 points or more, followed by the approach proposed by the FDA and EMA, justifying the use of the ACSS as a criterion for the clinical diagnosis of AC. Conclusions: Not only the presence but also the severity of symptoms is important for an accurate diagnosis of AC. The ACSS, even without urinalysis, is at least as favourable as the draft diagnostic proposals of the FDA and EMA. The ACSS can be recommended for epidemiological and interventional studies, and it allows women to self-diagnose AC, which also makes the ACSS cost-effective for healthcare.
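To illustrate the cutoff rule reported above, the following Python sketch sums the six "typical" ACSS items and applies the threshold of 6 points or more. The item names and the per-item severity scale (0-3) are assumptions made for illustration; only the cutoff itself is taken from the abstract.

```python
# Hypothetical sketch of the ACSS "Typical" domain cutoff rule described above.
# Item names and the assumed 0-3 severity scale per symptom are illustrative;
# only the cutoff (summary score >= 6) comes from the abstract.

TYPICAL_ITEMS = [
    "frequency", "urgency", "dysuria",
    "incomplete_emptying", "suprapubic_pain", "visible_blood",
]

def typical_domain_score(ratings: dict) -> int:
    """Sum the severity ratings of the six 'typical' items (missing items count as 0)."""
    return sum(ratings.get(item, 0) for item in TYPICAL_ITEMS)

def suggests_acute_cystitis(ratings: dict, cutoff: int = 6) -> bool:
    """Apply the cutoff reported to give the best sensitivity/specificity balance."""
    return typical_domain_score(ratings) >= cutoff

# Example: moderate dysuria and frequency, mild urgency and suprapubic pain
example = {"dysuria": 2, "frequency": 2, "urgency": 1, "suprapubic_pain": 1}
print(suggests_acute_cystitis(example))  # True (summary score = 6)
```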
Purpose: Since symptomatic, non-antibiotic therapy has become an alternative approach to treating acute cystitis (AC) in women, suitable patient-reported outcome measures (PROMs) are urgently needed. The aim of this part II of a larger noninterventional case-control study was the additional assessment of the ACSS as a suitable PROM. Methods: Data from 134 female patients with diagnosed acute uncomplicated cystitis were included in the current analysis if they had (1) a summary score of the "Typical" domain of 6 or more; (2) at least one follow-up evaluation after the baseline visit; and (3) no missing values in the ACSS questionnaire data. Six different predefined thresholds based on the scoring of the ACSS items were evaluated to define "clinical cure", also taking the draft FDA and EMA guidelines into account. Results: Of the six thresholds tested, a summary score of the five typical symptoms of 5 or lower, with no single symptom scored higher than 1 (mild) and without visible blood in the urine, with or without the inclusion of QoL items, was favoured; this threshold could also be partially adapted to the draft FDA and EMA guidelines. The patient's overall clinical assessment ("Dynamic" domain) alone was not sensitive enough to serve as a suitable PROM. Conclusions: Scoring of symptom severity is needed not only for diagnosis but also for a PROM defining "clinical cure" after any intervention, and it could be combined with QoL items. The results of the study demonstrate that the ACSS questionnaire has the potential to be used as a suitable PROM and should be tested further in prospective clinical studies.
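The favoured "clinical cure" definition can be expressed as a simple rule. The sketch below is a hypothetical illustration: the item names and per-item scale are assumed, while the thresholds themselves (summary score of the five typical symptoms ≤ 5, no single symptom above "mild", no visible blood in the urine) come from the abstract.

```python
# Hypothetical sketch of the favoured "clinical cure" threshold described above.
# Item names and the per-item severity scale are assumptions for illustration;
# the numeric thresholds are taken from the abstract.

FIVE_TYPICAL_ITEMS = [
    "frequency", "urgency", "dysuria",
    "incomplete_emptying", "suprapubic_pain",
]

def clinically_cured(ratings: dict, visible_blood: int) -> bool:
    scores = [ratings.get(item, 0) for item in FIVE_TYPICAL_ITEMS]
    return (
        sum(scores) <= 5                 # summary score of the five typical symptoms
        and max(scores, default=0) <= 1  # no symptom worse than 1 (mild)
        and visible_blood == 0           # no visible blood in urine
    )

# Example: residual mild frequency and dysuria after treatment
print(clinically_cured({"frequency": 1, "dysuria": 1}, visible_blood=0))  # True
```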
Pathologic examination of prostate biopsies is time-consuming because of the large number of slides per case. In this retrospective study, we validated a deep learning-based classifier for prostate cancer (PCA) detection and Gleason grading (AI tool) in biopsy samples. Five external cohorts of patients with multifocal prostate biopsies from high-volume pathology institutes were analyzed. A total of 5922 H&E sections, representing 7473 biopsy cores from 423 patient cases and digitized using three scanners, were assessed for tumor detection. Two tumor-bearing datasets (n = 227 and n = 159 cores) were graded by an international group of pathologists, including expert urologic pathologists (n = 11), to validate the Gleason grading classifier. The sensitivity, specificity, and NPV for the detection of tumor-bearing biopsies ranged from 0.971 to 1.000, 0.875 to 0.976, and 0.988 to 1.000, respectively, across the different test cohorts. In several biopsy slides, the AI tool correctly detected tumor tissue that had initially been missed by pathologists. Most false-positive misclassifications represented lesions suspicious for carcinoma or cancer mimickers. The quadratically weighted kappa values for Gleason grading agreement of single pathologists were 0.62–0.80 (0.77 for the AI tool) and 0.64–0.76 (0.72 for the AI tool) for the two grading datasets, respectively. In cases where the pathologists reached a grading consensus, the kappa values for the AI tool were 0.903 and 0.855. The PCA detection classifier showed high accuracy during external validation, independent of the institute and scanner used, and the levels of Gleason grading agreement achieved by the AI tool were comparable to those of experienced genitourinary pathologists.
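For readers unfamiliar with the agreement metric used above, the minimal sketch below computes a quadratically weighted Cohen's kappa between two raters (for example, a pathologist and the AI tool) over Gleason grade groups, using scikit-learn. The example labels are invented for illustration and do not reproduce the study data.

```python
# Minimal sketch of the agreement metric named above: quadratically weighted
# Cohen's kappa between two raters assigning Gleason grade groups (1-5).
# The label lists below are illustrative, not study data.

from sklearn.metrics import cohen_kappa_score

pathologist = [1, 2, 2, 3, 4, 5, 3, 2, 1, 4]  # grade group per biopsy core
ai_tool     = [1, 2, 3, 3, 4, 5, 2, 2, 1, 5]

kappa = cohen_kappa_score(pathologist, ai_tool, weights="quadratic")
print(f"quadratically weighted kappa: {kappa:.2f}")
```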