Purpose: We aimed to develop an ovarian cancer-specific predictive framework for clinical stage, histotype, residual tumor burden, and prognosis using machine learning methods based on multiple biomarkers.

Experimental Design: Overall, 334 patients with epithelial ovarian cancer (EOC) and 101 patients with benign ovarian tumors were randomly assigned to "training" and "test" cohorts. Seven supervised machine learning classifiers, including Gradient Boosting Machine (GBM), Support Vector Machine, Random Forest (RF), Conditional RF (CRF), Naïve Bayes, Neural Network, and Elastic Net, were used to derive diagnostic and prognostic information from 32 parameters commonly available from pretreatment peripheral blood tests, plus age.

Results: Machine learning techniques were superior to conventional regression-based analyses in predicting multiple clinical parameters pertaining to EOC. Ensemble methods combining weak decision trees, such as GBM, RF, and CRF, showed the best performance in EOC prediction. With RF, the highest accuracy and area under the ROC curve (AUC) for segregating EOC from benign ovarian tumors were 92.4% and 0.968, respectively, and the highest accuracy and AUC for predicting clinical stage were 69.0% and 0.760, respectively. High-grade serous and mucinous histotypes of EOC could be predicted preoperatively with RF. An ordinal RF classifier could distinguish complete resection from other outcomes. Unsupervised clustering analysis identified subgroups among early-stage EOC patients with significantly worse survival.

Conclusions: Machine learning systems can provide critical diagnostic and prognostic predictions for patients with EOC before initial intervention, and the use of predictive algorithms may facilitate personalized treatment options through pretreatment stratification of patients.
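The AUC values reported above summarize how well a classifier's scores separate EOC from benign cases. As a minimal sketch of what that metric means, the AUC can be computed directly as the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one; the labels and scores below are illustrative toy data, not the study's patients.

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored higher than a negative one
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative toy data (not the study's cohort): 1 = EOC, 0 = benign tumor.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]  # hypothetical classifier scores
print(roc_auc(labels, scores))  # 8 of 9 positive/negative pairs are ranked correctly
```

An AUC of 0.968, as reported for RF, means nearly every EOC/benign pair would be ranked correctly by the classifier's score.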
In 2020, Olympus Medical Systems Corporation introduced Texture and Color Enhancement Imaging (TXI) as a new image-enhanced endoscopy modality. This study aimed to evaluate the visibility of neoplasms and mucosal atrophy in the upper gastrointestinal tract with TXI. We evaluated 72 images of 12 gastric neoplasms and 60 images of 20 gastric atrophic/nonatrophic mucosae. The visibility of gastric mucosal atrophy and gastric neoplasm was assessed by six endoscopists using a previously reported visibility scale (1 = poor to 4 = excellent). Color differences between gastric mucosal atrophy and nonatrophic mucosa, and between gastric neoplasm and adjacent areas, were assessed using the International Commission on Illumination L*a*b* color space system. The visibility of mucosal atrophy and gastric neoplasm was significantly improved in TXI mode 1 compared with white-light imaging (WLI) (visibility score: 3.8 ± 0.5 vs. 2.8 ± 0.9, p < 0.01 for mucosal atrophy; 2.8 ± 1.0 vs. 2.0 ± 0.9, p < 0.01 for gastric neoplasm). Regarding gastric atrophic and nonatrophic mucosae, TXI mode 1 had a significantly greater color difference than WLI (14.2 ± 8.0 vs. 8.7 ± 4.2, p < 0.01). TXI may be a useful observation modality in endoscopic screening of the upper gastrointestinal tract.
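The color differences above are distances in CIE L*a*b* space. As a minimal sketch, assuming the common CIE76 formulation (Euclidean distance between two L*a*b* coordinates), the computation looks like this; the coordinate values are hypothetical, not measurements from the study.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two points
    in CIE L*a*b* space."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical (L*, a*, b*) coordinates for atrophic vs. nonatrophic mucosa.
atrophic = (62.0, 18.0, 21.0)
nonatrophic = (55.0, 25.0, 12.0)
print(round(delta_e_cie76(atrophic, nonatrophic), 1))  # ~13.4
```

Larger ΔE values, as reported for TXI mode 1 versus WLI, correspond to color contrasts that are easier for an observer to distinguish.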
Shape is one of the most important traits of agricultural products because of its relationship to the quality, quantity, and value of the products. For strawberries, nine types of fruit shape were defined and classified by humans based on sample patterns of the nine types. In this study, we tested the classification of strawberry shapes by machine learning in order to increase the accuracy of the classification, and we introduce the concept of computerization into this field. Four types of descriptors were extracted from digital images of strawberries: (1) Measured Values (MVs), including the length of the contour line, the area, the fruit length and width, and the fruit width/length ratio; (2) the Ellipse Similarity Index (ESI); (3) Elliptic Fourier Descriptors (EFDs); and (4) Chain Code Subtraction (CCS). We used these descriptors for the classification test with the random forest approach, and eight of the nine shape types were classified with the combination MVs + CCS + EFDs. CCS is a descriptor that adds human knowledge to the chain codes, and it showed higher robustness in classification than the other descriptors. Our results suggest that machine learning has a high ability to classify fruit shapes accurately. We will attempt to increase the classification accuracy and apply these machine learning methods to other plant species.
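Chain-code descriptors such as the one underlying CCS encode a contour as a sequence of direction codes between successive boundary pixels. Below is a minimal sketch of 8-connected Freeman chain coding on an illustrative pixel path; the subtraction step that adds human knowledge in CCS is specific to the study and is not reproduced here.

```python
# 8-connected Freeman directions: 0 = east, numbered counterclockwise to 7.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain_code(contour):
    """Encode successive (x, y) boundary points as Freeman direction codes."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(contour, contour[1:])]

# Illustrative closed path around a unit square (mathematical y-up axes).
path = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(freeman_chain_code(path))  # [0, 2, 4, 6]
```

Because the code sequence depends only on relative moves, it is invariant to translation, which makes it a compact basis for shape comparison.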
The metacognitive feelings of an "aha!" experience are key to comprehending human subjective experience. However, the behavioral characteristics of this introspective cognition are not well known. An aha experience sometimes occurs when one gains a solution abruptly in problem solving, a subjective experience that subserves the conscious perception of an insight. We experimentally induced an aha experience in a hidden object recognition task, and analyzed whether this aha experience was associated with metacognitive judgments and behavioral features. We used an adaptation of Mooney images, i.e., morphing between a grayscale image and its binarized image in 100 steps, to investigate the phenomenology associated with insight: aha experience, confidence, suddenness, and pleasure. Here we show that insight solutions are more accurate than non-insight solutions. As a metacognitive judgment, participants' confidence in the correctness of their solution is higher in insight than in non-insight problem solving. The intensity of the aha feeling is positively correlated with subjective rating scores of both suddenness and pleasure, features that show marked signs of unexpected positive emotions. The strength of the aha experience is also positively correlated with response time from the onset of presentation until finding the solution, or with task difficulty, but only when solution confidence is sufficiently high. Our findings provide metacognitive and temporal conditions for an aha experience, characterizing features distinct from those supporting non-aha experience.
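The associations above (aha intensity with suddenness, pleasure, and response time) are bivariate correlations. As a minimal sketch of the underlying computation, assuming a standard Pearson coefficient, the following uses illustrative rating data, not the study's measurements.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative ratings (hypothetical 1-7 scales): aha intensity vs. suddenness.
aha = [2, 3, 5, 6, 7]
suddenness = [1, 4, 4, 6, 7]
print(round(pearson_r(aha, suddenness), 3))
```

A coefficient near +1, as in this toy example, corresponds to the positive associations the study reports between aha intensity and its subjective correlates.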