Purpose: Intravoxel incoherent motion (IVIM) analysis has attracted the interest of the clinical community due to its close relationship with microperfusion. Nevertheless, there is no clear reference protocol for its implementation; one open question is which b-value distribution to use. This study aimed to stress the importance of the sampling scheme and to show that, in healthy volunteers, an optimized b-value distribution decreases the variance of the IVIM parameters estimated in the brain compared with a regular distribution. Methods: Ten volunteers were included in this study; images were acquired on a 1.5T MR scanner. Two distributions of 16 b-values were used: one considered 'regular' because it is close to those used in other studies, and the other considered 'optimized' according to previous studies. IVIM parameters were fitted with the bi-exponential model using a two-step method. The analysis was carried out in ROIs defined with the Automated Anatomical Labeling atlas, and parameter distributions were compared in a total of 832 ROIs. Results: Maps with fewer speckles were obtained with the 'optimized' distribution. Coefficients of variation did not change significantly for the estimation of the diffusion coefficient D but decreased by approximately 39% for the pseudo-diffusion coefficient and by 21% for the perfusion fraction. Distributions of the fitted parameters were significantly different in 50% of the cases for the perfusion fraction, 80% of the cases for the pseudo-diffusion coefficient, and 17% of the cases for the diffusion coefficient. Observations across brain areas show that the range of average IVIM parameter values is smaller in the 'optimized' case. Conclusion: With an optimized distribution, the data sample the IVIM signal decay in a way that describes it better and yields less variance in the fitted parameters. The increased precision could help detect small variations in IVIM parameters.
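As an illustration of the two-step bi-exponential fit mentioned above, the minimal sketch below uses the standard IVIM model S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D): it first estimates D from the high-b tail of the decay, then fits the perfusion fraction f and the pseudo-diffusion coefficient D* with D held fixed. The b-value threshold (200 s/mm²), the synthetic signal, and all variable names are assumptions made for illustration and do not reproduce the study's protocol or data.

```python
# Hedged sketch of a two-step IVIM bi-exponential fit on a single ROI/voxel signal.
# Assumptions: b-value threshold of 200 s/mm^2, synthetic noise level, and plausible
# brain parameter values; none of these come from the study itself.
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, d_star, d):
    """Normalized IVIM signal: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

def fit_ivim_two_step(b_values, signal, b_threshold=200.0):
    """Step 1: estimate D from the mono-exponential tail (b >= threshold).
    Step 2: fix D and fit f and D* on the full decay curve."""
    s0 = signal[np.argmin(b_values)]
    s_norm = signal / s0

    # Step 1: log-linear fit of the high-b portion, where perfusion is negligible.
    high = b_values >= b_threshold
    slope, intercept = np.polyfit(b_values[high], np.log(s_norm[high]), 1)
    d_est = -slope

    # Step 2: non-linear fit of f and D* with D held fixed.
    (f_est, d_star_est), _ = curve_fit(
        lambda b, f, d_star: ivim_signal(b, f, d_star, d_est),
        b_values, s_norm,
        p0=[max(1.0 - np.exp(intercept), 0.05), 10e-3],
        bounds=([0.0, 1e-3], [0.5, 500e-3]),
    )
    return f_est, d_star_est, d_est

if __name__ == "__main__":
    # Synthetic example with plausible brain values (D and D* in mm^2/s).
    b = np.array([0, 10, 20, 40, 80, 110, 140, 170, 200,
                  300, 400, 500, 600, 700, 800, 900], dtype=float)
    true_f, true_dstar, true_d = 0.08, 12e-3, 0.8e-3
    s = ivim_signal(b, true_f, true_dstar, true_d) + np.random.normal(0, 0.005, b.size)
    print(fit_ivim_two_step(b, s))
```

In this kind of two-step scheme the high-b fit stabilizes D, so the subsequent non-linear fit only has to resolve f and D*, which is where an optimized b-value distribution mainly helps.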
Introduction: Artificial intelligence is widely used in the medical field, and machine learning has been increasingly applied in health care for prediction, diagnosis, and the determination of priority. Machine learning methods have been featured in several tools in the fields of obstetrics and childcare. This review aims to summarize the machine learning techniques used to predict perinatal complications. Objective: To identify the applicability and performance of machine learning methods used to identify pregnancy complications. Methods: A total of 98 articles were retrieved with the keywords “machine learning,” “deep learning,” and “artificial intelligence,” combined with terms related to perinatal complications (“complications in pregnancy,” “pregnancy complications”), from three scientific databases: PubMed, Scopus, and Web of Science. These were managed on the Mendeley platform and classified following the PRISMA method. Results: A total of 31 articles were selected after applying the inclusion and exclusion criteria. The features used to predict perinatal complications were primarily electronic medical records (48%), medical images (29%), and biological markers (19%), while 4% were based on other types of features, such as sensors and fetal heart rate. The main perinatal complications considered in the application of machine learning thus far are pre-eclampsia and prematurity. Across the 31 studies, a total of 16 complications were predicted. The main performance metric used was the AUC. The machine learning methods with the best results were the prediction of prematurity from medical images using a support vector machine, with an accuracy of 95.7%, and the prediction of neonatal mortality using XGBoost, with an accuracy of 99.7%. Conclusion: It is important to continue promoting this area of research and to develop machine learning solutions with multicenter clinical applicability in order to reduce perinatal complications. This systematic review contributes significantly to the specialized literature on artificial intelligence and women’s health.
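To make the evaluation setup concrete, the minimal sketch below trains a tabular classifier on synthetic record-style features and reports the AUC, the metric most of the reviewed studies use. The gradient-boosting classifier stands in for the XGBoost models mentioned above; the features, labels, and all parameter values are invented for illustration and are not taken from any reviewed study.

```python
# Minimal sketch of record-based complication prediction evaluated with AUC.
# Everything here (synthetic features, outcome model, classifier settings) is an
# illustrative assumption, not a reproduction of any study in the review.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic "electronic medical record" style features and a binary outcome.
n = 1000
X = rng.normal(size=(n, 8))          # e.g. age, BMI, blood pressure, lab values...
risk = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.5 * X[:, 3] - 0.3 * X[:, 5])))
y = rng.binomial(1, risk)            # 1 = complication occurred

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]

# AUC is the headline metric reported by most of the reviewed studies.
print(f"AUC = {roc_auc_score(y_test, proba):.3f}")
```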
Electric power forecasting plays a substantial role in the administration and balancing of modern power systems. For this reason, accurate predictions of service demand are needed to improve the scheduling of power generation and distribution and to reduce the risk of vulnerabilities in the integration of an electric power system. In the present study, a systematic literature review was conducted to identify which type of model tends to be the most accurate for electric power forecasting. The state of the art in accurate electric power forecasting was determined from the results reported in 257 accuracy tests from five geographic regions. Two classes of forecasting models were compared: classical statistical or mathematical (MSC) models and machine learning (ML) models. Furthermore, hybrid models that have made significant contributions to electric power forecasting are identified, and a case study is presented to demonstrate their good performance compared with traditional models. Among our main findings, we conclude that forecasting errors are minimized by reducing the time horizon, that ML models that consider various sources of exogenous variability tend to have better forecast accuracy, and that the accuracy of forecasting models has significantly increased over the last five years.
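A minimal sketch of the kind of comparison behind the exogenous-variability finding is shown below: a short-horizon demand forecast built from lagged values only versus one that also uses an assumed exogenous variable (temperature). The synthetic series, the random forest regressor, and the MAPE metric are assumptions for illustration; they do not reproduce the review's case study or its models.

```python
# Hedged sketch: effect of adding an exogenous variable to a short-horizon load forecast.
# The synthetic demand/temperature series and the random forest are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)

# Synthetic hourly demand: daily cycle + temperature-driven component + noise.
hours = np.arange(24 * 90)                                   # 90 days of hourly data
temperature = 20 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
demand = (100 + 15 * np.sin(2 * np.pi * hours / 24 - 1)
          + 2.0 * temperature + rng.normal(0, 3, hours.size))

# Features: previous-hour load only vs previous-hour load + temperature.
lag1 = demand[:-1]
X_endog = lag1.reshape(-1, 1)
X_exog = np.column_stack([lag1, temperature[1:]])
y = demand[1:]

split = int(0.8 * y.size)
for name, X in [("lags only", X_endog), ("lags + temperature", X_exog)]:
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:split], y[:split])
    err = mean_absolute_percentage_error(y[split:], model.predict(X[split:]))
    print(f"{name}: MAPE = {100 * err:.2f}%")
```

On data of this kind, the model that sees the exogenous driver typically reports a lower MAPE, which is the pattern the review attributes to ML models that exploit exogenous variability.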
Lima is considered one of the cities with the highest air pollution in Latin America. Institutions such as DIGESA, PROTRANSPORTE, and SENAMHI are in charge of permanently monitoring air quality; therefore, an air quality visualization system must manage large amounts of data on different concentrations. In this study, a spatio-temporal visualization approach was developed for exploring PM10 concentration data in Metropolitan Lima, in which the spatial behavior of hourly PM10 concentrations is analyzed at different time scales using basic and specialized charts. The results show that the stations located on the east side of the metropolitan area had the highest concentrations, in contrast to the stations located in the center and north, which reported better air quality. Regarding the temporal variation, the station with the highest biannual and annual average PM10 was the HCH station. The highest PM10 concentrations were registered in 2018, during the summer, particularly in March, with daily averages reaching 435 µg/m³. During the study period, CRB was the station that recorded the lowest concentrations and the only one that met the Environmental Quality Standard for air quality. The proposed approach sets out a sequence of steps for producing charts over increasingly specific time periods according to their relevance, together with statistical analyses, such as the dynamic temporal correlation, that provide a detailed visualization of the spatio-temporal variations in PM10 concentrations. Furthermore, it was concluded that the meteorological variables do not show a causal relationship with PM10 levels; rather, the particulate matter concentrations are related to the urban characteristics of each district.
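The sketch below illustrates, on assumed data, the two computational steps this kind of exploration relies on: aggregating hourly concentrations to increasingly coarse time scales and computing a rolling ("dynamic") correlation between stations. The two synthetic station series (labeled HCH and CRB after the stations named above), the 30-day window, and all numeric values are illustrative assumptions, not the study's data or its exact correlation method.

```python
# Hedged sketch of hourly-to-monthly aggregation and a rolling correlation between
# two hypothetical PM10 stations. All values and labels are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Synthetic hourly PM10 (ug/m3) for two hypothetical stations over one year.
idx = pd.date_range("2018-01-01", periods=24 * 365, freq="h")
base = 60 + 20 * np.sin(2 * np.pi * np.arange(idx.size) / (24 * 365))   # seasonal cycle
pm10 = pd.DataFrame({
    "HCH": base + 25 + rng.normal(0, 15, idx.size),
    "CRB": base - 20 + rng.normal(0, 10, idx.size),
}, index=idx).clip(lower=0)

# Increasingly coarse time scales: hourly -> daily -> monthly means.
daily = pm10.resample("D").mean()
monthly = pm10.resample("MS").mean()

# Rolling 30-day correlation between the two stations' daily averages,
# a simple form of dynamic temporal correlation.
rolling_corr = daily["HCH"].rolling(30).corr(daily["CRB"])

print(monthly.round(1).head())
print(rolling_corr.dropna().describe())
```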