In collaborative learning situations, monitoring is needed to maintain common progress toward shared goals. The present study aimed to analyze group-level monitoring events, as well as groups' reactions to these events, to identify instances of adaptive regulation and maladaptive behavior. Three dimensions of monitoring events were qualitatively coded from video data: the monitoring target, valence, and phase. These dimensions helped identify critical moments in the collaborative process when regulation was needed. By examining what kind of monitoring the groups engaged in, and how the groups progressed after the need for regulation arose, different types of adaptive regulation and maladaptive behavior were distinguished. In addition, group-level physiological state transitions in heart rate were explored to see whether changes in regulation (adaptive regulation and maladaptive behavior) were reflected in the state transitions. Nine groups of three students each participated in a collaborative exam for an advanced high school physics course, during which video and heart rate data were collected. The results showed that on-track sequences were the most common, followed by adaptive sequences. The temporality of these sequences was examined, and four categories of group progress are described with case examples. A correlation analysis showed that physiological state transitions were positively correlated with on-track sequences. The opportunities and limitations of using the three dimensions of monitoring and heart-rate-based physiological state transitions to study adaptive regulation are discussed.
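As an illustration of the kind of group-level correlation analysis described above, the sketch below correlates per-group counts of heart-rate state transitions with counts of on-track sequences. The counts and the choice of Spearman's rank correlation are illustrative assumptions only, not the study's data or its exact statistical method.

```python
# Hedged, illustrative sketch: correlating per-group counts of physiological
# state transitions with counts of on-track regulation sequences.
# The numbers below are invented for illustration; they are NOT the study's data.
from scipy.stats import spearmanr

# Hypothetical counts for nine groups (one value per group).
state_transitions = [12, 8, 15, 10, 7, 14, 9, 11, 13]
on_track_sequences = [20, 14, 25, 18, 12, 23, 15, 19, 22]

rho, p_value = spearmanr(state_transitions, on_track_sequences)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```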
Background: Magnetic resonance fingerprinting (MRF) is a method for speeding up the acquisition of quantitative MRI data. However, MRF does not usually produce the contrast-weighted images required by radiologists, which limits the achievable reduction in total scan time. Synthesizing contrast-weighted images from MRF data could significantly decrease imaging time. Purpose: To improve the clinical utility of MRF by synthesizing contrast-weighted MR images from the quantitative data provided by MRF, using U-Nets trained for the synthesis task with L1 and perceptual loss functions and their combinations. Study Type: Retrospective. Population: Knee joint MRI data from 184 subjects of the Northern Finland 1986 Birth Cohort (ages 33–35; gender distribution not available). Field Strength and Sequence: 3 T; multislice MRF; proton density (PD)-weighted 3D-SPACE (sampling perfection with application-optimized contrasts using different flip angle evolution); fat-saturated T2-weighted 3D-SPACE; water-excited double-echo steady state (DESS). Assessment: The data were divided into training, validation, test, and radiologist's assessment sets as follows: 136 subjects for training, 3 for validation, 3 for testing, and 42 for the radiologist's assessment. The synthetic and target images were evaluated by two blinded musculoskeletal radiologists using a 5-point Likert scale and with quantitative error metrics. Statistical Tests: Friedman's test with post hoc Wilcoxon signed-rank tests and the intraclass correlation coefficient; a statistical cutoff of P < 0.05, adjusted by Bonferroni correction as necessary, was used. Results: The trained networks could synthesize conventional images with high image quality (Likert scores 3–4 on a 5-point scale). Qualitatively, the best synthetic images were produced with the combination of L1 and perceptual loss functions and with perceptual loss alone, while L1 loss alone led to significantly poorer image quality (Likert scores below 3). Interreader and intrareader agreement were high (0.80 and 0.92, respectively) and significant. However, quantitative image quality metrics indicated the best performance for the pure L1 loss. Data Conclusion: Synthesizing high-quality contrast-weighted images from MRF data using deep learning is feasible. However, more studies are needed to validate the diagnostic accuracy of these synthetic images. Evidence Level: 4. Technical Efficacy: Stage 1.
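As a rough illustration of the loss setup named above (a pixel-wise L1 term combined with a perceptual term), the sketch below pairs L1 loss with a VGG16-feature-based perceptual loss in PyTorch. The VGG16 backbone, the layer cutoff, and the weighting factor are assumptions for illustration; the study's exact network and loss configuration are not reproduced here.

```python
# A minimal sketch of a combined L1 + perceptual loss for image synthesis.
# The backbone, layer choice, and weighting factor are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class CombinedL1PerceptualLoss(nn.Module):
    def __init__(self, perceptual_weight=0.1):
        super().__init__()
        # Frozen VGG16 feature extractor for the perceptual term
        # (assumed backbone; input normalization is omitted for brevity).
        self.features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.l1 = nn.L1Loss()
        self.perceptual_weight = perceptual_weight

    def forward(self, synthetic, target):
        # Pixel-wise L1 term on the images themselves.
        pixel_loss = self.l1(synthetic, target)
        # Perceptual term: L1 distance between VGG feature maps;
        # single-channel MR images are repeated to 3 channels for VGG.
        syn3 = synthetic.repeat(1, 3, 1, 1)
        tgt3 = target.repeat(1, 3, 1, 1)
        perceptual_loss = self.l1(self.features(syn3), self.features(tgt3))
        return pixel_loss + self.perceptual_weight * perceptual_loss
```

In such a setup, increasing the perceptual weight typically favors visually sharper textures over strict pixel-wise fidelity, which is one plausible reading of why the radiologists preferred the perceptual variants while the quantitative metrics favored pure L1.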
In this work, we study convolutional neural network encoder-decoder architectures with pre-trained encoder weights for breast mass segmentation from digital screening mammograms. To automatically detect breast cancer, one fundamental task is the segmentation of potentially abnormal regions. Our objective was to find out whether encoder weights trained for breast cancer evaluation, compared with weights learned from natural images, yield a better model initialization and, in turn, improved segmentation results. We applied transfer learning and initialized the encoders, ResNet34 and ResNet22, with ImageNet weights and with weights learned from breast cancer classification, respectively. A large clinically realistic Finnish mammography screening dataset was used for model training and evaluation, and the independent Portuguese INbreast dataset was used for further evaluation of the models. Five-fold cross-validation was applied during training. A soft Focal Tversky loss was used as the loss function during model training. The Dice score and Intersection over Union were used to quantify the similarity between the annotated and automatically produced segmentation masks. The best performing encoder-decoder, a ResNet34 encoder paired with a U-Net decoder, yielded a Dice score (mean±SD) of 0.7677±0.2134 on the Finnish dataset, and a ResNet22 encoder paired with a U-Net decoder yielded 0.8430±0.1091 on the INbreast dataset. No large differences in segmentation accuracy were found between encoders initialized with weights pre-trained on breast cancer evaluation and those pre-trained on natural image classification.
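To make the loss and metrics named above concrete, the sketch below shows a generic soft Focal Tversky loss together with Dice and IoU computations. The alpha, beta, and gamma values are common defaults chosen for illustration, not the study's settings.

```python
# Illustrative sketch of a soft Focal Tversky loss and Dice / IoU metrics.
# Hyperparameter values are assumptions, not the study's configuration.
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """pred: sigmoid probabilities, target: binary mask, both shaped (N, 1, H, W)."""
    tp = (pred * target).sum(dim=(1, 2, 3))
    fn = ((1 - pred) * target).sum(dim=(1, 2, 3))
    fp = (pred * (1 - target)).sum(dim=(1, 2, 3))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()

def dice_and_iou(pred_mask, target, eps=1e-6):
    """Binary masks of the same shape; returns (Dice, IoU)."""
    intersection = (pred_mask * target).sum()
    dice = (2 * intersection + eps) / (pred_mask.sum() + target.sum() + eps)
    union = pred_mask.sum() + target.sum() - intersection
    iou = (intersection + eps) / (union + eps)
    return dice.item(), iou.item()
```

With alpha greater than beta, false negatives are penalized more heavily than false positives, a common choice when the target masses occupy only a small fraction of the image.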
In this study, deep convolutional neural networks (DCNNs) are used to synthesize contrast-weighted magnetic resonance (MR) images from quantitative parameter maps of the knee joint obtained with magnetic resonance fingerprinting (MRF). The neural networks were trained using data from 142 patients for whom both standard MR images and quantitative MRF maps of the knee were available. The study demonstrates that synthesizing contrast-weighted images from MRF parameter maps with DCNNs is feasible. Furthermore, the study indicates a need to tune the dictionary used in MRF so that the parameter values expected in the target anatomy are well covered.
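The dictionary-coverage point above can be illustrated with a simple check of whether the T1/T2 values observed in the anatomy fall inside the dictionary's parameter grid. The value ranges and grid spacing below are illustrative assumptions only, not the study's dictionary.

```python
# Hedged sketch: checking MRF dictionary coverage against parameter values
# observed in the target anatomy. All ranges below are illustrative assumptions.
import numpy as np

def dictionary_coverage(t1_map, t2_map, t1_grid, t2_grid):
    """Fraction of voxels whose (T1, T2) values fall inside the dictionary grid."""
    inside = (
        (t1_map >= t1_grid.min()) & (t1_map <= t1_grid.max())
        & (t2_map >= t2_grid.min()) & (t2_map <= t2_grid.max())
    )
    return inside.mean()

# Example: a coarse dictionary grid (ms) and simulated knee parameter maps.
t1_grid = np.arange(100, 3001, 20)
t2_grid = np.arange(10, 501, 5)
t1_map = np.random.uniform(200, 3500, size=(128, 128))  # some voxels exceed the grid
t2_map = np.random.uniform(5, 300, size=(128, 128))
print(f"Coverage: {dictionary_coverage(t1_map, t2_map, t1_grid, t2_grid):.2%}")
```

A coverage fraction well below 1 would suggest extending the dictionary's parameter ranges, in line with the tuning need noted above.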