To develop a deep learning-based reconstruction framework for ultrafast and robust diffusion tensor imaging (DTI) and fiber tractography. Methods: SuperDTI was developed to learn the nonlinear relationship between diffusion-weighted images (DWIs) and the corresponding diffusion tensor parameter maps. It bypasses the tensor-fitting procedure, which is highly susceptible to noise and motion in the DWIs. The network was trained and tested using data sets from the Human Connectome Project and from patients with ischemic stroke. Results from SuperDTI were compared against widely used methods for tensor parameter estimation and fiber tracking. Results: Using training and testing data acquired with the same protocol and scanner, SuperDTI was shown to generate fractional anisotropy and mean diffusivity maps, as well as fiber tractography, from as few as six raw DWIs, with a quantification error of less than 5% in all white-matter and gray-matter regions of interest. It was robust to noise and motion in the testing data. Furthermore, the network trained on healthy-volunteer data showed no apparent reduction in lesion detectability when applied directly to stroke patient data. Conclusions: Our results demonstrate the feasibility of superfast DTI and fiber tractography using deep learning directly from as few as six DWIs, bypassing tensor fitting. Such a significant reduction in scan time may allow the inclusion of DTI in the clinical routine for many potential applications.
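The abstract does not detail the SuperDTI architecture; the sketch below only illustrates the core idea of mapping a small stack of raw DWIs directly to FA and MD maps with a convolutional network, skipping tensor fitting. The layer depth, channel widths, and names are illustrative assumptions, not the published design.

```python
# Minimal sketch (not the authors' SuperDTI architecture): a convolutional
# network that maps a stack of 6 diffusion-weighted images (DWIs) directly to
# fractional anisotropy (FA) and mean diffusivity (MD) maps, bypassing tensor
# fitting. Layer count, channel width, and names are illustrative assumptions.
import torch
import torch.nn as nn

class DwiToTensorMaps(nn.Module):
    def __init__(self, n_dwis: int = 6, n_maps: int = 2, width: int = 64):
        super().__init__()
        layers = [nn.Conv2d(n_dwis, width, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(4):  # a few hidden conv layers; depth is an assumption
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, n_maps, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, dwis: torch.Tensor) -> torch.Tensor:
        # dwis: (batch, 6, H, W) raw DWIs; output: (batch, 2, H, W) = FA, MD
        return self.net(dwis)

if __name__ == "__main__":
    model = DwiToTensorMaps()
    fake_dwis = torch.randn(1, 6, 128, 128)  # placeholder input
    fa_md = model(fake_dwis)                 # in training, compared against
    print(fa_md.shape)                       # tensor-fit FA/MD ground truth
```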
Background: MRI acceleration using deep learning (DL) convolutional neural networks (CNNs) is a novel technique with great promise. Increasing the number of convolutional layers may allow for more accurate image reconstruction. Studies evaluating the diagnostic interchangeability of DL-reconstructed knee magnetic resonance (MR) images are scarce. The purpose of this study was to develop a deep CNN (DCNN) with an optimal number of layers for accelerating knee magnetic resonance imaging (MRI) acquisition by 6-fold and to test the diagnostic interchangeability and image quality of nonaccelerated images versus images reconstructed with a 15-layer DCNN or a 3-layer CNN. Methods: For the feasibility portion of this study, 10 patients were randomly selected from the Osteoarthritis Initiative (OAI) cohort. For the interchangeability portion, 40 patients were randomly selected from the OAI cohort. Three readers assessed meniscal and anterior cruciate ligament (ACL) tears and cartilage defects using DCNN, CNN, and nonaccelerated images. Image quality was subjectively graded as nondiagnostic, poor, acceptable, or excellent. Interchangeability was tested by comparing the frequency of agreement when readers used both accelerated and nonaccelerated images with the frequency of agreement when readers used only nonaccelerated images. A noninferiority margin of 0.10 was used to ensure a type I error ≤5% and power ≥80%. A logistic regression model using generalized estimating equations was used to compare proportions; 95% confidence intervals (CIs) were constructed. Results: DCNN and CNN images were interchangeable with nonaccelerated images for all structures, with excess disagreement values ranging from -2.5% [95% CI: (-6.1, 1.1)] to 3.0% [95% CI: (-0.1, 6.1)]. The quality of DCNN images was graded higher than that of CNN images but lower than that of nonaccelerated images [excellent/acceptable quality: DCNN, 95% of cases (114/120); CNN, 60% (72/120); nonaccelerated, 97.5% (117/120)]. Conclusions: Six-fold accelerated knee images reconstructed with a DL technique are diagnostically interchangeable with nonaccelerated images and have acceptable image quality when using a 15-layer CNN.
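As a rough illustration of the interchangeability test described above, the sketch below compares disagreement rates between readings made with and without accelerated images against the 0.10 noninferiority margin. It uses a simple normal-approximation confidence interval rather than the GEE logistic regression used in the study, and all counts are made up.

```python
# Minimal sketch of the noninferiority logic only; the study itself used a GEE
# logistic regression. Counts passed below are hypothetical, not study data.
import math

def excess_disagreement_ci(agree_both, n_both, agree_nonaccel, n_nonaccel, z=1.96):
    """Difference in disagreement rates (readings using accelerated +
    nonaccelerated images vs. nonaccelerated-only readings) with an
    approximate 95% CI on that difference ("excess disagreement")."""
    p1 = 1 - agree_both / n_both          # disagreement when using both image sets
    p2 = 1 - agree_nonaccel / n_nonaccel  # disagreement with nonaccelerated only
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_both + p2 * (1 - p2) / n_nonaccel)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = excess_disagreement_ci(110, 120, 113, 120)  # hypothetical counts
print(f"excess disagreement = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
print("interchangeable (noninferior)" if hi < 0.10 else "not shown interchangeable")
```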
Developers create software branches for tentative feature addition and bug fixing, and periodically merge branches to release software with new features or repair patches. When the program edits from different branches textually overlap (i.e., textual conflicts), or the co-application of those edits leads to compilation or runtime errors (i.e., compiling or dynamic conflicts), it is challenging and time-consuming for developers to eliminate merge conflicts. Prior studies examined how conflicts were related to code smells or the software development process; tools were built to find and solve conflicts. However, some fundamental research questions are still not comprehensively explored, including (1) how conflicts were introduced, (2) how developers manually resolved conflicts, and (3) what conflicts cannot be handled by current tools. For this paper, we took a hybrid approach that combines automatic detection with manual inspection to reveal 204 merge conflicts and their resolutions in 15 open-source repositories. Our data analysis reveals three phenomena. First, compiling and dynamic conflicts are harder to detect, although current tools mainly focus on textual conflicts. Second, in the same merging context, developers usually resolved similar textual conflicts with similar strategies. Third, developers manually fixed most of the inspected compiling and dynamic conflicts by editing the merged version similarly to what they did for one of the branches. Our research reveals the challenges and opportunities for automatic detection and resolution of merge conflicts; it also sheds light on related areas such as systematic program editing and change recommendation. CCS Concepts: • Software and its engineering → Software maintenance tools; Maintaining software; Software evolution.
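For readers unfamiliar with the terminology, the snippet below illustrates what a textual conflict looks like after a merge attempt and how the two conflicting sides of a hunk can be extracted programmatically. The example file and parser are an illustration under my own assumptions, not tooling from the paper.

```python
# Illustration of a textual conflict: git leaves both sides of the overlapping
# edits in the merged file between conflict markers, and developers must choose
# or combine them. The parser simply extracts the two sides of each hunk.
CONFLICTED_FILE = """\
def greet(name):
<<<<<<< HEAD
    return "Hello, " + name.title()
=======
    return f"Hi, {name}!"
>>>>>>> feature-branch
"""

def conflict_hunks(text: str):
    """Yield (ours, theirs) line lists for every conflict hunk in a merged file."""
    lines, i = text.splitlines(), 0
    while i < len(lines):
        if lines[i].startswith("<<<<<<<"):
            mid = next(j for j in range(i, len(lines)) if lines[j].startswith("======="))
            end = next(j for j in range(mid, len(lines)) if lines[j].startswith(">>>>>>>"))
            yield lines[i + 1:mid], lines[mid + 1:end]
            i = end
        i += 1

for ours, theirs in conflict_hunks(CONFLICTED_FILE):
    print("ours:  ", ours)
    print("theirs:", theirs)
```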
In collaborative software development, programmers create software branches to add features and fix bugs tentatively, and then merge branches to integrate edits. When edits from different branches textually overlap (i.e., textual conflicts) or lead to compilation and runtime errors (i.e., build and test conflicts), it is challenging for developers to remove such conflicts. Prior work proposed tools to detect and solve conflicts, and investigated how conflicts relate to code smells and the software development process. However, many questions are still not fully investigated, such as what types of conflicts exist in real-world applications and how developers or tools handle them. For this paper, we used automated textual merge, compilation, and testing to reveal three types of conflicts in 208 open-source repositories: textual conflicts, build conflicts (i.e., conflicts causing build errors), and test conflicts (i.e., conflicts triggering test failures). We manually inspected 538 conflicts and their resolutions to characterize merge conflicts from different angles. Our analysis revealed three interesting phenomena. First, higher-order conflicts (i.e., build and test conflicts) are harder to detect and resolve, while existing tools mainly focus on textual conflicts. Second, developers manually resolved most higher-order conflicts by applying similar edits to multiple program locations; their conflict resolutions share common editing patterns, implying great opportunities for future tool design. Third, developers resolved 64% of true textual conflicts by keeping complete edits from either the left or the right branch. Unlike prior studies, our research for the first time thoroughly characterizes three types of conflicts, with a special focus on higher-order conflicts and the limitations of existing tool design. Our work will shed light on future research on software merge.
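A plausible sketch of the automated three-stage detection the abstract describes — replay the textual merge, then build, then run tests — is shown below. The git workflow is standard, but the build and test commands and the repository paths are assumptions, not the authors' actual scripts.

```python
# Assumed workflow, not the authors' exact tooling: classify a merge of two
# branches as a textual, build, or test conflict by merging, compiling, and
# testing in sequence. "mvn ..." commands and paths are placeholders.
import subprocess

def run(cmd, cwd):
    return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True).returncode == 0

def classify_merge(repo: str, left: str, right: str) -> str:
    if not run(["git", "checkout", "--force", left], repo):
        return "error"
    if not run(["git", "merge", "--no-edit", right], repo):
        run(["git", "merge", "--abort"], repo)   # leave the repository clean
        return "textual conflict"
    if not run(["mvn", "compile", "-q"], repo):  # placeholder build command
        return "build conflict"
    if not run(["mvn", "test", "-q"], repo):     # placeholder test command
        return "test conflict"
    return "no conflict detected"

# Example (hypothetical repository and branches):
# print(classify_merge("/tmp/some-repo", "feature-a", "feature-b"))
```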