What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments hold huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is only slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI.

Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging, by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

Artificial neural networks

Artificial neural networks (ANNs) are among the most famous machine learning models, introduced in the 1950s and actively studied since [41, Chapter 1.2].^11 Roughly, a neural network consists of a number of connected computational units, called neurons, arranged in layers. There is an input layer where data enters the network, followed by one or more hidden layers transforming the data as it flows through, before ending at an output layer that produces the neural network's predictions. The network is trained to output useful predictions by identifying patterns in a set of labeled training data, fed through the network while the outputs are compared with the actual labels by an objective function. During training, the network's parameters (the strength of each connection) are tuned until the patterns identified by the network result in good predictions for the training data. Once the patterns are learned, the network can be used to make predictions on new, unseen data, i.e. to generalize to new data. It h...

^10 https://www.deeplearningbook.org/
^11 The loose connection between artificial neural networks and neural networks in the brain is often mentioned, but is quite overblown considering the complexity of biological neural networks. However, there is some interesting recent work connecting neuroscience and artificial neural networks, indicating increasing cross-fertilization between the two fields [43,44,45].
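To make the training loop described above concrete, here is a minimal, self-contained sketch (our illustration, not code from the paper): a one-hidden-layer network trained on the XOR toy problem by plain gradient descent, with mean squared error playing the role of the objective function.

    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # training labels

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)  # hidden-layer parameters
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)  # output-layer parameters

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10000):
        # Forward pass: data flows input -> hidden -> output.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Objective function: mean squared error against the true labels.
        loss = np.mean((out - y) ** 2)

        # Backward pass: gradients of the loss w.r.t. each parameter.
        d_out = 2 * (out - y) / len(X) * out * (1 - out)
        dW2 = h.T @ d_out
        db2 = d_out.sum(axis=0)
        d_h = d_out @ W2.T * h * (1 - h)
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)

        # Parameter update: tune the weights to reduce the loss.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("final loss:", round(float(loss), 4))
    print(out.round(3))  # predictions should approach [0, 1, 1, 0]

Deep networks used in medical imaging follow the same pattern, only with many more layers and parameters, and with the gradients computed automatically by frameworks such as TensorFlow or PyTorch rather than by hand.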
Pre-Lie (or Vinberg) algebras arise from flat and torsion-free connections on differential manifolds. They have been extensively studied in recent years, both from algebraic and operadic points of view and through numerous applications in numerical analysis, control theory, stochastic differential equations and renormalization. Butcher series are formal power series founded on pre-Lie algebras, used in numerical analysis to study geometric properties of flows on Euclidean spaces. Motivated by the analysis of flows on manifolds and homogeneous spaces, we investigate algebras arising from flat connections with constant torsion, leading to the definition of post-Lie algebras, a generalization of pre-Lie algebras. Whereas pre-Lie algebras are intimately associated with Euclidean geometry, post-Lie algebras occur naturally in the differential geometry of homogeneous spaces, and are also closely related to Cartan's method of moving frames. Lie-Butcher series combine Butcher series with Lie series and are used to analyze flows on manifolds. In this paper we show that Lie-Butcher series are founded on post-Lie algebras. The functorial relations between post-Lie algebras and their enveloping algebras, called D-algebras, are explored. Furthermore, we develop new formulas for computations in free post-Lie algebras and D-algebras, based on recursions in a magma, and we show that Lie-Butcher series are related to invariants of curves described by moving frames.
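For orientation, the standard defining identities (well established in the literature, though not spelled out in the abstract) are as follows. A pre-Lie algebra is a vector space $A$ with a bilinear product $\triangleright$ whose associator

    a_\triangleright(x,y,z) = x \triangleright (y \triangleright z) - (x \triangleright y) \triangleright z

is symmetric in its first two arguments, $a_\triangleright(x,y,z) = a_\triangleright(y,x,z)$. A post-Lie algebra is a Lie algebra $(A, [\cdot,\cdot])$ equipped with a bilinear product $\triangleright$ satisfying

    x \triangleright [y,z] = [x \triangleright y, z] + [y, x \triangleright z],
    [x,y] \triangleright z = a_\triangleright(x,y,z) - a_\triangleright(y,x,z).

When the Lie bracket vanishes, the second identity reduces to the pre-Lie identity, so pre-Lie algebras are exactly the post-Lie algebras with abelian bracket.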
B-series originated from the work of John Butcher in the 1960s as a tool to analyze numerical integration of differential equations, in particular Runge-Kutta methods. Connections to renormalization theory in perturbative quantum field theory have been established in recent years. The algebraic structure of classical Runge-Kutta methods is described by the Connes-Kreimer Hopf algebra. Lie-Butcher series are generalizations of B-series that are aimed at studying Lie-group integrators for differential equations evolving on manifolds. Lie group integrators are based on general Lie group actions on a manifold, and classical Runge-Kutta integrators appear in this setting as the special case of R^n acting upon itself by translations. Lie-Butcher theory combines classical B-series on R^n with Lie series on manifolds. The underlying Hopf algebra H_N combines the Connes-Kreimer Hopf algebra with the shuffle Hopf algebra of free Lie algebras. Aimed at a general mathematical audience, we give an introduction to Hopf algebraic structures and their relationship to structures appearing in numerical analysis. In particular, we explore the close connection between Lie series, time-dependent Lie series and Lie-Butcher series for diffeomorphisms on manifolds. The role of the Euler and Dynkin idempotents in numerical analysis is discussed. A non-commutative version of a Faà di Bruno bialgebra is introduced, and the relation to non-commutative Bell polynomials is explored.
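For readers unfamiliar with the Connes-Kreimer Hopf algebra, its coproduct on a rooted tree $t$ takes the standard form (recalled here for orientation; sign and labeling conventions vary slightly across the literature)

    \Delta_{CK}(t) = t \otimes \mathbf{1} + \mathbf{1} \otimes t + \sum_{c} P^{c}(t) \otimes R^{c}(t),

where the sum runs over the admissible cuts $c$ of $t$, $P^{c}(t)$ denotes the forest of subtrees pruned off by the cut, and $R^{c}(t)$ the trunk containing the root. For the single-vertex tree $\bullet$ this reduces to $\Delta_{CK}(\bullet) = \bullet \otimes \mathbf{1} + \mathbf{1} \otimes \bullet$, and for the two-vertex tree the extra term is $\bullet \otimes \bullet$.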
Butcher series are combinatorial devices used in the study of numerical methods for differential equations evolving on vector spaces. More precisely, they are formal series developments of differential operators indexed over rooted trees, and can be used to represent a large class of numerical methods. The theory of backward error analysis for differential equations has a particularly nice description when applied to methods represented by Butcher series. For the study of differential equations evolving on more general manifolds, a generalization of Butcher series has been introduced, called Lie-Butcher series. This paper presents the theory of backward error analysis for methods based on Lie-Butcher series.
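To fix notation (in one common normalization; the paper's own conventions may differ), a Butcher series of a vector field $f$ with coefficient map $a$ is the formal sum

    B_f(a)(y) = a(\emptyset)\, y + \sum_{t \in T} \frac{h^{|t|}}{\sigma(t)}\, a(t)\, F_f(t)(y),

where $T$ is the set of rooted trees, $|t|$ the number of vertices of $t$, $\sigma(t)$ its symmetry coefficient, and $F_f(t)$ the elementary differential associated with $t$. Backward error analysis then interprets one step of a numerical method as the exact time-$h$ flow of a modified vector field

    \tilde{f}_h = f + h f_2 + h^2 f_3 + \cdots,

whose correction terms $f_i$ are themselves expressible as Butcher series when the method is.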