The particle filter offers a general numerical tool to approximate the posterior density function for the state in nonlinear and non-Gaussian filtering problems. While the particle filter is fairly easy to implement and tune, its main drawback is that it is computationally intensive, with the complexity increasing quickly with the state dimension. One remedy is to marginalize out the states that appear linearly in the dynamics; the result is that one Kalman filter is associated with each particle. The main contribution of this paper is the derivation of the details of the marginalized particle filter for a general nonlinear state-space model. Several important special cases occurring in typical signal processing applications are also discussed. The marginalized particle filter is applied to an integrated navigation system for aircraft. It is demonstrated that the complete high-dimensional system can be based on a particle filter using marginalization for all but three states. Excellent performance on real flight data is reported.
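A minimal sketch of the idea, assuming a conditionally linear-Gaussian model in which the nonlinear substate evolves independently of the linear substate and only the measurement couples the two; the functions f and h, the matrices A and C, and the noise covariances Qn, Ql, R are illustrative placeholders rather than the model treated in the paper:

```python
import numpy as np

def marginalized_pf(y, f, h, C, A, Qn, Ql, R, N=500):
    """Toy marginalized (Rao-Blackwellized) particle filter.

    Illustrative special case of a conditionally linear-Gaussian model:
        x^n_{t+1} = f(x^n_t) + w^n,   w^n ~ N(0, Qn)   (nonlinear substate, scalar here)
        x^l_{t+1} = A x^l_t + w^l,    w^l ~ N(0, Ql)   (linear substate)
        y_t       = h(x^n_t) + C x^l_t + e,  e ~ N(0, R)

    Particles represent x^n; one Kalman filter (mean m, covariance P)
    per particle represents p(x^l | x^n_{1:t}, y_{1:t}).
    f and h are assumed to accept/return NumPy arrays (vectorized over particles
    for f, a length-ny vector for h).
    """
    T, nl = len(y), A.shape[0]
    xn = np.random.randn(N)                  # particles for the nonlinear state
    m = np.zeros((N, nl))                    # Kalman means, one per particle
    P = np.tile(np.eye(nl), (N, 1, 1))       # Kalman covariances, one per particle
    est = []
    for t in range(T):
        # Particle weights: predictive likelihood of y_t given each particle
        w = np.empty(N)
        for i in range(N):
            r = y[t] - h(xn[i]) - C @ m[i]
            S = C @ P[i] @ C.T + R
            w[i] = np.exp(-0.5 * r @ np.linalg.solve(S, r)) / np.sqrt(np.linalg.det(2 * np.pi * S))
        w /= w.sum()
        # Resample (multinomial for brevity; systematic resampling is preferable)
        idx = np.random.choice(N, N, p=w)
        xn, m, P = xn[idx], m[idx], P[idx]
        # Kalman measurement update for each particle
        for i in range(N):
            S = C @ P[i] @ C.T + R
            K = P[i] @ C.T @ np.linalg.inv(S)
            m[i] = m[i] + K @ (y[t] - h(xn[i]) - C @ m[i])
            P[i] = P[i] - K @ S @ K.T
        est.append((xn.mean(), m.mean(axis=0)))
        # Time update: propagate particles and the per-particle Kalman filters
        xn = f(xn) + np.sqrt(Qn) * np.random.randn(N)
        for i in range(N):
            m[i] = A @ m[i]
            P[i] = A @ P[i] @ A.T + Ql
    return est
```

Each particle carries its own Kalman mean and covariance, which is exactly the "one Kalman filter associated with each particle" structure described above; only the low-dimensional nonlinear substate is explored by sampling.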
We present a model for predicting electrocardiogram (ECG) abnormalities in short-duration 12-lead ECG signals which outperformed medical doctors in the fourth year of their cardiology residency. Such exams can provide a full evaluation of heart activity and have not been studied in previous end-to-end machine learning papers. Using the database of a large telehealth network, we built a novel dataset with more than 2 million ECG tracings, orders of magnitude larger than those used in previous studies. Moreover, our dataset is more realistic, as it consists of 12-lead ECGs recorded during standard in-clinic exams. Using this data, we trained a residual neural network with 9 convolutional layers to map 7 to 10 second ECG signals to 6 classes of ECG abnormalities. Future work should extend these results to cover a larger range of ECG abnormalities, which could improve the accessibility of this diagnostic tool and help avoid wrong diagnoses by medical doctors.

Introduction

Cardiovascular diseases are the leading cause of death worldwide [1], and the electrocardiogram (ECG) is a major diagnostic tool for this group of diseases. As ECGs transitioned from analogue to digital, automated computer analysis of standard 12-lead electrocardiograms gained importance in the process of medical diagnosis [2]. However, the limited performance of classical algorithms [3,4] precludes their use as a standalone diagnostic tool and relegates them to an ancillary role [5]. End-to-end deep learning has recently achieved striking success in tasks such as image classification [6] and speech recognition [7], and there are great expectations about how this technology may improve health care and clinical practice [8][9][10]. So far, the most successful applications have used a supervised learning setup to automate diagnosis from exams. Algorithms have achieved better performance than human specialists in their routine workflow in diagnosing breast cancer [11] and detecting certain eye conditions from eye scans [12]. While efficient, training deep neural networks using supervised learning algorithms requires large quantities of labeled data, which, for medical applications, raises several challenges, including those related to the confidentiality and security of personal health information [13]. The standard, short-duration 12-lead ECG is the most commonly used complementary exam for the evaluation of the heart, being employed across all clinical settings, from primary care centers to intensive care units. While cardiac monitors and long-term monitoring, such as the Holter exam, provide information mostly about cardiac rhythm and repolarization, the 12-lead ECG can provide a full evaluation of the heart, including arrhythmias, conduction disturbances, acute coronary syndromes, cardiac chamber hypertrophy and enlargement, and even the effects of drugs and electrolyte disturbances.
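A minimal sketch of a 1-D residual convolutional classifier of the kind described, assuming PyTorch; the lead count (12), number of convolutional layers (9), and number of output classes (6) follow the abstract, but the kernel size, channel width, pooling, and block layout are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Two 1-D convolutions with a shape-preserving skip connection."""
    def __init__(self, channels, kernel=15):
        super().__init__()
        pad = kernel // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)            # residual (skip) connection

class ECGResNet(nn.Module):
    """12-lead ECG -> 6 abnormality classes; 1 stem conv + 4 blocks x 2 convs = 9 conv layers."""
    def __init__(self, n_leads=12, n_classes=6, channels=64):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(n_leads, channels, 15, padding=7),
                                  nn.BatchNorm1d(channels), nn.ReLU())
        self.blocks = nn.Sequential(*[ResBlock1d(channels) for _ in range(4)])
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):                    # x: (batch, 12, samples)
        h = self.blocks(self.stem(x))
        h = h.mean(dim=-1)                   # global average pooling over time
        return self.head(h)                  # logits; use sigmoid/BCE for multi-label targets

# Example: a batch of 4 ECGs, 12 leads, 4096 samples (roughly 10 s at ~400 Hz)
logits = ECGResNet()(torch.randn(4, 12, 4096))
```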
This paper is concerned with the parameter estimation of a relatively general class of nonlinear dynamic systems. A Maximum Likelihood (ML) framework is employed, and it is illustrated how an Expectation Maximisation (EM) algorithm may be used to compute these ML estimates. An essential ingredient is the employment of so-called "particle smoothing" methods to compute required conditional expectations via a sequential Monte Carlo approach. Simulation examples demonstrate the efficacy of these techniques.
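A minimal sketch of the EM idea on a toy scalar model, assuming a bootstrap particle filter whose stored ancestral paths stand in for the proper particle smoother used in the paper (ancestral paths degenerate for early time steps, which is exactly what dedicated particle smoothers avoid); the model x_{t+1} = a x_t + v, y_t = x_t + e and all numerical values are illustrative:

```python
import numpy as np

def particle_paths(y, a, q=0.1, r=0.1, N=300):
    """Bootstrap particle filter that stores full ancestral trajectories,
    used here as a crude approximation of the smoothing distribution."""
    T = len(y)
    paths = np.zeros((N, T))
    x = np.random.randn(N)
    for t in range(T):
        x = a * x + np.sqrt(q) * np.random.randn(N)     # propagate
        w = np.exp(-0.5 * (y[t] - x) ** 2 / r)          # weight by likelihood
        w /= w.sum()
        idx = np.random.choice(N, N, p=w)               # resample particles and their histories
        paths, x = paths[idx], x[idx]
        paths[:, t] = x
    return paths

def em_estimate(y, a0=0.5, iters=20):
    """EM for the parameter a in x_{t+1} = a x_t + v, y_t = x_t + e."""
    a = a0
    for _ in range(iters):
        X = particle_paths(y, a)                        # E-step: approximate smoothed trajectories
        num = np.mean(np.sum(X[:, :-1] * X[:, 1:], axis=1))
        den = np.mean(np.sum(X[:, :-1] ** 2, axis=1))
        a = num / den                                   # M-step: closed-form maximizer for a
    return a

# Simulate data from a = 0.8 and recover it
rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(200):
    x = 0.8 * x + np.sqrt(0.1) * rng.normal()
    ys.append(x + np.sqrt(0.1) * rng.normal())
print(em_estimate(np.array(ys)))
```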
In recent years, microelectromechanical system (MEMS) inertial sensors (3D accelerometers and 3D gyroscopes) have become widely available due to their small size and low cost. Inertial sensor measurements are obtained at high sampling rates and can be integrated to obtain position and orientation information. These estimates are accurate on a short time scale, but suffer from integration drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and models. In this tutorial we focus on the signal processing aspects of position and orientation estimation using inertial sensors. We discuss different modeling choices and a selected number of important algorithms. The algorithms include optimization-based smoothing and filtering as well as computationally cheaper extended Kalman filter and complementary filter implementations. The quality of their estimates is illustrated using both experimental and simulated data.
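A minimal sketch of the complementary-filter idea mentioned above, estimating roll and pitch by blending integrated gyroscope rates with accelerometer inclination; the gain alpha and the assumption that the accelerometer mainly measures gravity are illustrative choices, not the tutorial's exact formulation:

```python
import numpy as np

def complementary_filter(gyro, acc, dt, alpha=0.98):
    """Fuse gyroscope integration (accurate short-term, but drifting) with
    accelerometer inclination (noisy, but drift-free) for roll and pitch.

    gyro: (T, 3) angular rates [rad/s]; acc: (T, 3) specific force [m/s^2].
    Assumes the sensor accelerates slowly, so the accelerometer mainly
    measures gravity.
    """
    roll, pitch = 0.0, 0.0
    out = []
    for w, a in zip(gyro, acc):
        # Integrate gyroscope rates (drifts over longer time scales)
        roll_g = roll + w[0] * dt
        pitch_g = pitch + w[1] * dt
        # Inclination from the accelerometer (no drift, but noisy)
        roll_a = np.arctan2(a[1], a[2])
        pitch_a = np.arctan2(-a[0], np.sqrt(a[1] ** 2 + a[2] ** 2))
        # Blend: high-pass the gyro estimate, low-pass the accelerometer estimate
        roll = alpha * roll_g + (1 - alpha) * roll_a
        pitch = alpha * pitch_g + (1 - alpha) * pitch_a
        out.append((roll, pitch))
    return np.array(out)
```

The single gain alpha is what makes this computationally cheaper than the optimization-based and Kalman-filter alternatives discussed in the tutorial, at the cost of tuning flexibility.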
In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to understand and explain the differences between the resampling algorithms, which facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. The theoretical results are verified using extensive Monte Carlo simulations. It is found that systematic resampling is favourable, both in terms of resampling quality and computational complexity.
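Since the comparison comes out in favour of systematic resampling, a minimal sketch of that algorithm (a single uniform draw with N evenly spaced pointers into the cumulative weights); the function name and interface are illustrative:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform draw, N evenly spaced pointers.
    Returns the indices of the selected particles."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N   # u, u + 1/N, ..., u + (N-1)/N
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                                # guard against round-off
    return np.searchsorted(cumsum, positions)

# Example: four particles with unequal normalized weights
w = np.array([0.1, 0.2, 0.3, 0.4])
print(systematic_resample(w))
```

Because only one random number is drawn per resampling step, the algorithm is both cheap and low-variance, which is consistent with the paper's conclusion.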