In this paper, we propose a convolutional neural network-based method to automatically retrieve missing or noisy cardiac acquisition plane information from magnetic resonance imaging and predict the five most common cardiac views. We fine-tune a convolutional neural network (CNN) initially trained on a large natural image recognition dataset (ImageNet ILSVRC2012) and transfer the learnt feature representations to cardiac view recognition. We contrast this approach with a previously introduced method using classification forests and an augmented set of image miniatures, with prediction using off-the-shelf CNN features, and with CNNs learnt from scratch. We validate this algorithm on two different cardiac studies with 200 patients and 15 healthy volunteers, respectively. We show that there is value in fine-tuning a model trained on natural images and transferring it to medical images. Our approach achieves an average F1 score of 97.66% and significantly improves the state of the art in image-based cardiac view recognition. This is an important building block for organising and filtering large collections of cardiac data prior to further analysis. It allows us to merge studies from multiple centres, to perform smarter image filtering, to select the most appropriate image processing algorithm, and to enhance the visualisation of cardiac datasets in content-based image retrieval.
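The fine-tuning idea described above follows a common transfer-learning pattern: reuse an ImageNet-pretrained backbone and retrain only the parts needed for the new five-class problem. The following is a minimal sketch of that pattern in PyTorch; the backbone (ResNet-18), the frozen/unfrozen layer split, and the hyperparameters are illustrative assumptions, not the architecture or settings used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed setup: fine-tune an ImageNet-pretrained CNN to classify the
# five most common cardiac views. Backbone and hyperparameters are
# placeholders chosen for illustration.
NUM_VIEWS = 5

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze early layers; adapt only the last residual stage and the head.
for param in model.parameters():
    param.requires_grad = False
for param in model.layer4.parameters():
    param.requires_grad = True

# Replace the 1000-class ImageNet head with a 5-view classifier.
model.fc = nn.Linear(model.fc.in_features, NUM_VIEWS)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of cardiac MR slices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing most of the network and retraining only the final stage is one standard way to transfer natural-image features to a small medical dataset; fully unfreezing the backbone is an alternative when more labelled data is available.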
Goal: We present a model-based feature augmentation scheme that improves the performance of a learning algorithm for the detection of cardiac radio-frequency ablation (RFA) targets relative to learning from images alone. Methods: First, we compute image features from delayed-enhancement MRI (DE-MRI) to describe local tissue heterogeneities and feed them into a machine learning framework with uncertainty assessment to identify potential ablation targets. We then introduce a patient-specific image-based model derived from DE-MRI, coupled with the Mitchell-Schaeffer electrophysiology model and a dipole formulation, to simulate intracardiac electrograms (EGM). Relevant features extracted from these simulated signals serve as a feature augmentation scheme for the learning algorithm. We assess the classifier's performance when using image features alone and with model-based feature augmentation. Results: With model-based feature augmentation we obtained average classification scores of 97.2% accuracy, 82.4% sensitivity, and 95.0% positive predictive value (PPV). Preliminary results also show that training the algorithm on the closest patient in the database, instead of on all patients, improves the classification results. Conclusion: We presented a feature augmentation scheme based on biophysical cardiac electrophysiology modeling that increases the prediction scores of a machine learning framework for RFA target prediction. Significance: The results of this study are a proof of concept that model-based feature augmentation strengthens the performance of a purely image-driven learning scheme for the prediction of cardiac ablation targets.
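At its core, the feature-augmentation step amounts to concatenating image-derived features with features extracted from the simulated EGM signals before training a classifier. Below is a minimal sketch of that step, assuming the DE-MRI features, simulated-EGM features, and labels have already been computed and saved (the file names, the random-forest classifier, and the uncertainty heuristic are all assumptions for illustration; the Mitchell-Schaeffer simulation itself is out of scope here).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed precomputed inputs (file names are placeholders):
X_image = np.load("demri_features.npy")        # (n_samples, n_img_feats)
X_egm = np.load("simulated_egm_features.npy")  # (n_samples, n_egm_feats)
y = np.load("ablation_target_labels.npy")      # 1 = RFA target, 0 = not

# Model-based feature augmentation: stack image and simulated-EGM features.
X_augmented = np.hstack([X_image, X_egm])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_augmented, y)

# Class probabilities give a simple per-sample uncertainty proxy:
# samples near the decision boundary are the least certain.
proba = clf.predict_proba(X_augmented)[:, 1]
uncertain = np.abs(proba - 0.5) < 0.1
```

In this sketch the baseline ("images alone") corresponds to fitting the same classifier on X_image only, which makes the contribution of the simulated-EGM features directly measurable.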
Deep learning methods have found successful applications in fields such as image classification and natural language processing. They have recently been applied to source code analysis as well, owing to the enormous amount of freely available source code (e.g., from open-source software repositories). In this work, we build upon a state-of-the-art approach for source code representation that uses information about the code's syntactic structure, and we extend it to represent source code changes (i.e., commits). We use this representation to tackle an industrially relevant task: the classification of security-relevant commits. We leverage transfer learning, a machine learning technique that reuses, or transfers, information learned on previous tasks (commonly called pretext tasks) to tackle a new target task. We assess the impact of using two different pretext tasks, for which abundant labeled data are available, on the classification of security-relevant commits. Our results indicate that representations that exploit the structural information in code syntax outperform token-based representations. Furthermore, we show that pre-training on a small dataset (> 10⁴ samples), but for a pretext task closely related to the target task, yields better performance metrics than pre-training on a loosely related pretext task with a very large dataset (> 10⁶ samples).
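The transfer-learning workflow described here can be summarised as: pretrain an encoder on an abundant pretext task, then reuse its weights with a new head for the scarce target task. The sketch below illustrates that pattern in PyTorch; the encoder architecture, vocabulary size, pretext head, and learning rates are placeholders, not the representation or tasks used in the paper.

```python
import torch
import torch.nn as nn

class CommitEncoder(nn.Module):
    """Placeholder encoder mapping a commit representation to a vector
    (a stand-in for a syntax-structure-aware code-change encoder)."""
    def __init__(self, vocab_size=50_000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, token_ids, offsets):
        return self.embed(token_ids, offsets)

encoder = CommitEncoder()
pretext_head = nn.Linear(128, 1000)  # pretext task: many cheap labels
target_head = nn.Linear(128, 2)      # target task: security-relevant or not

# Step 1: pretrain encoder + pretext_head on the large pretext dataset.
# Step 2: transfer — keep the encoder weights, swap in the target head.
def target_logits(token_ids, offsets):
    return target_head(encoder(token_ids, offsets))

# A common choice when fine-tuning: a lower learning rate for the
# pretrained encoder than for the freshly initialised head.
optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-5},
    {"params": target_head.parameters(), "lr": 1e-3},
])
```

The finding reported above maps onto this sketch directly: how well the transferred encoder works depends more on how closely the pretext task resembles the target task than on the raw size of the pretext dataset.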