Motion artifacts can have a detrimental effect on the analysis of chest CT scans, because the artifacts can mimic or obscure genuine pathological features. Localising motion artifacts in the lungs can improve diagnosis quality. The diverse appearance of artifacts requires large quantities of annotations to train a detection model, but manual annotations can be subjective, unreliable, and labour-intensive to obtain. We propose a novel method (code is available at https://github.com/guusvanderham/artificial-motion-artifacts-for-ct) for generating artificial motion artifacts in chest CT images, based on simulated CT reconstruction. We use these artificial artifacts to train fully convolutional networks that can detect real motion artifacts in chest CT scans. We evaluate our method on scans from the public LIDC, RIDER and COVID19-CT datasets and find that it is possible to train detection models with artificially generated artifacts. Generated artifacts greatly improve performance when the availability of manually annotated scans is limited.
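A minimal sketch of the general idea described in this abstract, assuming the standard simulate-and-reconstruct approach: forward-project a slice into a sinogram, displace the object partway through the simulated acquisition, and reconstruct with filtered back-projection so the inconsistent projections produce motion-like artifacts. The function and parameter names below are illustrative and are not taken from the linked repository.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.transform import radon, iradon

def add_artificial_motion_artifact(ct_slice, motion_px=(3.0, 0.0), motion_start=0.5):
    """Return a reconstruction of `ct_slice` with a simulated abrupt motion.

    ct_slice     : 2D array (a single axial CT slice).
    motion_px    : (row, col) translation applied partway through the scan.
    motion_start : fraction of projection angles acquired before the motion occurs.
    """
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    split = int(motion_start * len(theta))

    # Projections acquired before the motion: forward-project the original slice.
    sino_before = radon(ct_slice, theta=theta[:split])

    # Projections acquired after the motion: forward-project a shifted copy.
    moved = shift(ct_slice, motion_px, order=1, mode='nearest')
    sino_after = radon(moved, theta=theta[split:])

    # Combine the inconsistent sinogram halves and reconstruct; the mismatch
    # between the two halves appears as streak/blur artifacts resembling patient motion.
    sinogram = np.concatenate([sino_before, sino_after], axis=1)
    return iradon(sinogram, theta=theta, filter_name='ramp')
```

Reconstructed slices like this could then be paired with the known motion region to serve as training targets for an artifact-detection network, without manual annotation.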
The dominant paradigm in spatiotemporal action detection is to classify actions using spatiotemporal features learned by 2D or 3D Convolutional Networks. We argue that several actions are characterized by their context, such as relevant objects and actors present in the video. To this end, we introduce an architecture based on self-attention and Graph Convolutional Networks in order to model contextual cues, such as actor-actor and actor-object interactions, to improve human action detection in video. We are interested in achieving this in a weakly-supervised setting, i.e. using as few annotations as possible in terms of action bounding boxes. Our model aids explainability by visualizing the learned context as an attention map, even for actions and objects unseen during training. We evaluate how well our model highlights the relevant context by introducing a quantitative metric based on recall of objects retrieved by attention maps. Our model relies on a 3D convolutional RGB stream and does not require expensive optical flow computation. We evaluate our models on the DALY dataset, which consists of human-object interaction actions. Experimental results show that our contextualized approach outperforms a baseline action detection approach by more than 2 points in Video-mAP. Code is available at https://github.com/micts/acgcn.
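A minimal, hypothetical sketch of the kind of context modelling the abstract describes: a single graph-attention-style layer that relates an actor feature to a set of object/context features and returns both a contextualized actor feature and the attention map used for explainability. Names and dimensions are illustrative assumptions, not taken from the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorContextGCNLayer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.theta = nn.Linear(dim, dim)  # projects the actor feature
        self.phi = nn.Linear(dim, dim)    # projects context (object/actor) features
        self.g = nn.Linear(dim, dim)      # values aggregated over the graph

    def forward(self, actor_feat, context_feats):
        """
        actor_feat    : (B, D)    pooled RoI feature of one actor per sample.
        context_feats : (B, N, D) features of N context regions (objects, other actors).
        Returns the contextualized actor feature (B, D) and the attention map (B, N).
        """
        # Adjacency / attention: scaled dot-product similarity between the actor
        # node and each context node.
        scores = torch.einsum('bd,bnd->bn', self.theta(actor_feat), self.phi(context_feats))
        attn = F.softmax(scores / actor_feat.size(-1) ** 0.5, dim=-1)

        # Graph convolution step: aggregate context values weighted by the attention,
        # then fuse with the original actor feature via a residual connection.
        context = torch.einsum('bn,bnd->bd', attn, self.g(context_feats))
        return F.relu(actor_feat + context), attn
```

The returned attention weights can be projected back onto the context regions' bounding boxes to visualize which objects and actors the model attended to for a given action, which is the basis of the recall-based evaluation the abstract mentions.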