The thrombus causing a stroke can be seen on the susceptibility-weighted angiography (SWAN) magnetic resonance imaging (MRI) sequence, but it is very small and hard for humans to detect. To date, thrombi have been identified by trained human experts. Because stroke requires rapid treatment, automatic thrombus detection could speed up the diagnosis of acute stroke. We propose a method for automatic thrombus detection from SWAN images using three separate U-Nets that operate on the axial, coronal, and sagittal planes.
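The three plane-specific U-Nets each produce a probability volume, which must be combined into a single detection. The abstract does not state the fusion rule, so the sketch below assumes a simple per-voxel average followed by thresholding; the function name and the averaging scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_plane_predictions(axial, coronal, sagittal, threshold=0.5):
    """Fuse (D, H, W) probability volumes from three plane-specific U-Nets,
    assumed to be resampled to a common voxel grid, into a binary mask."""
    mean_prob = (axial + coronal + sagittal) / 3.0
    return mean_prob >= threshold

# Toy volumes standing in for real network outputs: one candidate voxel.
axial = np.zeros((4, 4, 4)); axial[1, 1, 1] = 0.9
coronal = np.zeros((4, 4, 4)); coronal[1, 1, 1] = 0.8
sagittal = np.zeros((4, 4, 4)); sagittal[1, 1, 1] = 0.7

mask = fuse_plane_predictions(axial, coronal, sagittal)
```

Averaging three orthogonal views suppresses spurious responses that appear in only one plane, which is one plausible motivation for training separate networks per orientation.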
Video Object Segmentation (VOS) has been targeted by various fully-supervised and self-supervised approaches. While fully-supervised methods demonstrate excellent results, self-supervised ones, which do not use pixel-level ground truth, attract much attention. However, self-supervised approaches exhibit a significant performance gap. Box-level annotations provide a balanced compromise between labeling effort and result quality for image segmentation but have not been exploited for the video domain. In this work, we propose a box-supervised video object segmentation proposal network, which takes advantage of intrinsic video properties. Our method incorporates object motion in the following way: first, motion is computed using a bidirectional temporal difference and a novel bounding box-guided motion compensation. Second, we introduce a novel motion-aware affinity loss that encourages the network to predict positive pixel pairs if they share similar motion and color. The proposed method outperforms the state-of-the-art self-supervised benchmark by 16.4% and 6.9% J&F score, and the majority of fully supervised methods, on the DAVIS and YouTube-VOS datasets without imposing network architectural specifications. We provide extensive tests and ablations on the datasets, demonstrating the robustness of our method. Code is available at https://github.com/Tanveer81/BoxVOS.git
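The two motion-related ingredients named above can be illustrated with a minimal sketch. Assumptions to flag: the exact formulas are not given in the abstract, so "bidirectional temporal difference" is read here as the averaged absolute difference to the previous and next frame, and the affinity loss is sketched over horizontal neighbor pairs only; both function names and thresholds are hypothetical, not the authors' code.

```python
import numpy as np

def bidirectional_temporal_difference(prev_frame, cur_frame, next_frame):
    # Motion cue for the current frame: average of the absolute differences
    # in the backward (to prev) and forward (to next) temporal directions.
    cur = cur_frame.astype(np.float32)
    fwd = np.abs(cur - next_frame.astype(np.float32))
    bwd = np.abs(cur - prev_frame.astype(np.float32))
    return 0.5 * (fwd + bwd)

def motion_aware_affinity_loss(prob, motion, color, thresh=0.1):
    # Over horizontal neighbor pairs: where two pixels share similar motion
    # AND similar color, penalize a gap in their foreground probabilities.
    similar = (np.abs(motion[:, 1:] - motion[:, :-1]) < thresh) \
            & (np.abs(color[:, 1:] - color[:, :-1]) < thresh)
    prob_gap = np.abs(prob[:, 1:] - prob[:, :-1])
    return float((similar * prob_gap).sum() / max(similar.sum(), 1))

# Toy example: a 2x2 block sliding right across three 6x6 frames.
f0 = np.zeros((6, 6)); f1 = np.zeros((6, 6)); f2 = np.zeros((6, 6))
f0[2:4, 1:3] = 1.0; f1[2:4, 2:4] = 1.0; f2[2:4, 3:5] = 1.0

motion = bidirectional_temporal_difference(f0, f1, f2)
prob = f1.copy()  # stand-in for a network's foreground prediction
loss = motion_aware_affinity_loss(prob, motion, f1)
```

The key design point carried over from the abstract: because motion enters only through this training loss, no motion computation is needed at inference time, so the run-time of the segmentation network itself is unchanged.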
Bounding box supervision provides a balanced compromise between labeling effort and result quality for image segmentation. However, no such work has been explicitly tailored for videos. Applying image segmentation methods directly to videos produces sub-optimal results because they do not exploit temporal information. In this work, we propose a box-supervised video segmentation proposal network. We take advantage of intrinsic video properties by introducing a novel box-guided motion calculation pipeline and a motion-aware affinity loss. As motion is utilized only during training, the run-time remains fixed at inference. We evaluate our model on the Video Object Segmentation (VOS) challenge. The method outperforms the state-of-the-art self-supervised methods by 16.4% and 6.9% J&F score, and the majority of fully supervised ones, on the DAVIS and YouTube-VOS datasets. Code is available at https://github.com/Tanveer81/BoxVOS.git.