The unmet needs of women with gynecologic cancers (GCs) can be readily addressed using high-quality e-platforms. This pilot study documents the perceptions of women with GCs of BELONG (https://belong.life/), a cancer navigation and support application (app) connecting patients diagnosed with various types of cancer. Women (N = 25) with GCs (stages I to IV) used the app for 8 weeks and completed the user version of the Mobile Application Rating Scale. Ratings of BELONG in the domains of engagement, functionality, aesthetics, and information were high, with Ask an Oncologist, Ovarian Cancer Community, Clinical Trials, Treatment Information, and Support Resources representing the most frequently accessed topics. As e-platforms are developed at a rapid pace, users' input and evaluation of platform quality and utility should be prioritized.
The SARS-CoV-2 (COVID-19) pandemic has accelerated the development and use of digital health platforms to support individuals with health-related challenges. This is especially true in cancer care, as the global burden of the disease continues to increase every year. However, optimal implementation of these platforms in the clinical setting requires careful planning and collaboration. An implementation project was launched between the Centre intégré universitaire de santé et de services sociaux (CIUSSS) du Centre-Ouest-de-l'Île-de-Montréal and BELONG—Beating Cancer Together—a person-centred cancer navigation and support digital health platform. The goal of the project was to implement content and features specific to the CIUSSS, to be made available exclusively to individuals with cancer (and their caregivers) treated at the institution. Guided by the Structural Model of Interprofessional Collaboration, we report on implementation processes involving diverse stakeholders, including clinicians, hospital administrators, researchers, and local community/patient representatives. Lessons learned include earlier identification of shared goals and clear expectations, more consistent reliance on virtual means of communication among all involved, and patient/caregiver involvement at each step to ensure informed and shared decision making.
In video analysis, background models have many applications such as background/foreground separation, change detection, anomaly detection, tracking, and more. However, while learning such a model in a video captured by a static camera is a fairly-solved task, in the case of a Moving-camera Background Model (MCBM), the success has been far more modest due to algorithmic and scalability challenges that arise due to the camera motion. Thus, existing MCBMs are limited in their scope and their supported camera-motion types. These hurdles also impeded the employment, in this unsupervised task, of end-to-end solutions based on deep learning (DL). Moreover, existing MCBMs usually model the background either on the domain of a typically-large panoramic image or in an online fashion. Unfortunately, the former creates several problems, including poor scalability, while the latter prevents the recognition and leveraging of cases where the camera revisits previously-seen parts of the scene. This paper proposes a new method, called DeepMCBM, that eliminates all the aforementioned issues and achieves state-of-the-art results. Concretely, first we identify the difficulties associated with joint alignment of video frames in general and in a DL setting in particular. Next, we propose a new strategy for joint alignment that lets us use a spatial transformer net with neither a regularization nor any form of specialized (and non-differentiable) initialization. Coupled with an autoencoder conditioned on unwarped robust central moments (obtained from the joint alignment), this yields an end-to-end regularization-free MCBM that supports a broad range of camera motions and scales gracefully. We demonstrate DeepMCBM's utility on a variety of videos, including ones beyond the scope of other methods. Our code is available at https://github.com/BGU-CS-VIL/DeepMCBM.
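The core idea behind a background model of this kind can be illustrated with a minimal sketch: once frames have been warped into a common coordinate frame (the joint-alignment step), per-pixel robust statistics over the aligned stack serve as the background estimate, and foreground pixels are those that deviate strongly from it. This is not the DeepMCBM architecture (which uses a spatial transformer net and an autoencoder conditioned on the robust central moments); the function names, the median-based moments, and the toy data below are illustrative assumptions only.

```python
import numpy as np

def robust_central_moments(aligned_frames, orders=(2, 3)):
    """Per-pixel robust statistics over a stack of jointly-aligned frames.

    aligned_frames: (T, H, W) float array of grayscale frames already
    warped into a common coordinate frame. Returns the robust first
    moment (the per-pixel median) and robust central moments of the
    requested orders (medians of the centered absolute powers).
    """
    med = np.median(aligned_frames, axis=0)          # robust "mean" image
    centered = aligned_frames - med
    moments = {k: np.median(np.abs(centered) ** k, axis=0) for k in orders}
    return med, moments

def foreground_mask(frame, med, scale, thresh=3.0):
    """Flag pixels whose deviation from the background exceeds thresh * scale."""
    return np.abs(frame - med) > thresh * np.maximum(scale, 1e-6)

# Toy example: a static background with a bright square in the last frame.
rng = np.random.default_rng(0)
T, H, W = 20, 32, 32
bg = rng.uniform(0.2, 0.4, size=(H, W))
frames = np.stack([bg + rng.normal(0, 0.01, (H, W)) for _ in range(T)])
frames[-1, 8:16, 8:16] += 0.5                        # simulated foreground object

med, moms = robust_central_moments(frames)
scale = np.sqrt(moms[2])                             # robust spread estimate
mask = foreground_mask(frames[-1], med, scale)       # True inside the square
```

The median is used because, unlike the mean, a transient foreground object in a minority of frames barely perturbs it, which is what makes the resulting central moments "robust" summaries of the background.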