The growing interest in the reliable digitization of large-scale scenes and objects has enabled several real-time applications. Although the resolution of new-generation geometry scanners is constantly improving, the output models are inevitably noisy, requiring sophisticated approaches that remove noise while preserving sharp features. Moreover, we no longer deal exclusively with individual shapes but with entire scenes, resulting in sequences of 3D surfaces affected by noise with different characteristics due to varying environmental factors (e.g., lighting conditions, orientation of the scanning device). In this work, we introduce a novel coarse-to-fine graph spectral processing approach that exploits the fact that the sharp features reside in a low-dimensional structure hidden in the noisy 3D dataset. In the coarse step, the mesh is processed in parts, using a model-based Bayesian learning method that identifies the noise level in each part and the subspace where the features lie. In the feature-aware fine step, we iteratively smooth face normals and vertices while preserving geometric features. Extensive evaluation studies carried out under a broad set of complex noise patterns verify the superiority of our approach over state-of-the-art schemes in terms of reconstruction quality and computational complexity.
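To make the fine step more concrete, the following is a minimal sketch of feature-preserving normal smoothing followed by a normal-driven vertex update, in the spirit of the iterative scheme described above. It does not reproduce the paper's Bayesian coarse step or its graph spectral machinery; the adjacency definition, the Gaussian weighting parameter, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of an iterative, feature-aware normal/vertex smoothing pass.
# Not the authors' exact algorithm; parameters and neighborhood choices are assumptions.
import numpy as np

def face_normals(V, F):
    """Unit normals of triangle faces (V: (n,3) vertex array, F: (m,3) index array)."""
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    n = np.cross(e1, e2)
    return n / np.maximum(np.linalg.norm(n, axis=1, keepdims=True), 1e-12)

def face_adjacency(F):
    """For each face, indices of faces sharing at least one vertex (simple neighborhood)."""
    vert_to_faces = {}
    for fi, f in enumerate(F):
        for v in f:
            vert_to_faces.setdefault(v, []).append(fi)
    return [sorted({fj for v in f for fj in vert_to_faces[v]} - {fi})
            for fi, f in enumerate(F)]

def smooth_normals(N, adjacency, sigma=0.35):
    """One pass of feature-preserving normal averaging: neighbors whose normals
    differ strongly receive exponentially small weights, so creases survive."""
    N_new = N.copy()
    for i, nbrs in enumerate(adjacency):
        w = np.exp(-np.sum((N[nbrs] - N[i]) ** 2, axis=1) / (2 * sigma ** 2))
        avg = N[i] + (w[:, None] * N[nbrs]).sum(axis=0)
        N_new[i] = avg / max(np.linalg.norm(avg), 1e-12)
    return N_new

def update_vertices(V, F, N, iters=10):
    """Move each vertex toward positions consistent with the target face normals
    (the standard vertex-update step used in normal-driven mesh denoising)."""
    V = V.copy()
    for _ in range(iters):
        disp = np.zeros_like(V)
        count = np.zeros(len(V))
        for f, n in zip(F, N):
            c = V[f].mean(axis=0)                       # face centroid
            for vid in f:
                disp[vid] += n * np.dot(n, c - V[vid])  # offset projected onto the normal
                count[vid] += 1
        V += disp / np.maximum(count[:, None], 1)
    return V
```

A full denoising pass would alternate smooth_normals and update_vertices for a few iterations; in the approach summarized above, the coarse Bayesian stage would additionally supply per-part noise levels that could drive the weighting.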
Conventional biomechanical modeling approaches involve solving large systems of equations that encode the complex mathematical representation of human motion and skeletal structure. To improve stability and computational speed, a common bottleneck in current approaches, we apply machine learning to train surrogate models that predict, in near real time, previously calculated medial and lateral knee contact forces (KCFs) of 54 young and elderly participants during treadmill walking at speeds of 3 to 7 km/h. Predictions are obtained by fusing optical motion capture and musculoskeletal modeling-derived kinematic and force variables into regression models based on artificial neural networks (ANNs) and support vector regression (SVR). Training schemes used either data from all subjects (LeaveTrialsOut) or only from a portion of them (LeaveSubjectsOut), with or without ground reaction forces (GRFs) included in the dataset. Results identify ANNs as the best-performing predictor of KCFs, both in terms of Pearson R (0.89–0.98 for LeaveTrialsOut and 0.45–0.85 for LeaveSubjectsOut) and percentage normalized root mean square error (0.67–2.35 for LeaveTrialsOut and 1.6–5.39 for LeaveSubjectsOut). When GRFs were omitted from the dataset, no substantial decrease in the prediction power of either model was observed. Our findings showcase the ability of ANNs to predict multi-component KCFs simultaneously during walking at different speeds, even in the absence of GRFs, which is particularly relevant for real-time applications that use knee loading conditions to guide and treat patients.
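The following is a minimal sketch of how such an ANN surrogate and a LeaveSubjectsOut split could be set up with scikit-learn. The synthetic data, feature count, network size, and variable names are assumptions for demonstration only; they are not the study's actual pipeline or results.

```python
# Illustrative ANN surrogate for multi-output knee contact force regression.
# Data, features, and hyperparameters below are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 20                  # e.g., joint angles, moments, optionally GRFs
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=(n_features, 2))          # two targets: medial and lateral KCF
subjects = rng.integers(0, 54, size=n_samples)    # subject ID per sample

# "LeaveSubjectsOut": hold out whole subjects so no participant appears in both sets.
train_idx, test_idx = next(
    GroupShuffleSplit(test_size=0.2, random_state=0).split(X, y, groups=subjects)
)

scaler = StandardScaler().fit(X[train_idx])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(scaler.transform(X[train_idx]), y[train_idx])

pred = model.predict(scaler.transform(X[test_idx]))
rmse = np.sqrt(mean_squared_error(y[test_idx], pred))
nrmse = 100 * rmse / (y[test_idx].max() - y[test_idx].min())
print(f"normalized RMSE: {nrmse:.2f}%")
```

A LeaveTrialsOut scheme would instead split at the trial level (groups defined per trial rather than per subject), which typically yields higher accuracy because each subject's movement pattern is partially seen during training.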
Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability remains a concern. This is currently addressed by sharing multi-site data, but such centralization is challenging or infeasible to scale due to various limitations. Federated ML (FL) provides an alternative paradigm for accurate and generalizable ML by sharing only numerical model updates. Here we present the largest FL study to date, involving data from 71 sites across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, reporting the largest such dataset in the literature (n = 6,314). We demonstrate a 33% delineation improvement for the surgically targetable tumor, and 23% for the complete tumor extent, over a publicly trained model. We anticipate our study to: 1) enable more healthcare studies informed by large and diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further analyses for glioblastoma by releasing our consensus model, and 3) demonstrate the effectiveness of FL at such scale and task complexity, as a paradigm shift for multi-site collaborations that alleviates the need for data sharing.
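To illustrate the core idea of sharing only numerical model updates, here is a minimal federated-averaging (FedAvg-style) sketch with a toy linear model. It is a generic illustration of the paradigm, not the study's actual consensus model, aggregation protocol, or segmentation network; all names and parameters are assumptions.

```python
# Generic FedAvg sketch: sites train locally, the server averages parameter
# updates weighted by local sample counts. Purely illustrative.
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """Refine a linear model on one site's private data; only weights leave the site."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w, len(y)

def fed_avg(global_weights, site_data, rounds=10):
    """Server loop: collect site updates each round and average them by sample count."""
    w = global_weights
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in site_data]
        counts = np.array([n for _, n in updates], dtype=float)
        w = sum(c * wi for (wi, _), c in zip(updates, counts / counts.sum()))
    return w

# Toy usage: 3 "sites" holding private samples from the same underlying model.
rng = np.random.default_rng(0)
true_w = rng.normal(size=4)
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    sites.append((X, X @ true_w + 0.01 * rng.normal(size=200)))

w = fed_avg(np.zeros(4), sites)
print("recovered weights close to truth:", np.allclose(w, true_w, atol=0.05))
```

In a real multi-site study such as the one described above, the locally trained object would be a full segmentation network and the aggregation would run over many communication rounds, but the privacy-preserving principle is the same: raw patient data never leaves the contributing site.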