State-of-the-art preprocessing methods for Particle Image Velocimetry (PIV) are severely challenged by time-dependent light reflections and strongly non-uniform backgrounds. In this work, a novel image preprocessing method is proposed. The method is based on the Proper Orthogonal Decomposition (POD) of the image recording sequence and exploits the different spatial and temporal coherence of the background and the particles. After describing the theoretical framework, the method is tested on synthetic and experimental images and compared with well-known preprocessing techniques in terms of image quality enhancement, improvement of the PIV interrogation, and computational cost. The results show that, unlike existing techniques, the proposed method is robust in the presence of significant background noise intensity, gradients, and temporal oscillations. Moreover, its computational cost is one to two orders of magnitude lower than that of conventional image normalization methods. A downloadable version of the preprocessing toolbox is available at http://seis.bris.ac.uk/~aexrt/PIVPODPreprocessing/.
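The abstract does not spell out the decomposition step, but the underlying idea (a temporally coherent background concentrates in the most energetic POD modes, while uncorrelated particle images do not) can be pictured with a short sketch. The snapshot-matrix layout and the number of discarded modes `n_bg` below are assumptions for illustration, not the mode-selection criterion used in the published toolbox.

```python
import numpy as np

def pod_background_removal(images, n_bg=1):
    """Remove a temporally coherent background from a PIV image stack.

    images : ndarray of shape (T, H, W), the recorded sequence
    n_bg   : number of leading POD modes treated as background
             (hypothetical user parameter for this sketch)
    """
    T, H, W = images.shape
    X = images.reshape(T, H * W).astype(float)          # snapshot matrix, one row per frame
    # Economy-size SVD of the snapshot matrix gives the POD modes.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # The most energetic modes capture reflections and non-uniform
    # background illumination; reconstruct and subtract them.
    background = (U[:, :n_bg] * S[:n_bg]) @ Vt[:n_bg, :]
    cleaned = np.clip(X - background, 0, None)           # keep intensities non-negative
    return cleaned.reshape(T, H, W)
```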
We propose a novel approach to few-shot action recognition, finding temporally-corresponding frame tuples between the query and videos in the support set. Distinct from previous few-shot action recognition works, we construct class prototypes using the CrossTransformer attention mechanism to observe relevant sub-sequences of all support videos, rather than using class averages or single best matches. Video representations are formed from ordered tuples of varying numbers of frames, which allows sub-sequences of actions at different speeds and temporal offsets to be compared. Our proposed Temporal-Relational CrossTransformers achieve state-of-the-art results on both Kinetics and Something-Something V2 (SSv2), outperforming prior work on SSv2 by a wide margin (6.8%) due to the method's ability to model temporal relations. A detailed ablation showcases the importance of matching to multiple support set videos and learning higher-order relational CrossTransformers.
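As a rough illustration of the two ingredients described above, the sketch below builds ordered frame-tuple representations from per-frame features and forms a query-specific class prototype with a single attention step. The tuple cardinality, the identity key/value maps, and the function names are illustrative assumptions; the paper's CrossTransformer uses learned projections and combines several tuple cardinalities.

```python
import itertools
import torch
import torch.nn.functional as F

def frame_tuples(features, cardinality=2):
    """Stack ordered frame tuples (frames kept in temporal order) into
    flat tuple representations.

    features : (T, D) tensor of per-frame features for one video
    returns  : (N_tuples, cardinality * D)
    """
    T = features.shape[0]
    index_tuples = itertools.combinations(range(T), cardinality)   # increasing temporal order
    return torch.stack([features[list(idx)].reshape(-1) for idx in index_tuples])

def query_specific_prototype(query_tuples, support_tuples):
    """Each query tuple attends over all support tuples of one class and
    aggregates them into its own prototype (identity query/key/value maps
    here for brevity; the paper learns these projections)."""
    scale = support_tuples.shape[1] ** 0.5
    attn = F.softmax(query_tuples @ support_tuples.t() / scale, dim=-1)
    return attn @ support_tuples          # (N_query_tuples, cardinality * D)

# Toy usage: score one class from a 5-shot support set of 8-frame videos.
T, D = 8, 64
query = frame_tuples(torch.randn(T, D), cardinality=2)
support = torch.cat([frame_tuples(torch.randn(T, D), cardinality=2) for _ in range(5)])
proto = query_specific_prototype(query, support)
class_distance = torch.norm(query - proto, dim=-1).mean()   # lower = better match
```

A class score can then be taken as the negative of this distance, averaged over query tuples, with the lowest-distance class predicted.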
We present the first fully automated Sit-to-Stand or Stand-to-Sit (StS) analysis framework for long-term monitoring of patients in free-living environments using video silhouettes. Our method adopts a coarse-to-fine time localisation approach, where a deep learning classifier identifies possible StS sequences from silhouettes, and a smart peak detection stage provides fine localisation based on 3D bounding boxes. We tested our method on data from real homes of participants and monitored patients undergoing total hip or knee replacement. Our results show 94.4% overall accuracy in the coarse localisation and an error of 0.026 m/s in the speed of ascent measurement, highlighting important trends in the recuperation of patients who underwent surgery. In this work, StS refers to both 'Sit-to-Stand' and 'Stand-to-Sit'; we specify which of the two if and when necessary.
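The fine localisation and the speed-of-ascent measurement can be pictured with a short sketch over the height of the tracked 3D bounding box within a window flagged by the coarse classifier. The smoothing parameters, the peak threshold, and the use of the peak vertical velocity as the reported speed are illustrative assumptions, not the published pipeline's exact choices.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def speed_of_ascent(box_heights, fps=30.0):
    """Estimate speed of ascent for one candidate Sit-to-Stand window.

    box_heights : heights (in metres) of the person's 3D bounding box over
                  time, inside a window flagged by the coarse classifier.
    """
    h = savgol_filter(box_heights, window_length=9, polyorder=2)   # denoise the height signal
    v = np.gradient(h) * fps                                        # vertical speed in m/s
    peaks, _ = find_peaks(v, height=0.05)                           # candidate rise events
    if len(peaks) == 0:
        return None                                                 # no stand-up detected
    # Report the speed at the strongest velocity peak as the stand-up event.
    return float(v[peaks[np.argmax(v[peaks])]])
```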
validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.