Purpose
The study aim was to develop a mobile application (app), informed by user preferences, to optimise self-management of arm and shoulder exercises for upper-limb dysfunction (ULD) after breast cancer treatment.

Methods
Focus groups with breast cancer patients were held to identify user needs and requirements. Behaviour change techniques were explored by researchers and discussed during the focus groups. Concepts for content were identified by thematic analysis. A rapid review was conducted to inform the exercise programme. Preliminary testing was carried out to obtain user feedback from breast cancer patients who used the app for 8 weeks post surgery.

Results
Breast cancer patients' experiences with ULD and with exercise advice and routines varied widely. They identified and prioritised several app features: tailored information, video demonstrations of the exercises, push notifications, and tracking and progress features. An evidence-based programme of progressive exercises for passive and active mobilisation, stretching and strengthening was developed with a physiotherapist. The exercise demonstration videos were filmed with a breast cancer patient. Early user testing demonstrated ease of use and clear, motivating app content.

Conclusions
bWell, a novel app for arm and shoulder exercises, was developed by breast cancer patients, health care professionals and academics. Further research is warranted to confirm its clinical effectiveness.

Implications for cancer survivors
Mobile health has great potential to provide patients with information specific to their needs. bWell is a promising way to support breast cancer patients with exercise routines after treatment and may improve future self-management of clinical care.

Electronic supplementary material
The online version of this article (doi:10.1007/s11764-017-0630-3) contains supplementary material, which is available to authorized users.
Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where their output is sometimes comparable to that of a professional video camera, and they are increasingly used in professional productions. However, mobile platforms lack tools that give professional users the information they need to control the technical quality of their filming and make informed decisions about what they are recording. In this paper we present MAVIS (Mobile Acquisition and VISualization), a tool for professional filming on a mobile platform. MAVIS gives users access to a colour vectorscope, a waveform monitor, false colouring, focus peaking and the other information needed to produce high-quality professional video. This is achieved by exploiting the capabilities of modern mobile GPUs through a number of vertex and fragment shaders. Evaluation with professionals in the film industry shows that the app and its functionalities are well received and that the output and usability of the application align with professional standards.
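To make the shader-based monitoring concrete, the following is a minimal CPU sketch (NumPy) of the per-pixel logic that false colouring and focus peaking typically implement; the thresholds, indicator colours and function names are illustrative assumptions, not MAVIS's actual shaders, which run on the GPU as fragment programs.

```python
import numpy as np

def false_colour(frame_rgb: np.ndarray) -> np.ndarray:
    """Exposure aid: map luminance bands to indicator colours (illustrative bands)."""
    # Rec. 709 luma from an RGB frame normalised to [0, 1].
    luma = frame_rgb @ np.array([0.2126, 0.7152, 0.0722])
    out = np.stack([luma, luma, luma], axis=-1)            # default: greyscale
    out[luma < 0.05] = [0.2, 0.0, 0.8]                     # crushed shadows -> blue
    out[luma > 0.95] = [0.9, 0.0, 0.0]                     # clipped highlights -> red
    out[(luma > 0.40) & (luma < 0.55)] = [0.0, 0.7, 0.2]   # mid-grey band -> green
    return out

def focus_peaking(frame_rgb: np.ndarray, threshold: float = 0.08) -> np.ndarray:
    """Focus aid: overlay a highlight colour where local contrast is high."""
    luma = frame_rgb @ np.array([0.2126, 0.7152, 0.0722])
    gy, gx = np.gradient(luma)                             # simple edge measure
    edges = np.hypot(gx, gy) > threshold
    out = frame_rgb.copy()
    out[edges] = [1.0, 0.1, 0.1]                           # peaking colour
    return out

# Usage on a synthetic 720p frame (values in [0, 1]):
frame = np.random.rand(720, 1280, 3).astype(np.float32)
exposure_view = false_colour(frame)
peaking_view = focus_peaking(frame)
```

On a GPU the same mappings are evaluated independently per fragment, which is why such overlays can be rendered at full frame rate on a mobile device.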
This research concentrates on providing high-fidelity animation, otherwise only achievable with offline rendering, for interactive fMRI-based experiments. Virtual characters are well established within the film, game and research worlds, yet much remains to be learned about which design, stylistic or behavioural factors combine to make a believable character. The definition of believability depends on context. When designing and implementing characters for entertainment, the concern is making believable characters that the audience will engage with. When using virtual characters in experiments, the aim is to create characters and synthetic spaces that people respond to in a similar manner to their real-world counterparts. Research has shown that users show empathy for virtual characters. However, uncanny valley effects (i.e. dips in user impressions) can arise: expectations of behavioural fidelity increase alongside increases in visual fidelity, and vice versa. Characters used within virtual environments often tend to be of fairly low fidelity due to technological constraints, including the need to render in real time (Garau et al. 2003). This problem is addressed here by using non-linear playback and compositing of pre-rendered high-fidelity sequences.

Previous research into evaluating whether virtual characters placed in immersive collaborative environments fulfil their role has been limited to acquiring ratings of pleasantness or fidelity impressions through self-report after the experience has occurred (Vinayagamoorthy 2005). It is challenging to derive neuroscientific correlates of the subjective feelings and emotions that arise when interacting with virtual characters. The ultimate goal of this framework is to explore whether natural and artificial characters of varied fidelity engage common perceptual or neuroscientific mechanisms. Such neuroscientific input is non-obtrusive and is acquired at the same time as the experience occurs. However, providing synthetic stimuli for display in fMRI settings is not straightforward because of the infrastructural and technical demands; until now such investigations have used still pictures or non-stereoscopic 3D. To address these requirements, the framework puts forward a sophisticated real-time compositing system which takes input from a variety of media, including graphics, video or a combination of the two. The novel computational framework also enables stereo viewing while immersed in an fMRI scanner, allowing for user interactivity throughout. The system presented here is a real-time non-linear video compositing engine designed to play back and composite video on demand. The aim is to produce the effect of real-time interactive 3D animation where, in reality, long render cycles are required to generate high-quality images.

The framework is written in Cocoa, the native application program environment for the OS X operating system, and relies on the Quartz graphics subsystem. Whereas the logic is written in code, all of the compositing has been developed with Quartz Composer: the graphical environment...
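The core of the non-linear playback idea is that pre-rendered, high-fidelity clips are indexed by interaction state and swapped in on demand, so the participant perceives interactive animation even though each clip took long offline render cycles to produce. The sketch below illustrates that selection logic only; the state names, clip paths and PlaybackEngine interface are assumptions for illustration, not the Cocoa/Quartz Composer implementation described in the text.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    path: str            # pre-rendered high-fidelity sequence on disk
    loop: bool = False    # idle-style clips loop until the state changes

# Map interaction states (e.g. derived from participant input in the scanner)
# to the pre-rendered sequence the compositor should play next.
CLIP_TABLE = {
    "idle":    Clip("renders/character_idle.mov", loop=True),
    "greet":   Clip("renders/character_greet.mov"),
    "listen":  Clip("renders/character_listen.mov", loop=True),
    "respond": Clip("renders/character_respond.mov"),
}

class PlaybackEngine:
    """Chooses which pre-rendered clip to hand to the compositor."""

    def __init__(self, table):
        self.table = table
        self.state = "idle"

    def on_event(self, event: str) -> None:
        # A real system would validate transitions; here any known state is accepted.
        if event in self.table:
            self.state = event

    def current_clip(self) -> Clip:
        return self.table[self.state]

engine = PlaybackEngine(CLIP_TABLE)
engine.on_event("greet")
print(engine.current_clip().path)   # renders/character_greet.mov
```

In the framework described above, the equivalent branching and compositing would be driven by the Quartz-based engine rather than application code like this.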