Background
Hospital cabins are an integral part of the healthcare system. Most patients admitted to hospital cabins are bedridden and immobile. Although various systems exist to aid such patients, most focus on specific tasks such as calling for emergencies or monitoring patient health while ignoring the patients' physical limitations. The patient interaction systems that have been developed offer only a single input option, such as touch, hand gesture, or voice, which may not be usable by bedridden and immobile patients.

Methods
We first reviewed the existing literature on healthcare and interaction systems developed for bedridden and immobile patients. A requirements elicitation study was then conducted through semi-structured interviews, and design goals were established to address the identified requirements. Based on these goals, and using computer vision and deep learning technologies, a hospital cabin control system with multimodal interaction was designed and developed for hospital-admitted, bedridden, and immobile patients. Finally, the system was evaluated in an experiment replicated with 12 hospital-admitted patients to measure its effectiveness, usability, and efficiency.

Results
First, a set of user requirements was identified for hospital-admitted patients and healthcare practitioners. Second, a hospital cabin control system supporting multimodal interaction was designed and developed for bedridden and immobile hospital-admitted patients, comprising (a) hand gesture based interaction, in which the hand moves a cursor and a hand gesture performs a click; (b) nose-teeth based interaction, in which the nose moves a cursor and the teeth perform a click; and (c) voice based interaction, in which specific voice commands execute tasks. Finally, the evaluation showed that the system is efficient, effective, and usable for the target users, with a 100% success rate and reasonable numbers of attempts and task completion times.

Conclusion
The resultant system incorporates deep learning to facilitate multimodal interaction and enhance accessibility. The developed system, together with its evaluation results and the identified requirements, provides a promising solution for the prevailing crisis in the healthcare sector.

Trial Registration
Not applicable.
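The abstract does not describe the system's internal architecture. A minimal sketch of one plausible design, assuming the three modalities (hand, nose-teeth, voice) are unified behind a single command interface so the cabin-control logic stays modality-agnostic; all names here, including CabinController and the Command values, are hypothetical illustrations and are not taken from the paper:

```python
from enum import Enum, auto

class Command(Enum):
    """Unified commands that all three modalities reduce to."""
    MOVE_CURSOR = auto()   # from hand tracking or nose tracking
    CLICK = auto()         # from a hand gesture or teeth detection
    RUN_TASK = auto()      # from a voice command, e.g. "call nurse"

class CabinController:
    """Hypothetical dispatcher: each recognizer (hand, nose-teeth, voice)
    emits Commands, so downstream logic never cares which modality fired."""

    def __init__(self, screen_w, screen_h):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.cursor = (screen_w // 2, screen_h // 2)

    def handle(self, cmd, payload=None):
        if cmd is Command.MOVE_CURSOR:
            nx, ny = payload  # normalized [0, 1] coords from a tracker
            self.cursor = (int(nx * (self.screen_w - 1)),
                           int(ny * (self.screen_h - 1)))
        elif cmd is Command.CLICK:
            print(f"click at {self.cursor}")
        elif cmd is Command.RUN_TASK:
            print(f"executing voice task: {payload!r}")

# Usage: any recognizer feeds the same controller.
ctrl = CabinController(1920, 1080)
ctrl.handle(Command.MOVE_CURSOR, (0.25, 0.4))
ctrl.handle(Command.CLICK)
ctrl.handle(Command.RUN_TASK, "call nurse")
```

One benefit of this decoupling is that a modality can be swapped or disabled per patient (for example, voice only, for a patient who cannot move at all) without touching the task-execution code.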
Human-mobile interaction aims to facilitate interaction with smartphone devices. The conventional way to interact with mobile devices is through manual input, and most applications assume that the end user has full control over their hand movements. This assumption excludes people who are unable to use their hands or have suffered limb damage. In this paper, we propose a nose and teeth based interaction system that allows users to control their mobile devices completely hands-free. The proposed system uses the front-facing camera of the smartphone to track the position of the nose for cursor control on the smartphone screen, and detects the teeth to perform touch-screen events such as tap, scroll, long press, and drag. The Viola-Jones algorithm is used to detect the face and teeth based on Haar features. After the face is detected, the nose position is calculated and tracked continuously using the Lucas-Kanade method for optical flow estimation. All touch-screen events have been implemented in the system so that the user can execute every smartphone operation. To evaluate the performance and the effect of device type on execution time, the proposed system was installed on 3 smartphone devices, and 7 trials per device were performed by 3 different able-bodied elderly persons. The results show a significant success rate for the detection of the nose and teeth and for the execution of the operations. The execution time of each operation varies slightly, by 0.72 s on average, owing to the configuration of the smartphones.

INDEX TERMS HCI, human-mobile interaction, gesture operations, disabled user, accessibility, mobile device, smartphone.
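The detect-then-track pipeline named in this abstract (Viola-Jones/Haar detection followed by Lucas-Kanade optical flow) can be sketched with OpenCV. This is a minimal illustration, not the authors' implementation: the paper runs on a smartphone's front camera and computes the nose position from the detected face by an unspecified method, whereas here a webcam is used and the nose seed point is crudely assumed to sit near the centre of the face bounding box.

```python
import cv2
import numpy as np

# Stock Haar cascade shipped with opencv-python (Viola-Jones detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           30, 0.01))

cap = cv2.VideoCapture(0)   # webcam stands in for the front-facing camera
prev_gray = None
nose_point = None           # (1, 1, 2) float32 point tracked by LK flow

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if nose_point is None:
        # Detection phase: find the face with Viola-Jones.
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            # Assumption: seed the tracker near the middle of the face box,
            # roughly where the nose sits; the paper's method is unspecified.
            nose_point = np.array([[[x + w / 2, y + h * 0.55]]],
                                  dtype=np.float32)
    else:
        # Tracking phase: follow the nose point with Lucas-Kanade flow.
        new_point, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, nose_point, None, **lk_params)
        if status[0][0] == 1:
            nose_point = new_point
            cx, cy = nose_point.ravel()
            cv2.circle(frame, (int(cx), int(cy)), 5, (0, 255, 0), -1)
        else:
            nose_point = None   # tracking lost: fall back to re-detection

    prev_gray = gray
    cv2.imshow("nose tracker", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

The division of labour mirrors the abstract: the comparatively expensive Haar detection runs only when no point is being tracked, while the cheap per-frame optical-flow update sustains real-time cursor control between detections.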