Dementia directly affects the quality of life of a person living with this chronic illness. Caregivers (carers) of people with dementia (PwD) provide them critical support but are themselves subject to negative health outcomes because of burden and stress. Mobile health (mHealth) interventions have become a fast-growing assistive technology (AT) in the therapeutic treatment of individuals with chronic illness. The purpose of this comprehensive study is to identify, appraise, and synthesize the existing evidence on the use of mHealth applications (apps) as a healthcare resource for PwD and their caregivers. A review of peer-reviewed, full-text literature was undertaken across five electronic databases for articles published during the last five years (2014 to 2018). Of the 6195 articles retrieved, 17 met the inclusion and exclusion criteria. The included studies fall into five categories: (1) cognitive training and daily living, (2) screening, (3) health and safety monitoring, (4) leisure and socialization, and (5) navigation. Furthermore, the two most popular commercial app stores, the Google Play Store and the Apple App Store, were searched for mHealth-based dementia apps for PwD and their caregivers. The initial search generated 356 apps, with 35 meeting the defined inclusion and exclusion criteria. After shortlisting, we observed that these existing apps generally address dementia-specific aspects that overlap with the categories identified in the research articles. The study concludes that, despite the limited available research, mHealth apps appear to be a feasible AT intervention for PwD and their carers, with the potential to provide a range of resources and strategies to this community.
Abstract—Vehicle classification has emerged as a significant field of study because of its importance in a variety of applications such as surveillance, security systems, traffic congestion avoidance, and accident prevention. Numerous algorithms have been implemented for classifying vehicles, each following a different procedure for detecting vehicles in video. By evaluating some of the most commonly used techniques, we highlight the most beneficial methodology for classifying vehicles. In this paper we describe the working of several video-based vehicle classification algorithms and compare them on the basis of different performance metrics, such as the classifier used, the classification methodology or principle, and the vehicle detection ratio. After comparing these parameters, we conclude that the Hybrid Dynamic Bayesian Network (HDBN) classification algorithm outperforms the other algorithms owing to its ability to estimate the simplest features of vehicles from different videos. HDBN detects vehicles through the key stages of feature extraction, selection, and classification. It extracts rear-view information about vehicles rather than other information such as the distance between the wheels and wheel height.
Mobile technology is growing incredibly fast, yet little of this development and improvement has addressed the needs of Deaf-mute people. Existing mobile applications use sign language as the only option for communicating with them. Before this article, no application (app) that uses the disrupted speech of Deaf-mute people for the purpose of social connectivity existed in the mobile market. The proposed application, named vocalizer to mute (V2M), uses automatic speech recognition (ASR) to recognize the speech of a Deaf-mute person and convert it into a form of speech recognizable to a hearing person. In this work, mel-frequency cepstral coefficient (MFCC) features are extracted from each training and testing sample of Deaf-mute speech. The hidden Markov model toolkit (HTK) is used for the speech recognition process. The application is also integrated with a 3D avatar that provides visualization support: the avatar performs sign language on behalf of a person with no awareness of Deaf-mute culture. The prototype application was piloted in a social welfare institute for Deaf-mute children; the participants were 15 children aged between 7 and 13 years. The experimental results show an accuracy of 97.9% for the proposed application. Quantitative and qualitative analysis of the results also revealed that face-to-face socialization of Deaf-mute people improved with the intervention of mobile technology. The participants further suggested that the proposed mobile application can act as a voice for them, allowing them to socialize with friends and family.
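The MFCC front end mentioned in this abstract follows a standard sequence: frame the waveform, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. The sketch below illustrates that pipeline in plain NumPy; the parameter values (25 ms frames, 26 filters, 13 coefficients) are common textbook defaults, not values reported by the paper, and a real system such as HTK adds refinements (pre-emphasis, liftering, delta features) omitted here.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13,
         frame_len=0.025, frame_step=0.010):
    """Minimal MFCC sketch: framing -> power spectrum -> mel filterbank -> log -> DCT."""
    # Slice the signal into overlapping, Hamming-windowed frames
    flen, fstep = int(sr * frame_len), int(sr * frame_step)
    n_frames = 1 + max(0, (len(signal) - flen) // fstep)
    frames = np.stack([signal[i * fstep: i * fstep + flen] for i in range(n_frames)])
    frames = frames * np.hamming(flen)

    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular filters spaced evenly on the mel scale
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)

    # Log mel energies, then DCT-II to decorrelate -> cepstral coefficients
    feat = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return feat @ dct.T  # shape: (n_frames, n_ceps)

# Example on a synthetic one-second tone (a stand-in for a real speech sample)
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (98, 13)
```

Each row of the result is a 13-dimensional feature vector for one 25 ms frame; in an HTK-style ASR system these per-frame vectors are what the hidden Markov models are trained on.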
The exponential growth of videos on the YouTube video-sharing platform has attracted billions of viewers, the majority of whom belong to a young demographic. Malicious uploaders also see this platform as an opportunity to push inappropriate visual content into children-oriented videos such as animated cartoons. Automatic filtering of inappropriate video content should therefore be integrated into social media platforms. In this study, various methods and techniques are explored for the detection and classification of video content inappropriate for children. The proposed framework employs an ImageNet-pretrained convolutional neural network (CNN), EfficientNet-B7, to extract video descriptors, which are then fed to a bidirectional long short-term memory (BiLSTM) network to learn effective video representations and perform multiclass video classification. An attention mechanism is also integrated after the BiLSTM network to apply an attention probability distribution. All models are evaluated on a manually annotated dataset of 111,156 cartoon clips from YouTube videos, each labeled as one of three categories: safe, fantasy violence, or sexual nudity. The experimental results show that the EfficientNet-BiLSTM framework (accuracy = 95.66%) performs better than the attention-based EfficientNet-BiLSTM (accuracy = 95.30%), and that traditional machine learning classifiers perform relatively poorly compared with deep learning classifiers. Overall, the combination of EfficientNet and BiLSTM with 128 hidden units yielded state-of-the-art performance in terms of F1 score (0.9267). Moreover, a performance comparison against existing state-of-the-art approaches confirmed the superiority of the proposed framework (recall = 92.22%) in detecting and classifying child-inappropriate video content.
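The attention step described above turns a sequence of per-frame descriptors into a single clip-level representation by scoring each frame, normalizing the scores into a probability distribution with a softmax, and taking the weighted sum. The NumPy sketch below illustrates only that pooling idea under simplifying assumptions: the scoring vector is random rather than learned, the pooling is applied directly to CNN descriptors rather than to BiLSTM outputs, and the 2560 dimension matches EfficientNet-B7's final feature size but is otherwise illustrative.

```python
import numpy as np

def attention_pool(frame_features, seed=0):
    """Soft attention over per-frame descriptors: score each frame, softmax the
    scores into a probability distribution, and return the weighted sum."""
    rng = np.random.default_rng(seed)
    d = frame_features.shape[1]
    w = rng.normal(scale=0.1, size=d)           # scoring vector (learned in practice)
    scores = frame_features @ w                 # one relevance score per frame
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()                    # attention probability distribution
    return weights @ frame_features, weights    # clip-level vector + frame weights

# 32 frames of 2560-dimensional descriptors (EfficientNet-B7's feature size)
frames = np.random.default_rng(1).normal(size=(32, 2560))
clip_vec, attn = attention_pool(frames)
print(clip_vec.shape, round(attn.sum(), 6))    # (2560,) 1.0
```

The resulting clip-level vector would then go to a softmax classification head over the three labels; the frame weights also offer some interpretability, indicating which frames drove the decision.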