For physical activity recognition, smartphone sensors such as the accelerometer and the gyroscope are used in many research studies. The accelerometer, in particular, has been studied extensively. A few recent studies have combined a gyroscope and a magnetometer (in a supporting role) with an accelerometer (in a lead role) with the aim of improving recognition performance. How and when the various motion sensors available on a smartphone are best used for recognition, either individually or in combination, is yet to be explored. To investigate this question, in this paper we explore how these motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment in which ten participants performed seven different activities while carrying smartphones at different body positions. Based on the analysis of this data set, we show that each of these sensors, except the magnetometer, is capable of taking the lead role individually, depending on the type of activity being recognized, the body position, the data features used and the classification method employed (personalized or generalized). We also show that combining them improves the overall recognition performance only when their individual performances are not already high, so that there is room for improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.
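As an illustration of the kind of pipeline such studies describe, the sketch below extracts simple time-domain features (mean and standard deviation per axis) from windowed accelerometer and gyroscope streams and fuses them at the feature level before classification. The placeholder data, window length, feature set and random-forest classifier are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of a motion-sensor activity recognition pipeline:
# windowed features from two sensors, fused and fed to a classifier.
# All data and parameter choices here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, window, step):
    """Slice a (n_samples, 3) sensor stream into windows and compute
    simple time-domain features (mean and std per axis)."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        feats.append(np.hstack([seg.mean(axis=0), seg.std(axis=0)]))
    return np.array(feats)

# Hypothetical raw data: tri-axial accelerometer and gyroscope streams.
acc = np.random.randn(5000, 3)
gyr = np.random.randn(5000, 3)

# Fuse the two sensors at the feature level and train a classifier.
X = np.hstack([window_features(acc, 100, 100), window_features(gyr, 100, 100)])
y = np.random.randint(0, 7, size=len(X))  # placeholder labels for 7 activities
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```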
Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. Recently, however, many researchers have started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. There is now a trend, however, towards implementing activity recognition systems on the devices themselves in an online manner, since modern mobile phones have become more powerful in terms of available resources such as CPU, memory and battery. Research on offline activity recognition has been reviewed in detail in several earlier studies, but work on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones using only their on-board sensors. We discuss various aspects of these studies, examine their limitations, and present recommendations for future research.
The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used; however, these two positions have mainly been used in isolation. To exploit richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both the wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. A further problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows, unlike repetitive activities such as walking, jogging and biking. To address this, we evaluate the effect of seven window sizes (2–30 s) on thirteen activities and show how increasing the window size affects these activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.
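The window-size evaluation can be pictured as follows: segment the stream at each candidate window length, train a classifier, and compare scores across lengths. The sampling rate, features, classifier and placeholder data in the sketch below are assumptions made for illustration, not the paper's protocol.

```python
# Hedged sketch: evaluating several segmentation window sizes (2-30 s),
# in the spirit of the abstract. Sampling rate, data and scoring are assumed.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

FS = 50  # assumed sampling rate in Hz

def segment(signal, labels_per_sample, win_s):
    """Cut a (n_samples, 3) stream into non-overlapping windows of win_s
    seconds; label each window by majority vote over its samples."""
    win = int(win_s * FS)
    X, y = [], []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        X.append(np.hstack([seg.mean(axis=0), seg.std(axis=0)]))
        y.append(np.bincount(labels_per_sample[start:start + win]).argmax())
    return np.array(X), np.array(y)

acc = np.random.randn(60000, 3)                # placeholder 20-minute stream
lab = np.random.randint(0, 13, size=len(acc))  # 13 activities, per sample

for win_s in (2, 5, 10, 15, 20, 25, 30):       # candidate window sizes (s)
    X, y = segment(acc, lab, win_s)
    score = cross_val_score(DecisionTreeClassifier(), X, y, cv=5).mean()
    print(f"{win_s:>2} s window: accuracy {score:.2f}")
```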
In this paper, we describe and validate the EquiMoves system, which aims to support equine veterinarians in assessing lameness and gait performance in horses. The system captures horse motion from up to eight synchronized wireless inertial measurement units, can be used in various equine gait modes, and analyzes both upper-body and limb movements. The validation against an optical motion capture system is based on a Bland–Altman analysis that illustrates the agreement between the two systems. The sagittal kinematic results (protraction, retraction, and sagittal range of motion) show limits of agreement of ±2.3 degrees and an absolute bias of 0.3 degrees in the worst case. The coronal kinematic results (adduction, abduction, and coronal range of motion) show limits of agreement of −8.8 and 8.1 degrees, and an absolute bias of 0.4 degrees in the worst case. The poorer coronal kinematic results are most likely caused by the optical system setup (depth-perception difficulty and suboptimal marker placement). The upper-body symmetry results show no significant bias in the agreement between the two systems; in most cases, the agreement is within ±5 mm. On a trial-level basis, the limits of agreement for the withers and sacrum are within ±2 mm, meaning that the system can properly quantify motion asymmetry. Overall, the bias for all symmetry-related results is less than 1 mm, which is important for reproducibility and further comparison to other systems.
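The core of a Bland–Altman analysis is simple arithmetic on paired measurements: the bias is the mean of the differences, and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations of the differences. A minimal sketch with placeholder angle data (the numbers below are invented for illustration, not from the study):

```python
# Bland-Altman agreement arithmetic on paired measurements from two systems.
# The paired angle values here are placeholders; only the method is shown.
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical protraction angles (degrees) from the two systems.
imu = np.array([22.1, 19.8, 24.3, 21.0, 23.5])
optical = np.array([22.4, 19.5, 24.0, 21.6, 23.1])
bias, lo, hi = bland_altman(imu, optical)
print(f"bias {bias:+.2f} deg, limits of agreement [{lo:+.2f}, {hi:+.2f}] deg")
```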
Recently, there has been growing interest in the research community in using wrist-worn devices, such as smartwatches, for human activity recognition, since these devices are equipped with various sensors, such as an accelerometer and a gyroscope. Similarly, smartphones are already being used for activity recognition. In this paper, we study the fusion of a wrist-worn device (smartwatch) and a smartphone for human activity recognition. We evaluate these two devices for their strengths and weaknesses in recognizing various daily physical activities. Using three classifiers to recognize 13 different activities, we show that complex activities, such as smoking, cannot be recognized with a smartphone in the pocket position alone, whereas the combination of a smartwatch and a smartphone recognizes such activities with reasonable accuracy. The recognition of such complex activities can enable well-being applications for detecting bad habits, such as smoking, missing a meal, and drinking too much coffee. We also show how to fuse information from these devices in an energy-efficient way by using low sampling rates. We make our dataset publicly available in order to make our work reproducible.
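A rough sketch of how such a fusion might look: downsample each device's stream to emulate a lower, energy-saving sampling rate, compute per-window features on each device, and concatenate them before classification. The rates, features, classifier and data below are assumptions, not the paper's implementation.

```python
# Hedged sketch of watch+phone fusion at the feature level, with downsampling
# to emulate the low sampling rates the abstract mentions. All values assumed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def downsample(signal, factor):
    """Keep every factor-th sample, emulating a lower sampling rate."""
    return signal[::factor]

def features(signal, window):
    """Per-window mean and std per axis over non-overlapping windows."""
    out = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        out.append(np.hstack([seg.mean(axis=0), seg.std(axis=0)]))
    return np.array(out)

watch = np.random.randn(10000, 3)   # placeholder wrist accelerometer, 50 Hz
phone = np.random.randn(10000, 3)   # placeholder pocket accelerometer, 50 Hz

# Emulate a 10 Hz rate on both devices, then fuse their window features.
w, p = downsample(watch, 5), downsample(phone, 5)
X = np.hstack([features(w, 20), features(p, 20)])   # 2 s windows at 10 Hz
y = np.random.randint(0, 13, size=len(X))           # 13 activity labels
clf = KNeighborsClassifier().fit(X, y)
```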