Background
Freezing of gait (FoG) is one of the most disturbing and least understood symptoms of Parkinson disease (PD). Although most existing assistive systems assume accurate detection of FoG episodes, detection itself remains an open problem. A distinctive property of FoG is its dependence on the patient's context, such as the current location or activity, so knowing that context may improve detection. One of the main technical challenges to be solved before contextual information can be used for FoG detection is accurate estimation of the patient's position and orientation relative to key elements of his or her indoor environment.

Objective
The objectives of this paper are to (1) present the concept of a monitoring system, based on wearable and ambient sensors, designed to detect FoG using the spatial context of the user; (2) establish a set of requirements for applying position and orientation tracking to FoG detection; (3) evaluate the accuracy of position estimation for the tracking system; and (4) evaluate two different methods for human orientation estimation.

Methods
We developed a prototype system to localize humans and track their orientation, an important prerequisite for a context-based FoG monitoring system. To set up the system for experiments with real PD patients, the accuracy of position and orientation tracking was assessed under laboratory conditions in 12 participants. To collect the data, the participants were asked to wear a smartphone around the waist, with and without known orientation, while walking along a predefined path in a marked area captured by two Kinect cameras with non-overlapping fields of view.

Results
We used the root mean square error (RMSE) as the main performance measure. The vision-based position tracking algorithm achieved an RMSE of 0.16 m in position estimation for upright standing people. The experimental results for the proposed human orientation estimation methods demonstrated adaptivity and robustness to changes in the smartphone attachment position when vision and inertial information were fused.

Conclusions
The system achieves satisfactory accuracy in indoor position tracking for use in FoG detection with spatial context. The combination of inertial and vision information has the potential to estimate patient heading correctly even when the wearable inertial sensor is placed in an a priori unknown position.
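The RMSE used as the performance measure above is computed over paired ground-truth and estimated positions. A minimal sketch of that computation (the coordinate values below are illustrative, not data from the study):

```python
import numpy as np

# Hypothetical 2D ground-truth positions (m), e.g. from a reference
# measurement, and the corresponding estimates from the tracker.
truth = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.3]])
estimate = np.array([[0.1, 0.05], [0.45, 0.2], [1.1, 0.25]])

# Euclidean position error per sample, then RMSE over all samples.
errors = np.linalg.norm(estimate - truth, axis=1)
rmse = np.sqrt(np.mean(errors ** 2))
print(f"RMSE = {rmse:.3f} m")
```

A tracker meeting the paper's reported accuracy would yield values on the order of 0.16 m on such paired samples.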
TrackLab is a new tool for the measurement, recognition, and analysis of spatial behavior. Although a number of software packages can, for instance, acquire tracking data or analyze it, no single system currently supports the entire workflow. TrackLab supports import from a wide variety of input formats, both real-time and offline. Furthermore, a plug-in module is being developed that produces tracking data for a group of up to ten people from video images alone (that is, with no need for tags or similar markers). Once the location data is in the TrackLab software, it can be visualized in a variety of ways, and a statistical analysis report is generated. The analysis variables are based on established parameters for quantifying behavior from location, and the analysis helps you gain insight into the spatial behavior of, for example, customers. For real-time applications of the system, the analysis variables can be used to control external software, for example to present information on a display when a person has followed a particular path through the shop.
Acknowledgement
This master thesis project is a cooperation between Utrecht University and Noldus InnovationWorks. Noldus InnovationWorks is the research and innovation laboratory of Noldus Information Technology, where novel technologies, concepts, and product prototypes for behavioral research on humans and animals are researched, developed, field-tested, and commercialized. The project was carried out at Noldus' headquarters in Wageningen, The Netherlands. I would like to thank Dr. Nico van der Aa for his insightful and patient guidance. I would also like to thank Dr. Robby Tan for his supervision and suggestions throughout the whole thesis project. Elsbeth van Dam shared her valuable work and experience on animal action classification. Many other people have taken an interest in this project and enthusiastically joined the discussions.

Abstract
In this thesis we apply Hidden Conditional Random Fields (HCRF) to action recognition. HCRF is a classification method that models the structure among local observations. In our system, an image is modelled as a set of hidden part labels conditioned on their local features. For each action class, the probability of an assignment of part labels to local patch features is modelled by a Conditional Random Field (CRF). These class-conditional CRFs are combined into a unified HCRF framework, which treats the assignment of part labels as hidden variables. The model also combines the local patch features with the global feature of an image under the HCRF framework. The model parameters are trained with a maximum likelihood criterion. We have also evaluated a baseline model of HCRF, called the root model, which uses only the global feature and does not include the hidden part labels. The root model is likewise trained with the maximum likelihood criterion.

An extension of HCRF, the Max-Margin Hidden Conditional Random Field (MMHCRF), has also been applied to action recognition. MMHCRF extends HCRF by training with a max-margin criterion: it sets the model parameters such that the margin between the score of the correct action label and the scores of the other labels is maximized. We have also evaluated a baseline model of MMHCRF. Similar to the root model, this baseline uses only the global feature, but it trains the model parameters with the max-margin criterion.

Based on HCRF and the root model, we have proposed a Part Labels method. This method infers the hidden part labels of each image using the model parameters trained by HCRF, uses these part labels as a new set of local features, and combines them with the global feature. It trains these features in the same way as the root model.

We have implemented and evaluated these five models on the Weizmann dataset, a human action dataset, and an animal behaviour dataset, called the Noldus ABR dataset. Our experiments show that modelling spatial structure in 2D space alone is not sufficient for action recognition. It has been demonstrated that the classification results of the simpler models such as the root...
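The HCRF and MMHCRF training criteria described above can be sketched in generic notation (a sketch only; here $x$ denotes the image features, $y$ the action label, $h$ the hidden part labels, $\theta$ the model parameters, and $\Psi$ a potential function combining local and global features; these symbols are illustrative and not necessarily the thesis's own notation):

```latex
% Class posterior with hidden part labels marginalized out (HCRF)
P(y \mid x; \theta)
  = \frac{\sum_{h} \exp \Psi(y, h, x; \theta)}
         {\sum_{y'} \sum_{h'} \exp \Psi(y', h', x; \theta)}

% Maximum likelihood training over examples (x_i, y_i)
\theta^{*} = \arg\max_{\theta} \sum_{i} \log P(y_i \mid x_i; \theta)
```

For MMHCRF, training instead enlarges the margin between the score of the correct label and the best competing label, roughly $\max_{h} \Psi(y_i, h, x_i; \theta) - \max_{y' \neq y_i,\, h} \Psi(y', h, x_i; \theta)$, which corresponds to the "maximum margin criteria" mentioned in the abstract.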