Summary

Human motion capture is often used in rehabilitation clinics for diagnostics and for monitoring the effects of treatment. Traditionally, camera-based systems are used. However, with these systems the measurements are restricted to a lab equipped with expensive cameras. Motion capture outside the lab, using inertial sensors, is becoming increasingly popular as a way to obtain insight into daily-life activity patterns.

There are two main disadvantages of inertial sensor systems. First, preparing the measurement system is often a complex and time-consuming task. Moreover, it is prone to errors, because each sensor has to be attached to a predefined body segment. Second, inertial sensors cannot measure relative segment positions directly. Relative foot positions in particular are important to estimate: together with the center of mass, they can be used to assess the balance of a subject.

From these two main disadvantages, the goal of this thesis was derived: to contribute to the development of a click-on-and-play human motion capture system. This should be a system in which the user attaches (clicks) the sensors to the body segments and can start measuring (play) immediately. To this end, the following sub-goals were defined. The first goal is to develop an algorithm for the automatic identification of the body segments to which inertial sensors are attached. The second goal is to develop a new sensor system, with a minimal number of sensors, for the estimation of relative foot positions and orientations and for the assessment of balance during gait.

The first goal is addressed in chapters 2 and 3. Chapter 2 presents a method for the automatic identification of the body segments on which inertial sensors are positioned. This identification is performed on the basis of a walking trial, assuming a known sensor configuration. Using this method it is possible to distinguish left and right segments.
Cross-correlations of signals from different measurement units were used as features, and these features were ranked. A decision tree was used to classify the body segments. When using a full-body configuration (17 different sensor locations), 97.5% of the sensors were correctly classified. Chapter 3 presents a method that identifies the location of a sensor without making assumptions about the applied sensor configuration or the activity the user is performing. For a full-body configuration, 83.3% of the sensor locations were correctly classified. Subsequently, for each sensor location a model was developed for activity classification, resulting in a maximum accuracy of 91.7%.

The second goal is addressed in chapters 4, 5 and 6. In chapter 4, ultrasound time of flight is used to estimate the distance between the feet. This system was validated against an optical reference and showed an average error in distance estimation of 7.0 mm. In chapter 5, 3D relative foot positions are estimated by fusing ultrasound and inertial sensor data, measured on the shoes, in an extended Kalman filter. Step lengths and step widths were ca...
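As a rough illustration of the cross-correlation features used for segment identification in chapter 2: signals from contralateral segments (e.g. left and right legs) oscillate roughly in anti-phase during walking, which a normalized cross-correlation exposes as a strongly negative peak. This is a minimal sketch with invented signals and function names, not the thesis's actual feature set or ranking.

```python
import numpy as np

def xcorr_peak(sig_a, sig_b):
    """Normalized cross-correlation of two sensor signals.

    Returns the peak-magnitude correlation value and the lag (in samples)
    at which it occurs. Illustrative stand-in for the cross-correlation
    features fed to the decision tree; names are hypothetical.
    """
    a = (sig_a - sig_a.mean()) / (sig_a.std() * len(sig_a))
    b = (sig_b - sig_b.mean()) / sig_b.std()
    xcorr = np.correlate(a, b, mode="full")
    idx = int(np.argmax(np.abs(xcorr)))
    return xcorr[idx], idx - (len(sig_a) - 1)

# Toy walking-like signals: the right-leg signal is half a gait cycle
# out of phase with the left-leg signal, so the correlation peak is
# strongly negative at (near) zero lag.
t = np.linspace(0.0, 10.0, 1000)
left = np.sin(2.0 * np.pi * 1.0 * t)           # 1 Hz "stride" oscillation
right = np.sin(2.0 * np.pi * 1.0 * t + np.pi)  # anti-phase contralateral leg
peak, lag = xcorr_peak(left, right)            # peak close to -1, lag near 0
```

Features like this peak value and its lag, computed for many sensor pairs, are the kind of input a decision-tree classifier can rank and split on.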
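The ultrasound range estimation of chapter 4 rests on a simple relation: distance is the speed of sound times the measured time of flight. A minimal sketch, assuming a one-way measurement and the standard linear temperature approximation for the speed of sound in air (the thesis's actual hardware and calibration may differ):

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s) at temperature temp_c (deg C),
    using the common linear approximation c = 331.3 + 0.606 * T."""
    return 331.3 + 0.606 * temp_c

def distance_from_tof(tof_s, temp_c=20.0):
    """Distance (m) between ultrasound transmitter and receiver, given a
    one-way time of flight in seconds. Hypothetical helper for illustration."""
    return speed_of_sound(temp_c) * tof_s

# A 1 ms one-way time of flight at 20 deg C corresponds to about 0.343 m,
# i.e. a plausible inter-foot distance during gait.
d = distance_from_tof(0.001, temp_c=20.0)
```

In a fused system such as the one in chapter 5, range estimates of this kind serve as measurement updates in the extended Kalman filter alongside the inertial data.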