Many tactile sensors can readily detect physical contact with an object, but tactile recognition of the type of object remains challenging. In this paper, we provide evidence that data-driven thermal tactile sensing can be used to recognize contact with people and objects in real-world settings. We created a portable handheld device with three tactile sensing modalities: a heat-transfer sensor that is actively heated, a small thermally-isolated temperature sensor, and a force sensor to detect the onset of contact. Using this device, we collected data from contact with the arms of 10 people (3 locations on the right arm) and contact with 80 objects relevant to robotic assistance (8 object types in 10 residential bathrooms). We then used support vector machines (SVMs) to perform binary classifications relevant to assistive robots. When classifying contact as person vs. object, classifiers that only used the temperature sensor performed best (average accuracy of 98.75% for 3.65s of contact, 93.13% for 1.0s, and 82.13% for 0.5s). When classifying contact into two task-relevant object types (e.g., towel vs. towel rack), classifiers that used the heat-transfer sensor together with the temperature sensor performed best. Performance was good when generalizing to new contact locations in the same environment (average accuracy of 92.14% for 3.65s of contact, 91.43% for 1.0s, and 84.29% for 0.5s), but weaker when generalizing to new environments (average accuracy of 84% for 3.65s of contact, 71% for 1.0s, and 65% for 0.5s).
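The person-vs.-object classification described above can be sketched as a binary SVM over simple features of a temperature-sensor window. This is an illustrative reconstruction, not the authors' code: the feature set (mean, slope, range), the synthetic data, and all names are assumptions.

```python
# Minimal sketch (assumed setup, not the authors' pipeline): binary SVM
# classification of contact type from temperature-sensor window features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(window):
    """Summarize a 1-D temperature window: mean, slope, and range."""
    t = np.arange(len(window))
    slope = np.polyfit(t, window, 1)[0]
    return [window.mean(), slope, window.max() - window.min()]

# Synthetic stand-ins: contact with a person warms the sensor over the
# contact window, while an object at room temperature cools it slightly.
person = [features(30 + 2 * np.linspace(0, 1, 50) + rng.normal(0, 0.1, 50))
          for _ in range(40)]
obj = [features(30 - 1 * np.linspace(0, 1, 50) + rng.normal(0, 0.1, 50))
       for _ in range(40)]
X = np.array(person + obj)
y = np.array([1] * 40 + [0] * 40)  # 1 = person, 0 = object

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
acc = clf.score(X, y)  # near-perfect on this cleanly separable toy data
```

Shorter contact windows would shrink the separation between the two feature distributions, which is consistent with the accuracy dropping from 98.75% at 3.65s to 82.13% at 0.5s.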
Robots have the potential to assist people in bed, such as in healthcare settings, yet bedding materials like sheets and blankets can make observation of the human body difficult for robots. A pressure-sensing mat on a bed can provide pressure images that are relatively insensitive to bedding materials. However, prior work on estimating human pose from pressure images has been restricted to 2D pose estimates and flat beds. In this work, we present two convolutional neural networks to estimate the 3D joint positions of a person in a configurable bed from a single pressure image. The first network directly outputs 3D joint positions, while the second outputs a kinematic model that includes estimated joint angles and limb lengths. We evaluated our networks on data from 17 human participants with two bed configurations: supine and seated. Our networks achieved a mean joint position error of 77 mm when tested with data from people outside the training set, outperforming several baselines. We also present a simple mechanical model that provides insight into ambiguity associated with limbs raised off of the pressure mat, and demonstrate that Monte Carlo dropout can be used to estimate pose confidence in these situations. Finally, we provide a demonstration in which a mobile manipulator uses our network's estimated kinematic model to reach a location on a person's body in spite of the person being seated in a bed and covered by a blanket.
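The Monte Carlo dropout confidence estimate mentioned above can be sketched with a toy regressor: dropout is kept active at inference, and the spread across repeated stochastic forward passes serves as a per-joint uncertainty estimate. The network here is a stand-in with random weights, not the authors' CNN; all sizes and names are assumptions.

```python
# Minimal sketch (illustrative, not the authors' network): Monte Carlo
# dropout for pose-confidence estimation.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer regressor with fixed random weights, standing in for a
# trained CNN that maps a pressure image to 3D joint positions.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 3))  # 3 outputs: one joint's (x, y, z)

def forward(x, p_drop=0.2):
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # dropout stays ON at inference
    h = h * mask / (1.0 - p_drop)        # inverted dropout scaling
    return h @ W2

def mc_dropout(x, n_samples=100):
    """Mean and per-output std over repeated stochastic forward passes."""
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=16)  # stand-in for a pressure-image feature vector
mean, std = mc_dropout(x)
# Larger std flags joints the model is less certain about, e.g. limbs
# raised off the pressure mat, as in the ambiguity analysis above.
```

The design choice is that uncertainty comes for free from the already-trained dropout layers, with no separate confidence head; the cost is running N forward passes per image.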