Affordance segmentation splits object images into regions according to the interactions they afford, typically to drive safe robotic grasping. Most approaches to affordance segmentation are computationally demanding, which hinders their integration into wearable robots, whose compact structure typically offers limited processing power. This paper describes a design strategy for tiny deep neural networks that can perform affordance segmentation and deploy effectively on microcontroller-like processing units. This is achieved through a specialized, hardware-aware Neural Architecture Search (NAS). The method was validated by assessing the performance of several tiny networks, at different levels of complexity, on three benchmark datasets. The outcome measure was the accuracy of the generated affordance maps and of the associated spatial object descriptors (orientation, center of mass, size). The experimental results confirmed that the proposed method compares satisfactorily with state-of-the-art approaches while allowing a considerable reduction in both network complexity and inference time. The proposed networks can therefore support the development of a teleceptive sensing system to improve the semi-automatic control of wearable robots for assisting grasping.
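As a clarifying aside (not the paper's code), the spatial object descriptors mentioned above can be derived from a predicted per-class affordance mask using image moments; the sketch below assumes a binary mask and uses illustrative names.

```python
# Minimal sketch: center of mass, orientation, and size of a binary affordance mask,
# computed from first- and second-order image moments.
import numpy as np

def mask_descriptors(mask: np.ndarray):
    """Return (center_of_mass, orientation_rad, size_px) for a binary mask."""
    ys, xs = np.nonzero(mask)          # pixel coordinates of the affordance region
    size_px = xs.size                  # size: number of mask pixels
    if size_px == 0:
        return None, None, 0
    cx, cy = xs.mean(), ys.mean()      # center of mass (first-order moments)
    # Second-order central moments give the principal-axis orientation.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    orientation = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), orientation, size_px

# Example: descriptors of one hypothetical "grasp" region in a segmentation map.
seg = np.zeros((48, 64), dtype=np.uint8)
seg[10:20, 15:45] = 1
com, theta, area = mask_descriptors(seg == 1)
print(com, np.degrees(theta), area)
```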