Speaker recognition (SR) from speech can help determine the environmental context in multi-talker conversational scenarios, enabling the design of context-aware multimodal hearing assistive technology. In this paper, we argue that wireless sensors such as radar offer several benefits over conventional audio and visual sensors: they are unaffected by environmental issues such as poor lighting and background noise, and they avoid the privacy and security concerns associated with audio and video channels. Radar remains relatively unexplored for this purpose, yet it has clear advantages over other contactless approaches: it is more compact than RFID and offers better range and resolution than ultrasound and microwave sensors. To this end, we propose the use of ultra-wideband (UWB) radar coupled with a deep learning model for SR from silent speech, towards future context-aware multimodal hearing assistive technology. We collected a dataset from five individuals originating from Europe, Asia, and the United Kingdom, and obtained an average performance of approximately 82% in recognising an unknown person from a set of known people. This demonstrates that radar has good potential for privacy-preserving SR in multi-talker environments where audio-visual and other contactless techniques have limited capabilities.
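The deep learning model itself is not detailed in this section; purely as an illustrative sketch of the kind of pipeline implied (radar frames in, speaker identity out), a small CNN classifier could look as follows. The input shape, layer sizes, and the five-speaker setup are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch only: a small CNN that maps radar "frames" from silent speech
# (e.g. range-time maps) to one of N known speakers. Shapes and layer sizes are
# assumptions, not the model used in the paper.
import torch
import torch.nn as nn


class RadarSpeakerCNN(nn.Module):
    def __init__(self, num_speakers: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # pool to 1x1 so the input size can vary
            nn.Flatten(),
            nn.Linear(32, num_speakers),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, range_bins, time_steps) radar frame
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = RadarSpeakerCNN(num_speakers=5)
    dummy = torch.randn(8, 1, 64, 128)   # 8 synthetic radar frames
    logits = model(dummy)                # (8, 5) per-speaker scores
    print(logits.argmax(dim=1))          # predicted speaker indices
```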