Widespread adoption of autonomous robotic systems depends heavily on safe and reliable operation, which in many cases stems from the ability to maintain accurate and robust perception. Environmental and operational conditions, as well as improper maintenance, can produce calibration errors that inhibit sensor fusion and, consequently, degrade perception performance and overall system usability. Traditionally, sensor calibration is performed in a controlled environment with one or more known targets. Such a procedure can only be carried out between operations and must be performed manually, a tedious task if it has to be repeated on a regular basis. This creates an acute need for online targetless methods capable of yielding a set of geometric transformations based on perceived environmental features. However, the often-required redundancy in sensing modalities poses further challenges, as the features captured by each sensor, and their distinctiveness, may vary. We present a holistic approach to joint calibration of a camera–lidar–radar trio in a representative autonomous driving application. Leveraging prior knowledge and physical properties of these sensing modalities together with semantic information, we propose two targetless calibration methods within a cost minimization framework: the first via direct online optimization, and the second through self-supervised learning (SSL).
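To make the cost-minimization view of targetless calibration concrete, the sketch below shows how an extrinsic transform between two sensors can be recovered by minimizing an alignment cost over perceived features. It is an illustrative example only, not the method proposed here: it uses synthetic data, assumes point correspondences between the two sensor frames are already established, and omits the semantic cues and SSL components described above.

```python
"""
Minimal sketch (assumptions: synthetic data, known feature correspondences):
targetless extrinsic calibration posed as a least-squares cost minimization
over features observed by two sensors, e.g., lidar points and matching
camera-derived 3D features.
"""
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R


def transform(params, pts):
    """Apply a 6-DoF transform (rotation vector + translation) to Nx3 points."""
    rot, t = R.from_rotvec(params[:3]), params[3:]
    return rot.apply(pts) + t


def residuals(params, pts_a, pts_b):
    """Alignment error between transformed sensor-A features and sensor-B features."""
    return (transform(params, pts_a) - pts_b).ravel()


rng = np.random.default_rng(0)
true_params = np.array([0.02, -0.05, 0.10, 0.8, -0.3, 1.2])   # ground-truth extrinsics (synthetic)
pts_a = rng.uniform(-20, 20, size=(200, 3))                    # features in sensor A's frame
pts_b = transform(true_params, pts_a) + rng.normal(0, 0.02, (200, 3))  # noisy observations in B's frame

est = least_squares(residuals, x0=np.zeros(6), args=(pts_a, pts_b))
print("estimated extrinsics:", est.x.round(3))
```

In practice, an online targetless method replaces the known correspondences with feature associations extracted from the scene (edges, semantic masks, moving objects), which is precisely where the differing distinctiveness of camera, lidar, and radar features makes the problem challenging.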