The consequences of autonomous systems software failures can be dramatic. There is no need to darken the picture, but it seems unlikely that people, insurance companies, and certification agencies will let autonomous systems fly or drive around without requiring their makers and programmers to prove that the most critical parts of the software are robust and reliable. This is already the case in aeronautics, rail transportation, nuclear plants, medical devices, etc., where software must be certified, a process that may involve its formal validation and verification (V&V). Moreover, autonomous systems go further and embed onboard deliberation functions. This is what makes them truly autonomous, but it also opens new challenges. We propose to consider the overall problem of V&V of autonomous systems software and to examine the current situation with respect to the various types of software used. In particular, we point out that the availability of formal models differs considerably depending on the type of component considered. We distinguish these different cases and stress the areas where we think efforts should be focused so as to improve the overall robustness of autonomous systems.