Consider first data-based machine learning techniques. They rely on large sets of examples provided during the training stage rather than on explicit equations. Handling a situation that does not belong to the training-set variability, namely an out-of-distribution sample, can be very challenging for these techniques; trusting them would therefore require guaranteeing that the training set covers the operational domain of the system to be trained. Besides, data-based AI can lack robustness: adversarial attacks have been demonstrated in which a classifier is tricked into inferring a wrong class merely by changing a very small percentage of the pixels of the input image. These models also often lack explainability: it is hard to understand exactly what is learned and what happens through the layers of a neural network. In some cases, the network exploits information from the background of a picture to predict the class of an object, or biases present in the training data are learned by the AI model, such as gender bias in recruitment processes.
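The adversarial-attack phenomenon mentioned above can be sketched with the fast gradient sign method (FGSM) on a toy linear classifier. The weights, input, and epsilon below are illustrative assumptions, not taken from any real model or from the attacks cited in the text; the point is only that a uniformly tiny per-pixel perturbation, aligned with the loss gradient, can sharply change the model's output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "image" of 16 pixels and a toy linear classifier (hypothetical
# weights, for illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.normal(size=16)

# For logistic loss with true label y = 0, the gradient of the loss
# with respect to the input is (p - y) * w, where p = P(class 1 | x).
p = sigmoid(w @ x + b)
grad_x = (p - 0.0) * w

# FGSM step: move every pixel by at most epsilon in the direction
# that increases the loss. Each pixel changes only slightly.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"clean p(class 1) = {p:.3f}, adversarial p(class 1) = {p_adv:.3f}")
print(f"max per-pixel change = {np.max(np.abs(x_adv - x)):.3f}")
```

Despite the perturbation being bounded by a small epsilon per pixel, the predicted probability of the wrong class rises, which is the mechanism behind the attacks referenced in the text.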