Given the promising advances in Assisted and Automated Driving, the roads of the future are expected to be populated by vehicles driven by computers, partially or fully replacing human drivers. In this scenario, the first stage of the perception-decision-actuation pipeline will likely rely on Deep Neural Networks to understand the scene around the vehicle. Typical tasks for Deep Neural Networks are object detection and instance segmentation, both of which rely on supervised learning and annotated datasets. As one can imagine, the quality of the labelled dataset strongly affects the performance of the network, and this aspect is investigated in this paper. Annotation quality should be a primary concern in safety-critical applications such as Assisted and Automated Driving. This work identifies and classifies some of the labelling mistakes found in a popular automotive dataset. Moreover, experiments with a Deep Neural Network model were performed to measure the effect of these mistakes on network predictions. A set of criteria was established to guide the relabelling of the test set, and the relabelled annotations were compared to the original ones.