Automated driving requires reliable perception of the environment to ensure the safety of the driving task. A common perception task is 3D object detection, which aims at perceiving the location and attributes of dynamic objects. This task is typically evaluated on different benchmark datasets, each of which proposes its own metrics. However, these metrics generally lack consistency and bear no relation to safety; most notably, there are no consistent definitions of pass/fail criteria for any given detection metric. In this work, this issue is addressed by systematically considering safety and human performance across the different aspects of the object detection task. This approach yields interpretable detection metrics as well as thresholds for pass/fail criteria. Furthermore, a validation approach leveraging a prediction network is introduced and successfully applied to the derived requirements. A comparison of existing detectors shows that current perception algorithms exhibit failures for a majority of objects in the nuScenes dataset. These results therefore indicate the necessity of explicit safety considerations in the development of perception algorithms for automated driving.