In direct time-of-flight (D-TOF) light detection and ranging (LIDAR), accuracy and full-scale range (FSR) are the main performance parameters to consider. In single-photon avalanche diode (SPAD)-based systems in particular, photon-counting statistics play a fundamental role in determining LIDAR performance. The intrinsic performance ultimately depends on the system parameters and on the constraints set by the application. However, the best-achievable performance depends directly on the selected depth estimation method and is not necessarily equal to the intrinsic performance.
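As a brief sketch of the underlying statistics (the notation here is illustrative and not taken from the paper), the count $k_i$ accumulated in histogram bin $i$ of a SPAD-based D-TOF system is commonly modeled as Poisson-distributed, with a mean set by the signal and background photon rates:

\begin{equation*}
P(k_i = n) = \frac{\lambda_i^{\,n}\, e^{-\lambda_i}}{n!},
\qquad
\lambda_i = N_{\mathrm{cyc}} \int_{t_i}^{t_i + \Delta t} \left[ r_{\mathrm{sig}}(t) + r_{\mathrm{bg}} \right] \mathrm{d}t,
\end{equation*}

where $N_{\mathrm{cyc}}$ is the number of accumulated laser cycles, $\Delta t$ the bin width, $r_{\mathrm{sig}}(t)$ the echo photon rate (which scales with target reflectivity and falls off with range), and $r_{\mathrm{bg}}$ the background rate. This shot-noise model is what ties the achievable accuracy and FSR to photon-counting statistics.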
We evaluate a D-TOF LIDAR system, in the particular context of smartphone applications, in terms of parameter trade-offs and estimation efficiency. First, we develop a simulation model that combines radiometry and photon-counting statistics. Next, we perform a trade-off analysis to study the dependencies between system parameters and application constraints, as well as the non-linearities caused by the detection method. We then derive an analytical model for the Cram\'er\textendash Rao lower bound (CRLB) of the LIDAR system, which analytically accounts for shot noise. Finally, we evaluate a depth estimation method based on artificial intelligence (AI) and compare its performance to the CRLB. We demonstrate that the AI-based estimator fully compensates for the non-linearity in depth estimation, which varies with application conditions such as target reflectivity.
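For independent Poisson-distributed bins as sketched above, the CRLB on a depth estimate $\hat{d}$ takes the standard Fisher-information form (again with illustrative notation, not necessarily the paper's derivation):

\begin{equation*}
\operatorname{var}(\hat{d}) \;\ge\; \left[ \sum_{i} \frac{1}{\lambda_i(d)} \left( \frac{\partial \lambda_i(d)}{\partial d} \right)^{2} \right]^{-1}.
\end{equation*}

A minimal Monte-Carlo sketch of such a photon-counting simulation, paired with a simple non-AI baseline estimator, is given below. It assumes a Gaussian laser pulse and an idealized SPAD with no dead time or pile-up, and every parameter value is a placeholder assumption rather than a value from the paper:

\begin{verbatim}
import numpy as np

# Illustrative placeholder parameters (assumptions, not the paper's values)
C = 299_792_458.0        # speed of light [m/s]
BIN_W = 100e-12          # TDC bin width [s]
N_BINS = 1024            # histogram length; FSR = C * N_BINS * BIN_W / 2
N_CYCLES = 10_000        # laser cycles accumulated per histogram
SIGMA = 200e-12          # RMS width of the Gaussian laser pulse [s]
SIG_PH = 0.05            # mean signal photons per cycle (reflectivity-dependent)
BG_PH = 1e-4             # mean background photons per cycle, per bin

def simulate_histogram(depth_m, rng):
    """Draw one Poisson photon-count histogram for a target at depth_m.
    Idealized SPAD: no dead time, pile-up, or TDC non-linearity."""
    tof = 2.0 * depth_m / C                    # round-trip time of flight
    t = (np.arange(N_BINS) + 0.5) * BIN_W      # bin centers
    # Per-bin mean signal counts: Gaussian pulse, with each bin's integral
    # approximated by (pdf at bin center) * (bin width)
    sig = SIG_PH * BIN_W / (np.sqrt(2 * np.pi) * SIGMA) \
          * np.exp(-0.5 * ((t - tof) / SIGMA) ** 2)
    return rng.poisson(N_CYCLES * (sig + BG_PH))

def estimate_depth(hist, half_window=5):
    """Windowed-centroid estimator around the histogram peak
    (a simple non-AI baseline; biased at low SNR and near FSR edges)."""
    peak = int(np.argmax(hist))
    lo, hi = max(peak - half_window, 0), min(peak + half_window + 1, N_BINS)
    w = hist[lo:hi].astype(float)
    tof = (np.arange(lo, hi) + 0.5) @ w / w.sum() * BIN_W
    return C * tof / 2.0

rng = np.random.default_rng(0)
true_depth = 2.5                               # [m]
est = [estimate_depth(simulate_histogram(true_depth, rng)) for _ in range(200)]
print(f"bias = {np.mean(est) - true_depth:+.4f} m, "
      f"std = {np.std(est) * 1e3:.2f} mm")
\end{verbatim}

Sweeping SIG_PH in such a sketch mimics varying target reflectivity, making the reflectivity-dependent bias of the baseline estimator visible; this is the kind of non-linearity the AI-based estimator is shown to compensate for.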