This work focuses on the time-predictable execution of Deep Neural Networks (DNNs) accelerated on FPGA System-on-Chips (SoCs). The modern DPU accelerator by Xilinx is considered. An extensive profiling campaign targeting the Zynq UltraScale+ platform has been performed to study the execution behavior of the DPU when accelerating a set of state-of-the-art DNNs for Advanced Driver Assistance Systems (ADAS). Based on the profiling, an execution model is proposed and then used to derive a response-time analysis. A custom FPGA module named DICTAT is also proposed to improve the predictability of DNN acceleration and tighten the analytical bounds. A rich set of experimental results, based on both analytical bounds and measurements from the target platform, is finally presented to assess the effectiveness and performance of the proposed approach on ADAS applications.