This paper addresses the challenges of deploying deep neural networks (DNNs) for traffic object recognition using the cameras of Android smartphones. The main objective of the research is resource-awareness: efficient use of the available computational resources while maintaining high recognition accuracy. To this end, a methodology is proposed that leverages the Edge-to-Fog paradigm to distribute the inference workload across the tiers of a distributed system architecture. The evaluation was conducted on a dataset of real-world traffic scenarios containing diverse traffic objects. The main findings confirm the feasibility of running DNN-based traffic object recognition on resource-constrained Android smartphones. The proposed Edge-to-Fog methodology improved both recognition accuracy and resource utilization, and demonstrated the viability of both edge-only and combined edge-fog deployments. Moreover, the experimental results showed that the system adapts to dynamic traffic scenarios, ensuring real-time recognition performance even in challenging environments.
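To make the Edge-to-Fog split more concrete, the following Kotlin sketch shows one way a per-frame dispatcher could choose between on-device (edge) and fog-side inference based on locally observable resource signals. All names (`InferenceTier`, `ResourceAwareDispatcher`), the battery/latency heuristic, and the threshold values are illustrative assumptions, not the decision logic used in the paper.

```kotlin
import kotlin.random.Random

// Hypothetical result type: label and confidence for a detected traffic object.
data class Detection(val label: String, val confidence: Float)

// Abstraction over the two execution tiers assumed by an Edge-to-Fog split:
// the on-device (edge) model and the remote (fog) model.
interface InferenceTier {
    fun recognize(frame: ByteArray): List<Detection>
}

// Placeholder edge tier: in a real deployment this would wrap an on-device
// runtime executing a (likely quantized) DNN on the smartphone itself.
class EdgeTier : InferenceTier {
    override fun recognize(frame: ByteArray): List<Detection> =
        listOf(Detection("traffic_sign", 0.71f))
}

// Placeholder fog tier: in a real deployment this would send the frame to a
// nearby fog node over the network and parse the returned detections.
class FogTier : InferenceTier {
    override fun recognize(frame: ByteArray): List<Detection> =
        listOf(Detection("traffic_sign", 0.93f))
}

// Resource-aware dispatcher: picks a tier per frame from simple signals
// (battery level and estimated network latency). Thresholds are made up
// for illustration and would need tuning in practice.
class ResourceAwareDispatcher(
    private val edge: InferenceTier,
    private val fog: InferenceTier,
    private val latencyBudgetMs: Long = 100,
) {
    fun recognize(frame: ByteArray, batteryPct: Int, netLatencyMs: Long): List<Detection> {
        // Offload when the device is low on battery and the network is fast
        // enough to keep recognition within the real-time latency budget.
        val offload = batteryPct < 30 && netLatencyMs < latencyBudgetMs
        return if (offload) fog.recognize(frame) else edge.recognize(frame)
    }
}

fun main() {
    val dispatcher = ResourceAwareDispatcher(EdgeTier(), FogTier())
    val frame = ByteArray(640 * 480) // stand-in for a camera frame
    val detections = dispatcher.recognize(
        frame,
        batteryPct = Random.nextInt(0, 100),
        netLatencyMs = Random.nextLong(20, 300),
    )
    println(detections)
}
```

In an actual deployment the dispatch decision would be driven by measured device load, network conditions, and the accuracy/latency requirements of the recognition task; the sketch only illustrates where such a decision sits between the edge and fog tiers.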