Object classification is crucial for autonomous vehicle navigation, enabling robust perception of the surrounding environment. This paper proposes a method to improve object classification accuracy for autonomous vehicles by fusing depth estimates from monocular cameras with conventional color-image features. We show that estimating depth with a deep neural network and integrating it with RGB features consistently improves classification performance. Our approach outperforms the RGB-only baseline, raising classification accuracy on the KITTI dataset from 93.5% to 94.46%. These results highlight the potential of low-cost monocular cameras for advanced 3D perception, which is essential for safer and more reliable autonomous vehicles. Depth-aware RGBD object classification not only improves perception but also offers an alternative to expensive lidar-based systems.
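
The fusion described above can be sketched as early, channel-level fusion: a monocular depth network predicts a per-pixel depth map, which is stacked with the RGB channels to form a four-channel RGBD input for the classifier. This is only a minimal illustration; the abstract does not specify the depth network or fusion strategy, so the `estimate_depth` placeholder and the concatenation scheme below are assumptions.

```python
import numpy as np

def estimate_depth(rgb):
    # Placeholder for a learned monocular depth estimator; a real system
    # would run a deep network here. We fake a normalized depth map from
    # luminance purely for illustration.
    gray = rgb.mean(axis=-1, keepdims=True)
    return gray / max(float(gray.max()), 1e-8)

def fuse_rgbd(rgb):
    """Stack the estimated depth map as a fourth channel alongside RGB."""
    depth = estimate_depth(rgb)          # shape (H, W, 1)
    return np.concatenate([rgb, depth], axis=-1)  # shape (H, W, 4)

# Example: a 64x64 RGB image becomes a 64x64 RGBD tensor for the classifier.
rgb = np.random.rand(64, 64, 3).astype(np.float32)
rgbd = fuse_rgbd(rgb)
print(rgbd.shape)  # (64, 64, 4)
```

The resulting four-channel tensor can then be fed to any standard image classifier whose first convolutional layer accepts four input channels.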