As machine learning and deep learning spread across diverse aspects of society, concerns about data privacy are growing stronger, particularly in scenarios where sensitive information can be exposed through privacy attacks. This paper introduces a novel framework, DP Patch, that addresses these concerns for image data by treating only the sensitive objects within an image as private, rather than the entire image. DP Patch follows a multi-step pipeline consisting of differentially private image denoising and ROI-based sensitive-object localization, followed by the insertion of DP noise patches that obscure the sensitive content. This process yields privacy-preserving images with higher utility than fully differentially private images. Furthermore, a custom model is presented that harnesses both privacy-preserving and differentially private images to enrich feature representation and compensate for potential information loss, explicitly excluding the noisy patch from the training process. Experimental evaluations assess the quality of the generated privacy-preserving images and compare the performance of the custom model against state-of-the-art counterparts. The proposed method is also evaluated under model inversion attacks, providing practical insight into its effectiveness.
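As a rough illustration of the patching step summarized above, the minimal sketch below adds Gaussian-mechanism noise to a single sensitive region of an image. It is not the paper's implementation: the function name `add_dp_patch`, the bounding-box format, the privacy parameters, and the choice of the classical Gaussian mechanism are all illustrative assumptions; the paper's actual denoising and localization components are not shown.

```python
import numpy as np

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    """Noise scale for the classical Gaussian mechanism (valid for epsilon < 1)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def add_dp_patch(image: np.ndarray, roi: tuple, epsilon: float = 0.5,
                 delta: float = 1e-5) -> np.ndarray:
    """Obscure a sensitive region with a differentially private noise patch.

    image: H x W x C float array with pixel values in [0, 1].
    roi:   (top, left, height, width) bounding box of the sensitive object,
           e.g. produced by an upstream object-localization model (assumed).
    """
    top, left, h, w = roi
    out = image.copy()
    # Per-pixel sensitivity is 1 because intensities are bounded in [0, 1].
    sigma = gaussian_sigma(epsilon, delta, sensitivity=1.0)
    patch = out[top:top + h, left:left + w]
    noisy = patch + np.random.normal(0.0, sigma, size=patch.shape)
    # Clipping is post-processing, so it does not weaken the DP guarantee.
    out[top:top + h, left:left + w] = np.clip(noisy, 0.0, 1.0)
    return out

# Usage: obscure a 32x32 region of a synthetic 128x128 RGB image.
img = np.random.rand(128, 128, 3)
private_img = add_dp_patch(img, roi=(48, 48, 32, 32), epsilon=0.5)
```

Because noise is confined to the ROI, pixels outside the patch retain full utility, which is the intuition behind the utility gain over applying DP noise to the whole image.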