With the increasing popularity of machine learning in image processing, privacy has emerged as a significant concern in deploying and using ML services. However, current privacy protection approaches often require computationally expensive training from scratch or extensive fine-tuning of models, posing substantial barriers to the development of privacy-conscious models, particularly for smaller organizations seeking to comply with data privacy laws. In this thesis, we address these privacy challenges in computer vision by investigating the effectiveness of two recent fine-tuning methods, Model Reprogramming and Low-Rank Adaptation. We adapt these techniques to provide attribute privacy protection for pre-trained models while minimizing computational overhead and training time. We integrate both methods into an adversarial min-max framework that conceals sensitive information in feature outputs without extensive modification of the pre-trained model, instead training only a small set of new parameters. We demonstrate the effectiveness of our methods through experiments on the CelebA dataset, achieving state-of-the-art performance while substantially reducing computational complexity and cost. Our research contributes to the field of computer vision and privacy.
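The adversarial min-max framework mentioned above can be sketched as the following objective; the notation here is illustrative and not taken from the text: \(f_{\theta,\phi}\) denotes the frozen pre-trained encoder \(\theta\) augmented with lightweight new parameters \(\phi\) (reprogramming or low-rank adapters), \(h\) a utility head for the target label \(y\), \(g_{\psi}\) an adversary trying to recover the sensitive attribute \(s\), and \(\lambda\) a privacy-utility trade-off weight.

```latex
\min_{\phi}\;\max_{\psi}\;
\mathcal{L}_{\mathrm{util}}\bigl(h(f_{\theta,\phi}(x)),\, y\bigr)
\;-\;
\lambda\,\mathcal{L}_{\mathrm{adv}}\bigl(g_{\psi}(f_{\theta,\phi}(x)),\, s\bigr)
```

Under this reading, the inner maximization trains the adversary to predict \(s\) from the features, while the outer minimization updates only \(\phi\) to preserve task performance and degrade the adversary, leaving the pre-trained weights \(\theta\) untouched.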