Research and innovation in wearable sensor technology have grown exponentially over the past decade, and privacy concerns have grown in tandem. The Automatic Ingestion Monitor v2 (AIM-2) is an egocentric wearable camera and sensor that aids monitoring of individual diet and eating behavior by capturing still images throughout the day and using sensor data to detect eating. The images may be used to recognize foods being eaten, the eating environment, and other behaviors and daily activities. At the same time, captured images may contain privacy-sensitive content such as (1) people in social eating situations and/or bystanders (i.e., bystander privacy) and (2) sensitive documents that may appear on a computer screen in the view of AIM-2 (i.e., context privacy). Managing privacy concerns by discarding images that potentially contain private content is not practical, as many activities are performed socially or with the use of computing devices. Therefore, we propose automatic, image-redaction-based privacy protection through selective content removal based on semantic segmentation using a deep neural network, prioritizing removal of humans and display screens. A free-living dataset containing 1 day of data from each of 15 participants was used to develop and validate the proposed method. We trained on 331,614 augmented images from 9 participants together with publicly available data. The proposed method was validated on data from the remaining 6 participants and achieved bystander privacy removal with a precision of 0.87 and recall of 0.94, and context privacy removal with a precision of 0.97 and recall of 0.98. The results show that selective content removal using a deep neural network is a more desirable approach than discarding whole images for addressing privacy concerns with an egocentric wearable camera.
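The redaction step described above can be sketched as a masking operation: given a per-pixel label map produced by a semantic segmentation network, pixels predicted as privacy-sensitive classes (humans, display screens) are blacked out. The sketch below is a minimal illustration under assumed inputs; the class IDs and the `redact` helper are hypothetical and do not reflect the actual AIM-2 pipeline or its segmentation model.

```python
import numpy as np

# Illustrative class IDs; a real segmentation model's label set will differ.
PERSON = 1   # bystander privacy
SCREEN = 2   # context privacy
PRIVATE_CLASSES = (PERSON, SCREEN)

def redact(image: np.ndarray, labels: np.ndarray,
           private_classes=PRIVATE_CLASSES) -> np.ndarray:
    """Black out every pixel whose predicted segmentation label is private.

    image:  H x W x 3 uint8 array (the egocentric camera frame)
    labels: H x W integer array of per-pixel class predictions
    Returns a redacted copy; the input image is left unchanged.
    """
    out = image.copy()
    mask = np.isin(labels, list(private_classes))
    out[mask] = 0  # replace private pixels with black
    return out
```

This keeps the non-sensitive portions of each frame usable for diet and activity recognition while removing only the regions flagged by the segmentation model.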