Like sighted people, visually impaired people want to share photographs on social networking services, but find it difficult to identify and select photos from their albums. We aimed to address this problem by incorporating state-of-the-art computer-generated descriptions into Facebook's photo-sharing feature. We interviewed 12 visually impaired participants to understand their photo-sharing experiences and designed a photo description feature for the Facebook mobile application. We evaluated this feature with six participants in a seven-day diary study. We found that participants used the descriptions to recall and organize their photos, but they hesitated to upload photos without a sighted person's input. In addition to basic information about photo content, participants wanted to know more details about salient objects and people, and whether the photos reflected their personal aesthetic. We discuss these findings through the lens of self-disclosure and self-presentation theories and propose new computer vision research directions that will better support visual content sharing by visually impaired people.

CCS Concepts: • Information interfaces and presentations → Multimedia and information systems; • Social and professional topics → People with disabilities

Fig. 1. Computer-generated descriptions in Facebook. The text in the white boxes shows the descriptions that are read to a blind user by TalkBack. The descriptions are normally invisible; we show them visually to demonstrate the design.

While they want to share photos with others, visually impaired people find it difficult to understand the contents of photos and select good photos to post from their albums [30, 47].
If a visually impaired person takes a photo and does not upload it immediately, it is hard for her to navigate through her album and find that photo independently, especially when many photos have accumulated over time. Moreover, it is difficult to judge the quality of a photo, for example, to determine whether the photo is blurry, whether a person in it has her eyes closed, or whether the photo is aesthetically pleasing [30].

Researchers and designers have tackled this problem, proposing techniques to help people with visual impairments access and understand the contents of a photo. Most of these efforts use human-powered services (e.g., crowd workers, friends) to provide photo descriptions or answer photo-based questions [13, 15, 54, 55]. However, such systems are hard to scale and sustain due to the limited number of volunteers, the monetary cost of crowd workers, and the possible social costs of asking friends [18, 22]. Furthermore, because local photos can contain private information without the user knowing it, human-powered services expose visually impaired users to high privacy risks when personal photos are sent to human assistants [16]. Therefore, other approaches are needed to caption the many personal photos in a user's album in a photo-sharing use case.

Recent research by Wu et al. presented automatic al...