LGBTQ+ communities were among the first to appropriate the Internet to experiment with their identities and socialize outside of mainstream society. Recently, online platforms have implemented algorithmic systems that curate, exploit, and predict user practices and identities. Yet the social implications that platform algorithms raise for LGBTQ+ communities remain largely unexplored. Drawing from critical platform studies, science and technology studies, and gender and sexuality studies, this paper maps the main issues that platform algorithms raise for LGBTQ+ users and analyzes their implications for social justice and equity. To do so, it identifies and discusses public controversies through a review and analysis of journalistic articles. Our analysis points to five important algorithmic issues that affect the lives of LGBTQ+ users in ways that require additional scrutiny from researchers, policymakers, and tech developers alike: the ability of sorting algorithms to identify, categorize, and predict users' sexual orientation and/or gender identity; the role that recommendation algorithms play in mediating LGBTQ+ identities, kinship, and cultures; the development of automated anti-LGBTQ+ speech detection and filtering software and the collateral harm it causes LGBTQ+ users; the power struggles over the nature and types of visibility afforded to LGBTQ+ issues online; and the overall enactment of cisheteronormative biases through platform affordances.