Racial discrimination in housing has long fueled disparities in homeownership and wealth in the United States. Today, automated algorithms play a dominant role in rental and lending decisions. Advocates of these technologies argue that mortgage lending algorithms reduce discrimination. However, “errors in background check reports persist and remain pervasive,” and algorithms risk inheriting prejudices from society and reflecting pre-existing patterns of inequality. Moreover, algorithmic discrimination is often difficult to identify and harder still to explain or prosecute in court. While the Federal Trade Commission (FTC) is responsible for prosecuting this type of discrimination under the Fair Credit Reporting Act (FCRA), its enforcement regime “has inadequately regulated industry at the federal and state level and failed to provide consumers access to justice at an individual level,” as evidenced by its mere eighty-seven enforcement actions in the past forty years. By comparison, other parties brought 4,531 lawsuits under the FCRA in 2018 alone. The FTC must therefore update its policies so that it can identify, prosecute, and facilitate third-party lawsuits against a primary driver of housing discrimination in the 21st century: discrimination within algorithmic decision-making.

We recommend that the FTC issue a rule requiring companies to publish a data plan with all consumer reporting products. The FTC already recommends that companies internally assess the components of the proposed data plan to ensure that they do not violate the FCRA. Requiring that these plans be made public therefore places no undue burden on companies, and it empowers consumers to advocate for themselves and to report unfair practices to the FTC. Together, these measures will reduce the FTC’s investigation and enforcement costs and decrease the discriminatory impact of automated decision systems on marginalized communities.