Recommendation algorithms on social media platforms are often criticized for steering users into "rabbit holes" of increasingly ideologically biased content. Despite these concerns, prior evidence on such algorithmic radicalization is inconsistent. Moreover, prior work lacks systematic interventions that reduce the potential ideological bias in recommendation algorithms. We conduct a systematic audit of YouTube's recommendation system using one hundred thousand sock puppets to determine the presence of ideological bias (i.e., whether recommendations are aligned with users' ideology), its magnitude (i.e., whether users are recommended an increasing number of videos aligned with their ideology), and radicalization (i.e., whether recommendations become progressively more extreme). Furthermore, we design and evaluate a bottom-up intervention that minimizes ideological bias in recommendations without relying on cooperation from YouTube. We find that YouTube's recommendations do direct users, especially right-leaning users, to ideologically biased and increasingly radical content, both on homepages and in up-next recommendations. Our intervention effectively mitigates the observed bias, yielding more recommendations of ideologically neutral, diverse, and dissimilar content, although debiasing remains especially challenging for right-leaning users. Our systematic assessment shows that while YouTube's recommendations lead to ideological bias, this bias can be mitigated through our intervention.

* Additional information, including the accompanying source code and data, is available at: https://github.com/ucdavis-noyce/YouTube-Auditing-Mitigating-Bias

1 Naturally, most citizens do not inhabit insular communities online [7,8]. The small set that does, however, is socially consequential. Those users tend to be more partisan, more politically active, and have a disproportionate influence on the democratic process [9,10].
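To make the sock-puppet audit design described above concrete, the following is a minimal, illustrative sketch of how a single puppet might be "trained" on a seed diet of videos and its homepage recommendations logged. This is not the authors' released pipeline (see the repository linked above); the seed list, dwell time, CSS selector, and output file are assumptions for illustration, and any ideology scoring of the collected recommendations would happen downstream.

```python
# Illustrative sketch of a single sock-puppet audit run (not the paper's pipeline).
# Assumptions: Selenium + Chrome are installed, personalization accrues to the
# persistent browser profile via watch history/cookies, and the CSS selector below
# (which YouTube may change at any time) roughly matches homepage video links.
import csv
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Hypothetical seed videos defining the puppet's ideological "training" diet.
SEED_VIDEOS = [
    "https://www.youtube.com/watch?v=VIDEO_ID_1",
    "https://www.youtube.com/watch?v=VIDEO_ID_2",
]
WATCH_SECONDS = 30  # how long to dwell on each seed video
HOMEPAGE = "https://www.youtube.com/"


def build_driver(profile_dir: str) -> webdriver.Chrome:
    """Launch Chrome with a persistent profile so history carries across visits."""
    options = Options()
    options.add_argument("--headless=new")
    options.add_argument(f"--user-data-dir={profile_dir}")
    return webdriver.Chrome(options=options)


def train_puppet(driver: webdriver.Chrome) -> None:
    """Visit each seed video and dwell on it to build up a watch history."""
    for url in SEED_VIDEOS:
        driver.get(url)
        time.sleep(WATCH_SECONDS)


def collect_homepage_recommendations(driver: webdriver.Chrome) -> list[str]:
    """Scrape video links from the homepage; the selector is an assumption."""
    driver.get(HOMEPAGE)
    time.sleep(5)  # allow recommendations to render
    links = driver.find_elements(By.CSS_SELECTOR, "a#thumbnail")
    return [a.get_attribute("href") for a in links if a.get_attribute("href")]


if __name__ == "__main__":
    driver = build_driver("/tmp/sock_puppet_profile")
    try:
        train_puppet(driver)
        recs = collect_homepage_recommendations(driver)
        # Persist raw recommendation URLs; ideology scoring happens downstream.
        with open("homepage_recs.csv", "w", newline="") as f:
            csv.writer(f).writerows([[url] for url in recs])
    finally:
        driver.quit()
```

In an audit at the scale reported in the abstract, many such puppets would be run in parallel with different seed diets (left-leaning, right-leaning, neutral), and the logged recommendations compared across conditions to estimate bias, its magnitude, and radicalization over successive visits.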