YouTube has revolutionized the way people discover and consume video content. Although YouTube facilitates easy access to hundreds of well-produced educational, entertaining, and trustworthy news videos, abhorrent, misinformative, and mistargeted content is also common. The platform is plagued by various types of inappropriate content, including: 1) disturbing videos targeting young children; 2) hateful and misogynistic content; and 3) pseudoscientific and conspiratorial content. While YouTube's recommendation algorithm plays a vital role in increasing user engagement and YouTube's monetization, its role in unwittingly promoting problematic content is not entirely understood.

In this thesis, we shed light on the prevalence of abhorrent, misinformative, and mistargeted content on YouTube and on the role of the recommendation algorithm in the discovery and dissemination of such content. Following a data-driven quantitative approach, we analyze thousands of videos posted on YouTube. Specifically, we devise various methodologies to detect problematic content, and we use them to simulate the behavior of users casually browsing YouTube to shed light on: 1) the risks of YouTube media consumption by young children; 2) the role of YouTube's recommendation algorithm in the dissemination of hateful and misogynistic content, focusing on the Involuntary Celibates (Incels) community; and 3) user exposure to pseudoscientific misinformation on various parts of the platform, and how this exposure changes based on the user's watch history.

First and foremost, I am grateful to my advisor, Michael Sirivianos, for his continuous support and valuable feedback throughout my PhD journey. His support and guidance were instrumental in turning me into an independent and competent researcher. He was there to guide me when the research seemed fuzzy and disheartening. More importantly, he has shown me how to analyze an important and complex problem and divide it into small, manageable problems that can be addressed in a more practical way.