YouTube is one of the most popular social media and online video sharing platforms, and users turn to it for entertainment (e.g., music videos), education, political information, advertising, and more. In recent years, hundreds of new channels have been creating and sharing videos targeting children, with themes related to animation, superhero movies, comics, etc. Unfortunately, many of these videos have been found to be inappropriate for their target audience, due to disturbing, violent, or sexual scenes. In this paper, we study YouTube channels that were found in the past to post suitable or disturbing videos targeting kids. We identify a clear discrepancy between the content and channels that YouTube flags as inappropriate and the disturbing content targeting kids that is still available on the platform. In particular, we find that almost 60% of the videos manually annotated and classified as disturbing by an earlier study in 2019 (a collection bootstrapped with "Elsa" and other keywords related to children's videos) were still available on YouTube in mid-2021. Moreover, 44% of the channels that uploaded such disturbing videos have yet to be suspended and their videos removed. For the first time in the literature, we also study the "madeForKids" flag, a feature YouTube introduced at the end of 2019, and compare its application across the channels that shared disturbing videos, as flagged by the previous study. These channels turn out to be less likely to be set as "madeForKids" than those sharing suitable content. In addition, channels posting disturbing videos use their channel features, such as keywords, description, topics, and posts, in ways that appeal to kids (e.g., game-related keywords). Finally, we use a collection of such channel and content features to train machine learning classifiers that can detect, at channel creation time, whether a channel will be related to disturbing content uploads. These classifiers can help YouTube content moderators reduce such incidents by pointing to potentially suspicious accounts without analyzing actual videos, using only channel characteristics.
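To make the last step concrete, the following is a minimal sketch of how a classifier could be trained on channel-creation-time metadata alone, without inspecting any uploaded videos. The specific feature set (description length, keyword count, declared topics, madeForKids flag), the synthetic data, and the choice of scikit-learn's RandomForestClassifier are illustrative assumptions, not the exact features or model used in this work.

```python
# Illustrative sketch (not the paper's exact pipeline): train a classifier
# using only features available when a channel is created.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-channel features at creation time:
# [description_length, num_keywords, num_declared_topics, madeForKids_flag]
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)  # synthetic labels: 1 = later uploads disturbing content

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Report precision/recall on the held-out channels.
print(classification_report(y_test, clf.predict(X_test)))
```

In practice, a moderation team could rank newly created channels by the classifier's predicted probability and prioritize the highest-scoring accounts for manual review.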