AI ethics is a nascent field whose importance and recent growth have drawn the attention of numerous organizations and corporations, resulting in a proliferation of published guidelines, reports, statements, and initiatives on AI ethics. However, no systematic analysis has yet provided a comprehensive overview of these AI ethical frameworks. In this article, we therefore trace and investigate a dataset of 100 documents on AI ethics and principles released between 2015 and 2022 by governmental entities, academic institutions, and private corporations, with the aim of offering useful insights into the AI ethical landscape. Using text analysis and quantitative data analysis, we highlight five key elements of the dataset: the type of documents created on AI ethics (how), the period of issuance (when), the type of issuer (who), the geographic distribution (where), and the sectors covered (what). The findings reveal a gap in the creation of AI ethics documents between the Global North and the Global South, with 72.4% of the documents released by the former. Furthermore, the analysis shows that private firms are the dominant institutions responsible for developing these frameworks (31.8%), followed by academia (19.1%) and governments (16.4%). Finally, we emphasize the need for more sector-specific ethical frameworks, which are noticeably lacking.