In recent years, online misinformation has become increasingly prevalent, contributing to significant problems such as political polarisation and distrust of genuine information. Misinformation on social media platforms affects many aspects of society, including health and politics, and takes many forms, such as text and images. However, current studies mainly analyse single topics and modalities, without considering the heterogeneity of the issue. Our research examined the relationship between visual elements and engagement, as well as the relationships among sentiment, hate speech, and bot activity, across a variety of topics on the social media platform Twitter. We labelled 12,581 misinformation posts and manually organised them into a topic hierarchy. We then analysed these posts for sentiment, the prevalence of hate speech, and bot activity across the different topics. The results revealed that political misinformation tends to contain more hate speech than COVID-19 misinformation and also involves a higher number of bots. Furthermore, the findings suggest that misinformation posts in which more than 40% of the sentences carry negative sentiment exhibit high levels of hate speech, in both tweets and replies. This study provides detailed information on the topics and volume of misinformation on social media platforms, and the findings can be used to develop more advanced detection systems and to support further analysis. Our findings can help policy makers understand what kinds of online misinformation have been spreading on Twitter and how to plan campaigns that make users more aware of how to spot its various features in an online user-to-user Twitter environment.