The objective of this study is to conduct a comparative analysis of a rule-based ontological classification tool and a large language model (LLM) chatbot as qualitative content analysis tools. The focus is on assessing their strengths and limitations when applied to the study of the discourse surrounding the energy transition. To achieve this, I used the tools to analyse two different types of corpora: citizens' social media discussions and politicians' parliamentary speeches. In the analysis, I evaluated the differences in the methods' recall and precision levels. Additionally, I assessed the extent to which these methods align with scientific principles, including reliability, transparency, and research integrity. The results reveal a classic trade-off: the LLM's precision is high, but its recall is comparatively low, suggesting its strength lies in generating accurate but potentially incomplete analyses. The rule-based method, by contrast, outperforms in recall at the expense of precision, capturing more data points but with varying accuracy. I discuss the implications of these results and outline ideas for leveraging the strengths of both methods in future studies. This article provides researchers with insights into the selection and application of computational tools in the social sciences and humanities, as well as in multifaceted research topics such as the energy transition.
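The precision/recall trade-off described above can be made concrete with a short sketch. The counts below are purely hypothetical illustrations (not results from this study): a high-precision, low-recall classifier misses relevant items but rarely mislabels, while a high-recall, lower-precision one captures more items at the cost of false positives.

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from confusion counts.

    precision = tp / (tp + fp): share of flagged items that are correct
    recall    = tp / (tp + fn): share of relevant items that were found
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts for an LLM-style classifier:
# few false positives, more missed items
llm_p, llm_r = precision_recall(tp=80, fp=5, fn=40)

# Hypothetical counts for a rule-based classifier:
# captures more items, with more false positives
rule_p, rule_r = precision_recall(tp=110, fp=45, fn=10)
```

With these illustrative numbers, the LLM-style classifier scores higher on precision (≈0.94 vs ≈0.71) while the rule-based one scores higher on recall (≈0.92 vs ≈0.67), mirroring the pattern the abstract reports.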