With the rapid growth of online communication platforms, identifying and managing toxic comments has become crucial to maintaining a healthy online environment. A variety of machine learning approaches have been employed to tackle this problem, ranging from traditional models to more recent attention-based transformer networks. This paper compares the performance of attention-based transformer networks with several traditional machine learning methods for toxic comment classification. We present an in-depth analysis and evaluation of these methods on a common benchmark dataset. The experimental results demonstrate the strengths and limitations of each approach, shedding light on the suitability and efficacy of attention-based transformers in this domain.
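To make the two families of approaches under comparison concrete, the following is a minimal sketch of a traditional baseline (TF-IDF features with logistic regression) next to an attention-based transformer classifier fine-tuned with a standard cross-entropy objective. The toy comments, the binary label scheme, and the "bert-base-uncased" checkpoint are illustrative assumptions; the abstract does not specify the benchmark dataset, models, or hyperparameters actually used in the paper.

```python
# Illustrative sketch only: toy data stands in for a labeled toxic-comment
# corpus (1 = toxic, 0 = non-toxic); not the paper's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are a wonderful person",
    "thanks for the helpful answer",
    "you are an idiot and nobody likes you",
    "go away, you worthless troll",
]
train_labels = [0, 0, 1, 1]

# --- Traditional baseline: TF-IDF features + logistic regression ---
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)
print(baseline.predict(["what a stupid comment"]))

# --- Attention-based transformer: fine-tune a pretrained BERT classifier ---
# Assumes the `torch` and `transformers` packages; "bert-base-uncased" is an
# example checkpoint, not necessarily the one evaluated in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

enc = tokenizer(train_texts, padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor(train_labels)

model.train()
for _ in range(3):  # a few passes over the toy batch
    optimizer.zero_grad()
    out = model(**enc, labels=labels)
    out.loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    test = tokenizer(["what a stupid comment"], return_tensors="pt")
    pred = model(**test).logits.argmax(dim=-1)
print(pred)
```

In a comparison of this kind, both classifiers would be trained on the same labeled split and evaluated with the same metrics, so that differences reflect the modeling approach rather than the data.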