Social media has revolutionized how individuals, groups, and communities interact. The resulting immense quantity of unstructured data holds valuable information expressed in informal language. However, automatically extracting this information with Natural Language Processing requires adapting traditional methods or developing new strategies capable of handling the informal language typical of the web. BERT, a Deep Learning model proposed by Google in 2018, brought transfer learning to Natural Language Processing. In this work, we used BERTimbau, a BERT model pre-trained for Portuguese, to build models for Sentiment Analysis, Aspect Extraction, Hate Speech Detection, and Irony Detection. We experimented with both BERTimbau variants, base and large, and compared the results obtained in each task. The BERTimbau-based models outperformed classical Machine Learning approaches, reaching F-measures of 0.88 and 0.89 in the Sentiment Analysis and Hate Speech Detection tasks, respectively.
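As a minimal sketch of the fine-tuning setup this kind of work relies on, the snippet below loads a BERTimbau checkpoint with a sequence-classification head via the Hugging Face Transformers library. The checkpoint names, the binary label scheme, and the example sentences are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch only: assumes the publicly released BERTimbau checkpoints
# ("neuralmind/bert-base-portuguese-cased" or the "large" variant) and a
# binary classification task such as Sentiment Analysis or Hate Speech Detection.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "neuralmind/bert-base-portuguese-cased"  # or the large variant

tokenizer = AutoTokenizer.from_pretrained(model_name)
# The classification head is randomly initialized here; in practice it would
# be fine-tuned on labeled Portuguese data before making real predictions.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["Adorei o atendimento!", "Que serviço horrível."]
inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)
print(predictions.tolist())
```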