“…Some participants also performed dependency parsing, POS tagging, stemming and URL resolution, as well as other task-specific steps such as filtering out all named entities and keeping only "general" tokens, since these are usually the ones carrying the sentiment (Rotim et al., 2017). As in Track 1, NLTK was the most widely used pre-processing tool (Ghosal et al., 2017; Deborah et al., 2017; Kumar et al., 2017; Symeonidis et al., 2017; Jiang et al., 2017), whereas Stanford CoreNLP was used for NER, sentence breaking and parsing (Nasim, 2017; Rotim et al., 2017; Schouten et al., 2017; Chen et al., 2017; Jiang et al., 2017). Figure 7 shows the techniques used by each of the 17 participating systems.…”
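To make the kind of pipeline described above concrete, the sketch below is a minimal, hypothetical NLTK example: URL stripping via a regex (a simple stand-in for URL resolution), tokenisation, POS tagging, dropping proper-noun tags as a rough approximation of named-entity filtering, and Porter stemming. The function name `preprocess` and all specific choices are illustrative assumptions, not any participant's actual code.

```python
import re
import nltk
from nltk.stem import PorterStemmer

# Requires the NLTK data packages:
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

URL_PATTERN = re.compile(r'https?://\S+|www\.\S+')


def preprocess(tweet):
    """Illustrative pre-processing: URL removal, tokenisation, POS tagging,
    filtering of proper nouns, and stemming (a sketch, not a participant's
    actual pipeline)."""
    # Strip URLs before tokenising
    text = URL_PATTERN.sub('', tweet)

    # Tokenise and POS-tag with NLTK
    tokens = nltk.word_tokenize(text)
    tagged = nltk.pos_tag(tokens)

    # Keep "general" tokens by dropping proper nouns (NNP/NNPS),
    # a crude stand-in for full named-entity filtering
    general = [(tok, tag) for tok, tag in tagged if tag not in ('NNP', 'NNPS')]

    # Stem the remaining tokens
    stemmer = PorterStemmer()
    return [(stemmer.stem(tok), tag) for tok, tag in general]


if __name__ == '__main__':
    print(preprocess("Loving the new phone, details at https://example.com!"))
```

A real system would typically swap the proper-noun filter for a dedicated NER step (e.g. Stanford CoreNLP, as several participants did) and tune the remaining steps to the task.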