The system described in this paper builds upon a survey of some of the best performing systems from previous related shared tasks (Rosenthal et al., 2017). In particular, we draw inspiration from the systems described in (John and Vechtomova, 2017), which uses gradient boosted trees for regression; (Goel et al., 2017), which employs an ensemble of various neural models; and (Baziotis et al., 2017), which features Long Short-Term Memory (LSTM) networks with an attention mechanism. Our work builds on these approaches by further developing a variety of neural architectures, using transfer learning via pretrained sentence encoders, testing methods of ensembling neural and non-neural models, and gauging the performance and stability of a regressor across languages.
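To make the ensembling idea concrete, the sketch below (illustrative only, not the paper's actual system) averages the predictions of a neural regressor and a gradient boosted tree regressor over synthetic features standing in for sentence embeddings; the data, model choices, and hyperparameters are all assumptions for demonstration.

```python
# Illustrative sketch: one simple way to ensemble a neural and a
# non-neural regressor is to average their predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # stand-in for sentence-embedding features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

# Non-neural member: gradient boosted trees (as in John and Vechtomova, 2017).
gbt = GradientBoostingRegressor(random_state=0).fit(X, y)
# Neural member: a small feed-forward network (a simple stand-in for the
# LSTM-based models referenced above).
nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                  random_state=0).fit(X, y)

# Unweighted average of the two regressors' predictions.
ensemble_pred = (gbt.predict(X) + nn.predict(X)) / 2.0
print(ensemble_pred.shape)
```

More elaborate schemes (weighted averaging, stacking a meta-regressor on the members' outputs) follow the same pattern of combining heterogeneous model predictions.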