The use of convolutional neural networks (CNNs) for image quality assessment (IQA) has become the focus of many researchers, and various pre-trained models have been fine-tuned for this task. In this paper, we conduct a benchmark study of seven state-of-the-art pre-trained CNN models for IQA of omnidirectional images. To this end, we first re-train these models on an omnidirectional image database and compare their performance with that of the original pre-trained versions. Then, we compare the use of viewports versus equirectangular projection (ERP) images as inputs to the models. Finally, for the viewport-based models, we explore the impact of the number of input viewports on the models' performance. Experimental results demonstrate the performance gain of the re-trained CNNs over their pre-trained versions. Moreover, the viewport-based approach outperforms the ERP-based one regardless of the number of selected viewports.
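The core idea of the benchmark is to adapt ImageNet pre-trained CNNs to regress a quality score from either viewports or ERP images. The following is a minimal sketch of such a fine-tuning step, assuming a PyTorch/torchvision setup; the ResNet-50 backbone, learning rate, input size, and dummy data are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: fine-tuning a pre-trained CNN to regress a quality score (MOS).
# The backbone, hyperparameters, and dummy batch below are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet pre-trained backbone and replace its classifier
# with a single-output regression head for the quality score.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for extracted viewports (or ERP images resized to the
# network's input resolution) and their subjective quality labels.
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB inputs
mos = torch.rand(8, 1) * 5.0           # subjective scores, e.g. in [0, 5]

model.train()
optimizer.zero_grad()
pred = model(images)                   # predicted quality scores, shape (8, 1)
loss = criterion(pred, mos)
loss.backward()
optimizer.step()
```

In a viewport-based variant, several viewports would be extracted per omnidirectional image and their predicted scores pooled (e.g. averaged) into a single quality estimate, whereas the ERP-based variant feeds the projected image directly to the network.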