Natural language inference (NLI) is a subfield of natural language processing (NLP) that involves determining the logical relationship between two pieces of text, usually a premise and a hypothesis. The goal of an NLI system is to classify the inference relationship between the premise and the hypothesis into one of three categories: entailment, contradiction, or neutral. An understanding of this relationship is useful for several NLP tasks, such as summarization, question answering, and information retrieval. Given the potential of neural networks to handle complex natural language tasks, and the absence of prior research on their application to NLI in Arabic, we propose an investigation of different neural network models for the NLI task in the Arabic language. In particular, we carried out experiments using various types of recurrent neural networks, namely Simple RNN, LSTM, GRU, Bi-LSTM, and Bi-GRU, to find the one that performs best on the Arabic NLI task. We performed our experiments using the existing datasets for Arabic NLI, namely ArbTE, XNLI, and ArNLI.
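To make the setup concrete, the following is a minimal, hypothetical NumPy sketch of one of the recurrent units compared above (a single GRU cell) together with a sentence-pair feature scheme commonly used in NLI classifiers. The weights, dimensions, and toy premise/hypothesis embeddings are random placeholders for illustration, not the models or data used in the experiments:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h_tilde."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        shape = (hidden_dim, input_dim + hidden_dim)
        self.Wz = rng.normal(0.0, 0.1, shape)  # update-gate weights
        self.Wr = rng.normal(0.0, 0.1, shape)  # reset-gate weights
        self.Wh = rng.normal(0.0, 0.1, shape)  # candidate-state weights
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                     # how much to update
        r = sigmoid(self.Wr @ xh)                     # how much past to keep
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_tilde            # interpolated new state

def encode(cell, seq):
    """Run the GRU over a sequence of token embeddings; return final state."""
    h = np.zeros(cell.hidden_dim)
    for x in seq:
        h = cell.step(x, h)
    return h

# Toy premise and hypothesis as sequences of random "embeddings" (placeholders)
rng = np.random.default_rng(1)
premise = rng.normal(size=(5, 8))      # 5 tokens, embedding dim 8
hypothesis = rng.normal(size=(4, 8))   # 4 tokens, embedding dim 8
cell = GRUCell(input_dim=8, hidden_dim=16)
p, q = encode(cell, premise), encode(cell, hypothesis)

# A common NLI feature vector: [p; q; |p - q|; p * q], which would then feed
# a 3-way softmax over entailment / contradiction / neutral.
features = np.concatenate([p, q, np.abs(p - q), p * q])
print(features.shape)  # (64,)
```

A bidirectional variant (Bi-GRU) would run a second cell over each sequence in reverse and concatenate the two final states; the LSTM variants replace the cell with one that additionally carries a memory state.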