The present study compares the performance of three machine translation tools, Google Translate, Systran, and Microsoft Bing, in English-Arabic translation to answer the questions of (a) whether the three tools can be ordered in a hierarchy of performance, and (b) whether they can handle lexically and structurally ambiguous sentences and garden path sentences. Using a set of constructed and selected English sentences, the three tools are tested on the morphosyntactic features of number, gender, case, definiteness, and humanness; agreement between cardinal numerals and their head nouns; and lexically and structurally ambiguous sentences and garden path sentences. The results show that (a) in handling the morphosyntactic features of subject-verb agreement in Standard Arabic (SA), all three machine translation tools perform equally well, and no tool seems to perform significantly better than the other two; (b) some marked features of SA (e.g., dual number and humanness) seem to pose problems for the machine translation tools; and (c) lexically and structurally ambiguous sentences and garden path sentences seem to be the most challenging for all three tools.