About one-quarter of all car collisions in the United States are caused by distracted driving, and this proportion is expected to rise. As vehicles are equipped with increasingly elaborate technology, human-vehicle interaction via dashboard displays and controls will become more complex and more distracting. Voice-based human-vehicle interaction offers a less distracting alternative. In this study we develop a voice-based car assistant, focusing on Arabic-language speech recognition. We prepare a new 4000-word domain-specific lexicon to comprehensively support driver-vehicle interactions, and we create corresponding text and speech corpora. We then extract acoustic feature vectors and train several acoustic models for speech recognition; the language model is built with an n-gram model. The acoustic model, the language model, and the lexicon are combined to generate a decoding graph. The text corpus consists of 6110 elements (words, phrases, and sentences), and the speech corpus contains more than 60,000 recordings (almost 50 hours). On noise-free audio, a Deep Neural Network + Hidden Markov Model (DNN+HMM) achieved 94.832% accuracy, a Subspace Gaussian Mixture Model + HMM (SGMM+HMM) achieved 94.2%, and the best Gaussian Mixture Model + HMM (GMM+HMM) achieved 94.13%. On noisy audio, the DNN+HMM achieved 93.316% accuracy, the SGMM+HMM 92.62%, and the best GMM+HMM 91.82%. A usability study of the system was conducted with 10 participants; almost all of its measures received ratings above 4.0 out of 5.0, indicating that participants considered the proposed system important and useful for reducing driver distraction.
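The language-modeling step mentioned above can be illustrated with a minimal bigram (n = 2) sketch. This is not the paper's pipeline: the toy corpus, the use of Laplace (add-one) smoothing, and the function names are all assumptions made for illustration only.

```python
from collections import Counter

def train_bigram(sentences):
    # Count unigrams and bigrams over sentences padded with
    # start (<s>) and end (</s>) markers.
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word, vocab_size):
    # Laplace-smoothed conditional probability P(word | prev):
    # unseen bigrams get a small nonzero probability.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

# Hypothetical in-car commands standing in for the text corpus.
corpus = ["open the window", "close the window"]
u, b = train_bigram(corpus)
p = bigram_prob(u, b, "the", "window", vocab_size=len(u))
```

In a full system such conditional probabilities would be estimated over the entire text corpus and then composed with the lexicon and acoustic model into the decoding graph.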