In an attempt to overcome problems associated with articulatory limitations and with generative models, this work considers the use of phonological features in discriminative models for disabled speech. Specifically, we train feed-forward and recurrent neural networks, as well as radial-basis and sequence-kernel support vector machines, on abstractions of the vocal tract, and apply these models to phone recognition in dysarthric speech. The results show relative error reductions of between 1.5% and 10.9% with this approach over standard hidden Markov modeling, and accuracy increases with speaker intelligibility across all classifiers. This work may be applied within components of assistive software for speakers with dysarthria.
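
The discriminative setup described above can be illustrated with a minimal sketch: one of the named classifiers, a radial-basis support vector machine, trained to map phonological (vocal tract) feature vectors to phone labels. This is not the authors' implementation; the feature dimensions, phone inventory, and synthetic data below are illustrative stand-ins, using scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical phonological feature dimensions (e.g., voicing, nasality,
# place and manner of articulation), one vector per speech frame.
n_features = 8
phones = ["aa", "iy", "s", "m"]  # illustrative phone inventory

# Synthetic frames: each phone class clustered around its own prototype,
# standing in for real vocal-tract feature extractions.
X_parts, y = [], []
for label, phone in enumerate(phones):
    prototype = rng.normal(size=n_features)
    X_parts.append(prototype + 0.1 * rng.normal(size=(50, n_features)))
    y.extend([label] * 50)
X = np.vstack(X_parts)
y = np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Radial-basis SVM over the phonological feature vectors.
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In the actual systems, such frame-level classifiers would feed a phone recognizer rather than being evaluated in isolation, and the recurrent networks and sequence-kernel SVMs would additionally model temporal structure across frames.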