Dysarthria due to Amyotrophic Lateral Sclerosis (ALS) and Parkinson's disease (PD) affects both articulation and prosody in an individual's speech. Complex deep neural networks exploit these cues to detect ALS and PD, typically using recordings made under laboratory conditions. This study examines the robustness of these cues to background noise and model complexity, which has not been investigated before. We perform classification experiments with pitch and Mel-frequency cepstral coefficients (MFCC) using models of three different complexities and additive white Gaussian noise at four signal-to-noise-ratio (SNR) levels. The findings are as follows: 1) in the clean condition, pitch performs similarly to MFCC across most model complexities considered, suggesting that the one-dimensional pitch pattern provides discriminative cues for classification to an extent equal to that of multi-dimensional MFCC; 2) a similar trend is observed in the noisy cases when classifiers are trained and tested under matched noise and SNR conditions; 3) when classifiers trained on clean data are applied to noisy data, pitch-based average classification accuracies are found to be 20.09% and 24.73% higher than those using MFCC for ALS vs. healthy and PD vs. healthy, respectively, suggesting that the pitch-based classifier is robust to noise across model complexities.
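The noise-corruption step described above (additive white Gaussian noise at a fixed SNR) can be sketched as follows. This is a minimal illustrative helper, not the authors' actual code; the function name `add_awgn` is hypothetical.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Corrupt a clean speech signal with white Gaussian noise at a target SNR (dB).

    Illustrative sketch of the experimental setup described in the abstract.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(signal ** 2)
    # Choose noise power so that 10*log10(signal_power / noise_power) == snr_db
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: corrupt a synthetic tone at 10 dB SNR and verify the achieved SNR
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 220 * t)
noisy = add_awgn(clean, snr_db=10.0)
achieved_snr = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```

In the matched-condition experiments of the study, such corrupted signals would be used for both training and testing; in the mismatched case, training uses the clean signals only.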