Mobile health wearables are often embedded with small processors for signal acquisition and analysis. These embedded wearable systems are, however, limited by low available memory and computational power. Advances in machine learning, especially deep neural networks (DNNs), have been adopted to build efficient and intelligent applications within such constrained computational environments. In this study, evolutionarily optimized DNNs were analyzed to classify three common airway-related symptoms, namely coughs, throat clears, and dry swallows. As opposed to typical microphone-acoustic signals, mechano-acoustic signals, which contain no identifiable speech information and thus offer better privacy protection, were acquired from laboratory-generated and publicly available datasets. The optimized DNNs had a low memory footprint of less than 150 kB and predicted the airway symptoms of interest with 83.7% accuracy on unseen data. Using explainable AI techniques, namely occlusion experiments and class activation maps, mel-frequency bands up to 8,000 Hz were found to be the most important features for the classification. We further found that the DNN decisions consistently relied on these specific features, fostering trust in and transparency of the proposed DNNs. The proposed efficient and explainable DNNs are expected to support edge computing on mechano-acoustic sensing wearables for remote, long-term monitoring of airway symptoms.
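As a rough illustration of the occlusion-style feature-importance analysis summarized above, the sketch below extracts log mel-spectrogram features (bands up to 8,000 Hz), defines a compact CNN classifier, and zeroes out groups of mel bands to measure the drop in the predicted class probability. The sampling rate, number of mel bands, layer sizes, and the use of librosa/TensorFlow are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): mel-spectrogram features,
# a small CNN classifier, and a band-wise occlusion experiment.
# Sampling rate, band count, and layer sizes are assumptions.
import numpy as np
import librosa
import tensorflow as tf

SR = 16000            # assumed sampling rate (mel bands up to 8 kHz need sr >= 16 kHz)
N_MELS = 64           # assumed number of mel bands
CLASSES = ["cough", "throat_clear", "dry_swallow"]

def mel_features(signal, sr=SR):
    """Log mel-spectrogram with bands up to 8 kHz (fmax=8000)."""
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=N_MELS, fmax=8000)
    return librosa.power_to_db(mel, ref=np.max)[..., np.newaxis]  # (mels, frames, 1)

def small_cnn(input_shape, n_classes=len(CLASSES)):
    """Compact CNN kept small to fit a tight on-device memory budget (illustrative)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def band_occlusion(model, x, band_width=4):
    """Zero out groups of mel bands and record the drop in the predicted
    class probability -- a simple occlusion-style importance estimate."""
    baseline = model.predict(x[np.newaxis], verbose=0)[0]
    target = int(np.argmax(baseline))
    importances = []
    for start in range(0, x.shape[0], band_width):
        occluded = x.copy()
        occluded[start:start + band_width] = 0.0
        p = model.predict(occluded[np.newaxis], verbose=0)[0][target]
        importances.append(baseline[target] - p)
    return np.array(importances)  # larger drop => more important band group
```

In this sketch, a large probability drop when a band group is occluded marks that frequency range as influential for the decision, which is the general idea behind attributing importance to specific mel-frequency bands.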