The internet of things (IoT) faces tremendous security challenges. Machine learning models can be used to tackle the growing number of cyber-attack variations targeting IoT systems, but the increasing threat posed by adversarial attacks reinforces the need for reliable defense strategies. This work describes the types of constraints required for a realistic adversarial cyber-attack example and proposes a methodology for a trustworthy adversarial robustness analysis with a realistic adversarial evasion attack vector. The proposed methodology was used to evaluate three supervised algorithms, random forest (RF), extreme gradient boosting (XGB), and light gradient boosting machine (LGBM), as well as one unsupervised algorithm, isolation forest (IFOR). Constrained adversarial examples were generated with the adaptative perturbation pattern method (A2PM), and evasion attacks were performed against models created with regular and adversarial training. Even though RF was the least affected in binary classification, XGB consistently achieved the highest accuracy in multi-class classification. The results evidence the inherent susceptibility of tree-based algorithms and ensembles to adversarial evasion attacks and demonstrate the benefits of adversarial training and a security-by-design approach for more robust IoT network intrusion detection and cyber-attack classification.
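
To make the described evaluation pipeline concrete, the sketch below trains the three supervised ensembles named above, attacks them with constrained evasion examples, and then retrains each model with adversarially augmented data. It is a minimal illustration, not the paper's method: the synthetic data stands in for an IoT network-flow dataset, and the `perturb` helper is a hypothetical, crude stand-in for A2PM's class-specific, domain-constrained perturbation patterns.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# Synthetic stand-in for an IoT network-flow dataset (binary labels).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lo, hi = X_tr.min(axis=0), X_tr.max(axis=0)  # feasible per-feature ranges

def perturb(X, eps=0.1, rng=np.random.default_rng(0)):
    """Hypothetical constrained perturbation: small noise on a random
    subset of features, clipped to the ranges observed in training.
    This only illustrates the shape of a constrained evasion attack;
    it does NOT implement A2PM."""
    X_adv = X.copy()
    mask = rng.random(X.shape) < 0.3  # perturb roughly 30% of features
    X_adv += mask * rng.normal(0.0, eps, X.shape) * (hi - lo)
    return np.clip(X_adv, lo, hi)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "XGB": XGBClassifier(random_state=0),
    "LGBM": LGBMClassifier(random_state=0),
}
X_adv_te = perturb(X_te)
for name, model in models.items():
    model.fit(X_tr, y_tr)  # regular training
    clean = accuracy_score(y_te, model.predict(X_te))
    attacked = accuracy_score(y_te, model.predict(X_adv_te))
    # Adversarial training: augment with perturbed copies and retrain.
    model.fit(np.vstack([X_tr, perturb(X_tr)]), np.concatenate([y_tr, y_tr]))
    hardened = accuracy_score(y_te, model.predict(X_adv_te))
    print(f"{name}: clean={clean:.3f} attacked={attacked:.3f} "
          f"after adversarial training={hardened:.3f}")

# Unsupervised baseline: IFOR flags anomalies (-1) vs. normal traffic (1).
ifor = IsolationForest(random_state=0).fit(X_tr)
print("IFOR predictions on attacked data:", ifor.predict(X_adv_te)[:10])
```

Clipping perturbed values to the ranges seen in training mirrors the constraint requirement discussed in this work: an evasion example is only realistic if it remains a valid network flow, rather than an arbitrary point in feature space.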