Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats: adversarial attacks and distribution shifts. Although research on the robustness of ML has grown, current studies primarily address specific challenges in isolation: they target a particular aspect of robustness and propose techniques to enhance that aspect alone. However, as a capability to respond to unexpected situations, the robustness of ML should be comprehensively built and maintained at every stage. In this paper, we aim to link the varied efforts across the entire ML workflow to guide the design of ML-based NIDSs with systematic robustness. Toward this goal, we conduct a methodical evaluation of the progress made thus far in enhancing the robustness of the targeted NIDS application task. Specifically, we examine the robustness of ML-based NIDSs against adversarial attacks and distribution-shift scenarios. For each perspective, we organize the literature into robustness-related challenges and technical solutions, structured along the ML workflow. For instance, we introduce advanced potential solutions that can improve robustness, such as data augmentation, contrastive learning, and robustness certification. Based on our survey, we identify and discuss research gaps and future directions for ML robustness in the field of NIDS. Finally, we highlight that building and patching robustness throughout the life cycle of an ML-based NIDS is critical.