Cloud computing, with its novel presence at the cloud edge, is indispensable to the success of fifth-generation (5G) and sixth-generation (6G) networks. This is due to the myriad of expected applications and their stringent latency and bandwidth constraints, e.g., eHealth and autonomous vehicles, which fall under the 5G Ultra-Reliable Low-Latency Communication (URLLC) class. To reduce bandwidth usage in the cloud core network and enable lower-latency responses, edge computing proposes a new servicing paradigm that brings computation and intelligence close to where end users reside, i.e., at the edge of the cloud. This is achieved by the edge infrastructure's ability to support a variety of intelligent and computational jobs at the radio access network (RAN) level. However, many AI tasks, particularly those involving deep learning, pose a challenge because they require far more memory and processing capacity than the edge can provision; only cloud data centers can. To address this issue, various edge-intelligence methods, such as quantization, pruning, and distributed inference, have been proposed in the literature. This work investigates split neural networks (SNNs), a neural network architecture with multiple early exits. SNNs greatly reduce memory and computation requirements, making them a promising architecture for edge devices and applications. Moreover, as the use of SNNs grows, it becomes increasingly essential to study and validate their safety properties, notably their tolerance to small perturbations, before deploying them in safety-critical applications. This paper presents, to the best of our knowledge, the first exploratory work on the robustness assessment of split edge-cloud neural networks. We evaluated SNN robustness using auto_LiRPA, an advanced neural network verification tool based on bound computation.
We also compared the relative robustness of SNNs to that of a standard NN, considering the many parameters that influence SNN robustness. Our experimental results show that an SNN not only reduces the average inference time by three quarters, but also proves to be four to ten times more resilient to adversarial attacks than a standard NN.