Industrial control systems (ICSs) are widely used and vital to industry and society. Their failure can have severe impact on both the economy and human life. Hence, these systems have become an attractive target for attacks, both physical and cyber. A number of attack detection methods have been proposed; however, they suffer from low detection rates or substantial false positive rates, or are system specific. In this paper, we study an attack detection method based on simple and lightweight neural networks, namely, 1D convolutions and autoencoders. We apply these networks to both the time and frequency domains of the collected data and discuss the pros and cons of each approach. We evaluate the suggested method on three popular public datasets and achieve detection rates matching or exceeding previously published detection results, while featuring a small footprint, short training and detection times, and generality. We also demonstrate the effectiveness of PCA, which, given proper data preprocessing and feature selection, can provide high attack detection scores in many settings. Finally, we study the proposed method's robustness against adversarial attacks that exploit inherent blind spots of neural networks to evade detection while achieving the intended physical effect. Our results show that the proposed method is robust to such evasion attacks: in order to evade detection, the attacker is forced to sacrifice the desired physical impact on the system. This finding suggests that neural networks trained under the constraints of the laws of physics can be trusted more than networks trained under more flexible conditions.
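To illustrate the kind of simple and lightweight model the abstract refers to, the following is a minimal sketch, not the authors' exact architecture, of a 1D convolutional autoencoder that flags attacks by reconstruction error over sliding windows of sensor and actuator readings. The layer sizes, window length, and channel count are illustrative assumptions, and the PyTorch framing is our own choice.

```python
# Minimal sketch of a lightweight 1D convolutional autoencoder for
# anomaly-based attack detection (illustrative; layer sizes and window
# length are assumptions, not the paper's exact configuration).
import torch
import torch.nn as nn

class Conv1dAutoencoder(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        # Encoder: compress the multivariate window along the time axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # Decoder: reconstruct the original window from the compressed code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 32, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, n_features, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        # x: (batch, n_features, window_length)
        return self.decoder(self.encoder(x))

def anomaly_scores(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-window mean squared reconstruction error; high values suggest an attack."""
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2))

# Example: score a batch of 8 windows, each with 10 channels and 64 time steps.
model = Conv1dAutoencoder(n_features=10)
scores = anomaly_scores(model, torch.randn(8, 10, 64))
```

In such a setup the model is trained only on attack-free data, and a threshold on the reconstruction error (for example, a high percentile of scores on held-out normal data) separates normal operation from attacks; the same structure can be applied to frequency-domain representations of the windows.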