Neural networks are increasingly used for intrusion detection in industrial control systems (ICS). Because neural networks are vulnerable to adversarial examples, attackers who wish to damage an ICS can attempt to hide their attacks from detection using adversarial example techniques. In this work we address the domain-specific challenges of constructing such attacks against autoregressive-based intrusion detection systems (IDS) in an ICS setting. We model an attacker who can compromise a subset of sensors in an ICS protected by an LSTM-based IDS. The attacker manipulates the data sent to the IDS, seeking to hide the presence of real cyber-physical attacks occurring in the ICS. We evaluate our adversarial attack methodology on the Secure Water Treatment system, first on purely continuous data and then on data containing a mixture of discrete and continuous variables. In the continuous data domain, our attack successfully hides the cyber-physical attacks while requiring, on average, 2.87 of the 12 monitored sensors to be compromised. With both discrete and continuous data, our attack requires, on average, 3.74 of the 26 monitored sensors to be compromised.