A common concern in experimental research is the auditability and reproducibility of experiments. Experiments are usually designed, provisioned, managed, and analyzed by diverse teams of specialists (e.g., researchers, technicians, and engineers) and may require many resources (e.g., cloud infrastructure, specialized equipment). Even though researchers strive to document experiments accurately, documentation is often incomplete, making the experiments hard to reproduce. Moreover, when a similar experiment must be created, we often end up "reinventing the wheel": it is easier to start from scratch than to reuse existing work, and valuable embedded best practices and prior experience are lost. In behavioral studies this has contributed to the reproducibility crisis. To tackle this challenge, we propose the "Experiments as Code" paradigm, in which the entire experiment is not only documented but also accompanied by the automation code needed to provision, deploy, manage, and analyze it. To this end, we define the Experiments as Code concept, provide a taxonomy for the components of a practical implementation, and present a proof of concept with a simple desktop VR experiment that showcases the benefits of its "as code" representation, i.e., reproducibility, auditability, debuggability, reusability, and scalability.