Imitation Learning is a discipline of Machine Learning primarily concerned with replicating the observed behavior of agents known to perform well on a given task, collected in demonstration datasets. In this paper, we introduce a pipeline for collecting demonstrations and training models that produce motion plans for industrial robots. Object throwing serves as the motivating use case. Multiple input data modalities are surveyed, and motion capture is selected as the most practicable. Two model architectures operating autoregressively are examined: feedforward and recurrent neural networks. Trained models successfully execute throws on a real robot, and a battery of quantitative evaluation metrics is proposed, including extrapolated throw accuracy estimates. Recurrent neural networks outperform feedforward ones in most respects, with the best models achieving an assessed mean throw error on the order of 0.1 to 0.2 m at distances of 1.5 to 2.0 m. The data collection, pre-processing, and model training aspects of our proposed approach show promise, but further work is required on Cartesian motion planning tools before the approach is suitable for use in production.
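The autoregressive operation mentioned above can be illustrated with a minimal sketch: a model maps the current joint state to the next one, and each prediction is fed back as the next input to unroll a full trajectory. All names here (`dummy_model`, `rollout`, the 6-DoF state) are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

def dummy_model(state):
    # Stand-in for a trained network: drifts each joint a small
    # step toward a fixed target configuration (all ones).
    target = np.ones_like(state)
    return state + 0.1 * (target - state)

def rollout(model, initial_state, n_steps):
    """Autoregressive unrolling: feed each prediction back as input."""
    trajectory = [initial_state]
    state = initial_state
    for _ in range(n_steps):
        state = model(state)
        trajectory.append(state)
    return np.stack(trajectory)

# Unroll 50 steps for a hypothetical 6-joint arm starting at zero.
traj = rollout(dummy_model, np.zeros(6), 50)
```

A recurrent variant would additionally carry a hidden state through the loop; the feed-back structure of the rollout is the same in both cases.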