This paper investigates techniques for bridging the reality gap between virtual and physical robots by implementing a virtual environment and a physical robotic platform to evaluate the robustness of transfer learning from virtual to real-world robots. The proposed approach uses two reinforcement learning (RL) methods, deep Q-learning and Actor-Critic, to train a model that learns in a virtual environment and performs in a physical one. Techniques such as domain randomization and induced noise during training introduce variability and ultimately improve the learned policies. The experimental results demonstrate the effectiveness of the Actor-Critic reinforcement learning technique in bridging the reality gap.
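The two sim-to-real techniques named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names (`friction`, `mass`), perturbation ranges, and noise level are hypothetical, chosen only to show how per-episode domain randomization and per-step observation noise are typically injected during training.

```python
import random

def randomize_domain(base_params, ranges, rng=random.Random(0)):
    """Resample simulator parameters at the start of each training episode.

    base_params: nominal physics values (hypothetical names, not from the paper)
    ranges: fractional perturbation bound per parameter
    """
    return {k: v * (1.0 + rng.uniform(-ranges[k], ranges[k]))
            for k, v in base_params.items()}

def add_observation_noise(obs, sigma, rng=random.Random(0)):
    """Inject Gaussian noise into each observation, mimicking real sensor error."""
    return [x + rng.gauss(0.0, sigma) for x in obs]

# Example: friction and mass are perturbed each episode,
# and every observation vector is corrupted before the policy sees it.
params = randomize_domain({"friction": 0.8, "mass": 1.2},
                          {"friction": 0.2, "mass": 0.1})
noisy = add_observation_noise([0.0, 1.0, -0.5], sigma=0.05)
```

Training under such perturbations forces the policy to rely on features that remain stable across parameter settings, which is what makes the learned behavior more likely to survive the transfer to physical hardware.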