Autonomous drone missions require continuous and reliable estimates of the drone's attitude, velocity, and position. Traditionally, these states are estimated by applying an Extended Kalman Filter (EKF) to accelerometer, gyroscope, barometer, magnetometer, and GPS measurements. When the GPS signal is lost, position and velocity estimates deteriorate quickly, especially when low-cost inertial sensors are used. This paper proposes an estimation method that uses a Recurrent Neural Network (RNN) to provide reliable estimates of a drone's position and velocity in the absence of a GPS signal. The RNN is trained on a public dataset collected using Pixhawk, a low-cost commercial autopilot that logs the raw sensor measurements (network inputs) and the corresponding EKF estimates (ground-truth outputs). The dataset comprises 548 flight logs with durations ranging from 4 to 32 minutes. Of these, 465 flights totaling 45 hours are used for training, and the remaining 83 flights totaling 8 hours are held out for validation. The error in a single flight is taken to be the maximum absolute difference in 3D position (MPE) between the RNN predictions (without GPS) and the ground truth (EKF with GPS). On the validation set, the median MPE is 35 meters, and the proposed method achieves MPE values as low as 2.7 meters on a 5-minute flight. In 90% of the validation flights, the MPE is below 166 meters. The network was tested experimentally and runs in real time.
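A minimal formalization of this per-flight error metric, under the assumption that the maximum is taken over the flight duration of the Euclidean norm of the 3D position difference (the abstract does not state the exact norm), is

\[ \mathrm{MPE} = \max_{t \in [0,\, T]} \left\lVert \hat{\mathbf{p}}_{\mathrm{RNN}}(t) - \mathbf{p}_{\mathrm{EKF}}(t) \right\rVert_2, \]

where \( \hat{\mathbf{p}}_{\mathrm{RNN}}(t) \) denotes the RNN position prediction (without GPS) and \( \mathbf{p}_{\mathrm{EKF}}(t) \) the EKF estimate (with GPS) at time \( t \) over a flight of duration \( T \).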