One way to make robots more intelligent and flexible is to let them learn human control strategies from demonstrations. In contrast to traditional preprogramming methods, this approach requires robots to generalize to similar scenarios. In this study, we apply learning from demonstration to a wheeled inverted pendulum, realizing balance control and trajectory following simultaneously. The learned model maps the robot's position and pose to wheel speeds, so that a robot regulated by the model can follow a desired trajectory and finally stop at a target position. Experiments were undertaken to validate the proposed method by testing its path-following and balance-keeping capabilities.
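The core idea, a policy fit from demonstration data that maps robot state to wheel speeds, can be sketched as follows. This is a minimal illustration only: it assumes a hypothetical five-dimensional state (position, heading, tilt angle, tilt rate) and uses plain least-squares regression in place of whatever learning model the study actually employs; the data here is synthetic.

```python
import numpy as np

# Hypothetical demonstration data: each state row is
# [x, y, heading, tilt_angle, tilt_rate]; each target row is
# the demonstrated wheel speeds [left, right]. Synthetic stand-ins.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 5))     # stand-in for recorded robot states
true_W = rng.normal(size=(5, 2))
wheel_speeds = states @ true_W         # stand-in for demonstrated commands

# Least-squares fit of a linear policy: wheel_speeds ~ states @ W.
W, *_ = np.linalg.lstsq(states, wheel_speeds, rcond=None)

def policy(state):
    """Map a robot state (or batch of states) to wheel speeds."""
    return state @ W

# On this exactly-linear synthetic data the fit is essentially perfect.
err = np.max(np.abs(policy(states) - wheel_speeds))
print(err < 1e-8)  # prints True
```

At control time, `policy` would be evaluated at each step on the current state estimate, closing the loop between perception and wheel commands; a real implementation would use a richer, nonlinear model trained on recorded human demonstrations.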