Motion platforms have been widely used in Virtual Reality (VR) systems for decades to simulate motion in virtual environments, and they have several applications in emerging fields such as driving assistance systems, vehicle automation and road risk management. Currently, the development of new immersive VR systems faces unique challenges in meeting user requirements, such as the introduction of high-resolution 360° panoramic images and videos. With this type of visual information, applying the traditional methods of generating motion cues becomes much more complicated, since the motion properties required to feed the motion cueing algorithms generally cannot be computed. For this reason, this paper presents a new method for generating non-real-time gravito-inertial cues with motion platforms, using a system fed with both computer-generated (simulation-based) images and recorded video imagery. It is a hybrid method in which part of the gravito-inertial cues, those with associated acceleration information, are generated with a classical approach that applies physical modeling in a VR scene together with washout filters, while the cues derived from recorded images and video, which lack acceleration information, are generated ad hoc in a semi-manual way. The resulting motion cues were then refined according to the contributions of different experts, following a successive approximation method inspired by Wideband Delphi. A subjective evaluation of the proposed method showed that the motion signals refined in this way were perceived by users as significantly better than the original, non-refined ones. The final system, developed as part of an international road safety education campaign, could be useful for developing further VR-based applications in key fields such as driving assistance, vehicle automation and road crash prevention.
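As context for the classical part of the approach, the sketch below illustrates a generic classical washout scheme of the kind mentioned above: high-pass filtering of transient accelerations and angular rates to keep the platform near its neutral pose, plus low-pass tilt coordination to render sustained accelerations as tilt. This is only a minimal illustration, not the implementation described in the paper; the function name, filter orders and cutoff frequencies are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, lfilter

G = 9.81  # gravitational acceleration [m/s^2]

def classical_washout(acc_xyz, omega_xyz, fs, hp_cut=0.5, lp_cut=0.2):
    """Illustrative classical washout (not the paper's implementation).

    acc_xyz, omega_xyz: (N, 3) arrays of vehicle accelerations [m/s^2]
    and angular rates [rad/s]; fs: sample rate [Hz].
    Returns platform displacement commands [m] and angle commands [rad].
    """
    # Translational channel: keep only transient accelerations,
    # then double-integrate to platform displacement commands.
    b_hp, a_hp = butter(2, hp_cut / (fs / 2), btype="high")
    acc_hp = lfilter(b_hp, a_hp, acc_xyz, axis=0)
    vel = np.cumsum(acc_hp, axis=0) / fs
    pos = np.cumsum(vel, axis=0) / fs

    # Rotational channel: high-pass angular rates, integrate to angles.
    omega_hp = lfilter(b_hp, a_hp, omega_xyz, axis=0)
    angles = np.cumsum(omega_hp, axis=0) / fs  # columns: roll, pitch, yaw

    # Tilt coordination: low-pass sustained surge/sway accelerations
    # and map them to pitch/roll tilt (small-angle gravity alignment).
    b_lp, a_lp = butter(2, lp_cut / (fs / 2), btype="low")
    acc_lp = lfilter(b_lp, a_lp, acc_xyz[:, :2], axis=0)
    tilt_pitch = np.arcsin(np.clip(acc_lp[:, 0] / G, -1.0, 1.0))  # surge -> pitch
    tilt_roll = -np.arcsin(np.clip(acc_lp[:, 1] / G, -1.0, 1.0))  # sway  -> roll

    angles[:, 0] += tilt_roll
    angles[:, 1] += tilt_pitch
    return pos, angles
```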