New hardware, in particular massively parallel and graphics processing unit (GPU)-based computers, has boosted molecular simulations to levels that would have been unthinkable just a decade ago. At the classical level, it is now possible to perform atomistic simulations of systems containing over 10 million atoms and to collect trajectories extending into the millisecond range. Such achievements are moving biosimulations into the mainstream of structural biology research, complementary to experimental studies. The drawback of this impressive development is the management of the resulting data, especially at a time when the inherent value of data is becoming increasingly apparent. In this review, we summarize the main characteristics of (bio)simulation data and discuss how they can be stored, how they can be reused for new, unexpected projects, and how they can be transformed to make them FAIR (findable, accessible, interoperable, and reusable).