Three-dimensional (3D) imaging technologies, particularly those that yield point clouds, have been increasingly explored in academia and industry. However, acquiring such data can still be expensive and time-consuming, limiting procedures that depend on large datasets, such as generating training data for machine learning, estimating forest canopy structure, and performing subsea surveys. A growing alternative is to develop simulators for imaging systems that virtually scan digital scenes and generate synthetic point clouds of the targets. This work presents a guideline for developing modular Light Detection and Ranging (LiDAR) system simulators based on parallel raycasting algorithms, with the sensor described by metrological parameters and error models. A procedure for calibrating the simulated sensor is also presented, based on comparison with measurements from a commercial LiDAR sensor. The sensor simulator developed as a case study produced robust synthetic point clouds in different scenarios, enabling the creation of datasets for concept tests, the combination of real and virtual data, and other applications.
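To illustrate the general idea of raycasting-based point cloud synthesis with an error model, the sketch below casts a grid of rays from the sensor origin against an analytic plane and perturbs each measured range with additive Gaussian noise. It is a minimal illustration under assumed parameters (field of view, angular resolution, noise standard deviation) and is not the simulator architecture or sensor model presented in this work.

```python
import numpy as np

def simulate_lidar_scan(plane_point, plane_normal,
                        h_fov_deg=90.0, v_fov_deg=30.0,
                        h_steps=181, v_steps=31,
                        max_range=100.0,
                        range_noise_std=0.02,
                        rng=None):
    """Cast a grid of rays from the origin against a single plane and
    return the noisy hits as an (N, 3) synthetic point cloud.

    The parameter names and the simple Gaussian range-error model are
    illustrative assumptions, not the paper's sensor model.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Angular scan pattern (azimuth x elevation), vectorised over all beams
    az = np.radians(np.linspace(-h_fov_deg / 2, h_fov_deg / 2, h_steps))
    el = np.radians(np.linspace(-v_fov_deg / 2, v_fov_deg / 2, v_steps))
    az_grid, el_grid = np.meshgrid(az, el, indexing="ij")

    # Unit ray directions in the sensor frame (x forward, y left, z up)
    dirs = np.stack([np.cos(el_grid) * np.cos(az_grid),
                     np.cos(el_grid) * np.sin(az_grid),
                     np.sin(el_grid)], axis=-1).reshape(-1, 3)

    # Ray-plane intersection: t = n.(p0 - o) / n.d, with the origin o = 0
    n = np.asarray(plane_normal, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    denom = dirs @ n
    t = np.full(len(dirs), np.inf)
    valid = np.abs(denom) > 1e-9
    t[valid] = (p0 @ n) / denom[valid]

    # Keep hits in front of the sensor and within the maximum range
    hit = (t > 0) & (t <= max_range)
    ranges = t[hit]

    # Additive Gaussian range error applied along each ray direction
    ranges_noisy = ranges + rng.normal(0.0, range_noise_std, ranges.shape)
    return dirs[hit] * ranges_noisy[:, None]


if __name__ == "__main__":
    # Example: scan a vertical wall 10 m in front of the sensor
    cloud = simulate_lidar_scan(plane_point=[10.0, 0.0, 0.0],
                                plane_normal=[-1.0, 0.0, 0.0])
    print(cloud.shape)  # (number of returns, 3)
```

In a full simulator the single analytic plane would be replaced by intersection tests against the scene geometry (e.g. triangle meshes traversed in parallel), and the Gaussian term by the calibrated error models of the target sensor.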