Simultaneous localization and mapping (SLAM) is a fundamental topic in robotics due to its applications in autonomous driving. Over the last decades, many systems have been proposed, working on data coming from different sensors, such as cameras or LiDARs. Although excellent results have been reached, the majority of these methods exploit the data as is, without extracting additional information or considering multiple sensors simultaneously. In this paper, we present MCS-SLAM, a Graph SLAM system that performs sensor fusion by exploiting multiple cues extracted from sensor data: color/intensity, depth/range, and normal information. For each sensor, motion estimation is achieved through minimization of the pixel-wise difference between two multi-cue images. All estimates are then collectively optimized to obtain a single coherent transformation. The point clouds received as input are also used to perform loop detection and closure. We compare the performance of the proposed system with state-of-the-art point cloud-based methods, LeGO-LOAM-BOR, LIO-SAM, HDL and ART-SLAM, and show that the proposed algorithm achieves lower accuracy than the state of the art while requiring much less computational time. The comparison is made by evaluating the estimated trajectory displacement on the KITTI dataset.
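
To illustrate the pixel-wise objective mentioned above, the following is a minimal sketch (not the authors' implementation) of a weighted residual between two multi-cue images; the channel layout (one intensity channel, one range channel, three normal components) and the cue weights are assumptions made for the example.

```python
# Illustrative sketch: pixel-wise difference between two multi-cue images,
# combining intensity, range, and normal cues. Channel layout and weights
# are assumptions, not the paper's exact formulation.
import numpy as np

def multi_cue_cost(img_ref, img_cur, weights=(1.0, 1.0, 1.0)):
    """Weighted sum-of-squares difference between two multi-cue images.

    Each image has shape (H, W, 5): channel 0 is intensity, channel 1 is
    range, channels 2:5 are the surface normal components.
    """
    w_int, w_rng, w_nrm = weights
    diff = img_ref - img_cur
    return (w_int * np.sum(diff[..., 0] ** 2)
            + w_rng * np.sum(diff[..., 1] ** 2)
            + w_nrm * np.sum(diff[..., 2:] ** 2))

# Per-sensor motion estimation would minimize this cost over the candidate
# transform used to render img_cur from the current scan; the per-sensor
# estimates are then jointly optimized into one coherent transformation.
```
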