The deployment of robotic systems that execute specific tasks is challenged by the prevalence of dynamic objects in real-world scenes. Two robotic tasks arising from this challenge, dynamic Simultaneous Localization and Mapping (SLAM) and moving object perception, are crucial for enhancing system robustness and reinforcing environment awareness. Existing public datasets are diverse in platforms, sensor combinations, scenarios, and label annotations, but few adequately benchmark these tasks. To fill this gap, we introduce the Multimodal Campus-Scapes (M2CS) dataset, which provides robot-centric, synchronized Light Detection and Ranging (LiDAR)-Inertial-Visual-GNSS data with 3D moving object annotations in specific dynamic scenarios. The dataset covers a variety of dynamic object types and densities, annotates over 160,000 LiDAR scans, and releases ground-truth trajectories acquired by a GNSS-RTK/INS system. We benchmark existing SLAM and moving object perception methods on the dataset, driving relevant research toward overcoming this challenge. The M2CS dataset is published at https://github.com/Zhaohuanfeng/M2CS, and we hope it promotes research on robotics in complex environments.