With the advancement of science and technology, the development and application of unmanned mobile vehicles (UMVs) have become subjects of worldwide industrial interest. The development goals of UMVs vary with their industrial applications, which include navigation, autonomous driving, and environmental recognition, and these applications have become priority research targets across many fields. UMVs rely on sensors to collect environmental data for scene analysis and path planning; however, a single sensor is easily affected by natural environmental factors, leading to poor recognition performance. This study therefore proposes a fusion technique that combines heterogeneous camera and LiDAR (laser imaging, detection, and ranging) sensors on an Ackermann UMV, exploiting the complementary strengths of each sensor to improve the accuracy and stability of environmental detection and recognition. A camera captures real-time images, and YOLOv4-tiny together with simple online and realtime tracking (SORT) is employed to locate, classify, and track objects in those images. LiDAR simultaneously provides real-time distance measurements for the detected objects. An inertial measurement unit supplies odometry information for estimating the position of the Ackermann UMV, and a static map is built through simultaneous localization and mapping (SLAM). When the user commands the Ackermann UMV to move to a target point, the vehicle control center, built on the Robot Operating System (ROS), activates the navigation function through the navigation control module. The Ackermann UMV can thus reach its destination while identifying obstacles and pedestrians in real time during motion.
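As a rough illustration of the camera-LiDAR fusion step described above, the following Python sketch shows a minimal ROS node that runs YOLOv4-tiny through OpenCV's DNN module and attaches a LiDAR range to each detection. This is a sketch under stated assumptions, not the authors' implementation: the topic names (/camera/image_raw, /scan), the local weight files, and the fixed 60-degree horizontal field of view used for the pixel-to-bearing mapping are all placeholders, and the SORT tracking and odometry stages are omitted for brevity.

```python
# Minimal camera-LiDAR fusion sketch (assumptions: ROS 1 with rospy, OpenCV
# built with the DNN module, and local yolov4-tiny.cfg / yolov4-tiny.weights
# files; topic names and the camera FOV below are placeholders, not taken
# from the paper).
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, LaserScan


class FusionNode:
    def __init__(self):
        # YOLOv4-tiny detector loaded through OpenCV's DNN module.
        net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
        self.model = cv2.dnn_DetectionModel(net)
        self.model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)
        self.bridge = CvBridge()
        self.latest_scan = None
        rospy.Subscriber("/scan", LaserScan, self.on_scan, queue_size=1)
        rospy.Subscriber("/camera/image_raw", Image, self.on_image, queue_size=1)

    def on_scan(self, scan):
        # Keep the most recent LiDAR sweep for distance lookup.
        self.latest_scan = scan

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        classes, scores, boxes = self.model.detect(
            frame, confThreshold=0.4, nmsThreshold=0.4)
        for cls, score, (x, y, w, h) in zip(classes, scores, boxes):
            distance = self.distance_at(x + w / 2.0, frame.shape[1])
            rospy.loginfo("class=%d conf=%.2f range=%.2f m",
                          int(cls), float(score), distance)

    def distance_at(self, pixel_x, image_width, fov_deg=60.0):
        # Crude camera-to-LiDAR association: map the box centre's horizontal
        # pixel offset to a bearing (assumed 60 deg horizontal FOV) and read
        # the LiDAR range at that bearing. A calibrated extrinsic transform
        # between the two sensors would replace this in a real system.
        if self.latest_scan is None:
            return float("nan")
        bearing = np.radians((pixel_x / image_width - 0.5) * fov_deg)
        idx = int((bearing - self.latest_scan.angle_min)
                  / self.latest_scan.angle_increment)
        idx = max(0, min(idx, len(self.latest_scan.ranges) - 1))
        return self.latest_scan.ranges[idx]


if __name__ == "__main__":
    rospy.init_node("fusion_sketch")
    FusionNode()
    rospy.spin()
```

The per-sensor callbacks are deliberately decoupled: the LiDAR callback only caches the latest sweep, so image processing always fuses against the freshest range data without blocking either stream.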
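The navigation step can likewise be pictured as a goal request sent to the ROS navigation stack. The sketch below assumes the vehicle control center exposes the standard move_base action server over the SLAM map frame; the frame name and target coordinates are hypothetical placeholders, since the paper does not specify the interface of its navigation control module.

```python
# Minimal sketch of commanding a navigation goal through the ROS navigation
# stack (assumption: a standard move_base action server is running; the
# "map" frame and the coordinates below are placeholders).
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_goal_sketch")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"    # goal expressed in the SLAM map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0      # placeholder target point
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0   # identity quaternion: face along +x

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("navigation result state: %d", client.get_state())
```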