In unknown environments, mobile robots can use visual Simultaneous Localization and Mapping (vSLAM) to localize themselves while building sparse feature maps and dense maps. However, traditional vSLAM assumes a static scene and rarely accounts for the dynamic objects present in real environments. In addition, sparse feature maps and dense maps carry no semantic information, which makes it difficult for robots to perform high-level semantic tasks. To improve environment perception and mapping accuracy for mobile robots in dynamic indoor environments, we propose a semantic-information-based optimized vSLAM algorithm. The algorithm extends ORB-SLAM2 with dynamic region detection and semantic segmentation modules. First, a dynamic region detection module is added to the visual odometry: dynamic regions of the image are detected by combining the homography matrix with a dense optical flow method, which improves the accuracy of pose estimation in dynamic environments. Second, semantic segmentation of images is performed with the BiSeNet V2 network. To address the over-segmentation problem in semantic segmentation, a region growing algorithm that incorporates depth information is proposed to refine the 3D segmentation. During map building, the semantic information and detected dynamic regions are used to remove dynamic objects and construct an indoor map containing semantic information. The system not only effectively removes the influence of dynamic objects on pose estimation, but also exploits image semantics to build indoor semantic maps. The proposed algorithm is evaluated and analyzed on the TUM RGB-D dataset and in real dynamic scenes. The results show that our algorithm achieves higher accuracy than ORB-SLAM2 and DS-SLAM in dynamic scenarios.
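The sketch below illustrates the dynamic-region idea described above, not the authors' implementation: a homography estimated from sparse ORB matches models the camera-induced (background) motion between consecutive frames, and pixels whose dense optical flow deviates from that motion are flagged as dynamic. The function name `detect_dynamic_region` and the threshold `flow_thresh` are illustrative assumptions; only standard OpenCV calls are used.

```python
# Minimal sketch, assuming grayscale consecutive frames from an RGB-D stream.
import cv2
import numpy as np

def detect_dynamic_region(prev_gray, cur_gray, flow_thresh=2.0):
    """Return a binary mask (1 = likely dynamic pixel).
    flow_thresh (pixels) is an assumed tuning parameter, not from the paper."""
    # 1. Sparse ORB matches between the two frames.
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # 2. Homography modelling the dominant (camera/background) motion.
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

    # 3. Dense optical flow (Farneback) for every pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # 4. Displacement each pixel would have if it were static, according to H.
    h, w = prev_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.stack([xs, ys], axis=-1).astype(np.float32).reshape(-1, 1, 2)
    predicted = (cv2.perspectiveTransform(grid, H) - grid).reshape(h, w, 2)

    # 5. Pixels whose observed flow disagrees with the prediction are dynamic.
    residual = np.linalg.norm(flow - predicted, axis=2)
    return (residual > flow_thresh).astype(np.uint8)
```

In a full pipeline, a mask of this kind would be used to discard features falling inside dynamic regions before pose estimation and to exclude those regions when fusing depth data into the map.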