The global agriculture industry faces various problems, such as rapid population growth and climate change. In Japan in particular, the agricultural workforce is declining. To address this problem, the Japanese government aims to realize "smart agriculture," which applies information and communication technology, artificial intelligence, and robotics. Smart agriculture requires robot technology capable of performing weeding and other labor-intensive agricultural tasks. Robotic weeding consists of an object detection method that uses machine learning to distinguish weeds from crops, and an autonomous weeding system that removes weeds with robot hands or lasers. However, the appropriate combination of these methods changes as the crop grows, and a weeding system must account for this. This study addresses weed detection and autonomous weeding on ridges where crops and weeds grow intermixed, such as in garlic and ginger fields. We first develop a weed detection method based on Mask R-CNN, which detects individual weeds by instance segmentation in color images captured by an RGB-D camera. The proposed system obtains weed coordinates in physical space from the detected weed region and the depth image captured by the same camera. We then propose an approach that guides the weeding manipulator to the detected weed coordinates. This paper integrates weed detection and autonomous weeding through these two proposed methods. We evaluate the performance of the Mask R-CNN model trained on images taken in an actual field, and demonstrate that the proposed autonomous weeding system works on a reproduced ridge with artificial plants resembling garlic and weed leaves.
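The step of mapping a detected weed region plus the aligned depth image to a physical-space coordinate can be sketched with the standard pinhole-camera back-projection. This is a minimal illustration, not the authors' implementation; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the example pixel are placeholder values, and in practice they would come from the RGB-D camera's calibration.

```python
import numpy as np

def pixel_to_camera_frame(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres into the
    camera coordinate frame using the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical intrinsics and a weed-mask centroid at pixel (400, 260)
fx = fy = 615.0          # focal lengths in pixels (placeholder)
cx, cy = 320.0, 240.0    # principal point (placeholder)
target = pixel_to_camera_frame(400, 260, 0.55, fx, fy, cx, cy)
```

The resulting 3D point (here in the camera frame) would still need a hand-eye transform into the manipulator's base frame before the end effector can be guided to it.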