Echolocation has been shown to improve the independence of visually impaired people, and using ultrasound for echolocation offers additional advantages, e.g., higher spatial resolution in object sensing and easier separation from background sounds. However, humans can neither produce nor hear ultrasound unaided. A wearable device can enable ultrasonic echolocation by transmitting ultrasound from an ultrasonic speaker and converting the reflected ultrasound into audible sound. In parallel, the system can recognize objects using machine learning (ML) and assist the user. We therefore propose cooperative echolocation, which combines recognition by humans and by ML. As a first step toward cooperative echolocation, this paper examines the effectiveness of ML in echolocation. We implemented a prototype device and evaluated the performance of object detection with and without ML. The results showed that ML reduced the time required for object detection by 35.8% and significantly decreased the mental workload. Based on these findings, we discuss the design of cooperative echolocation.
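The conversion of reflected ultrasound into audible sound is, in essence, a frequency downconversion. As a minimal sketch of one common approach (heterodyning), assuming a 40 kHz carrier, a 192 kHz sample rate, and a local-oscillator frequency chosen to shift the echo near 1 kHz; the paper's actual method and parameters are not specified here:

```python
import numpy as np
from scipy.signal import butter, lfilter

def downconvert(echo, fs=192_000, f_lo=39_000, cutoff=5_000):
    """Shift an ultrasonic echo into the audible band by heterodyning.

    Assumed parameters (illustrative, not from the paper):
    a ~40 kHz echo sampled at fs, mixed with a 39 kHz local
    oscillator so the difference term lands near 1 kHz.
    """
    t = np.arange(len(echo)) / fs
    # Mixing produces sum (~79 kHz) and difference (~1 kHz) components.
    mixed = echo * np.cos(2 * np.pi * f_lo * t)
    # Low-pass filter keeps only the audible difference component.
    b, a = butter(4, cutoff / (fs / 2))
    return lfilter(b, a, mixed)
```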