By relabeling past experience with heuristic or curriculum goals, state-of-the-art reinforcement learning (RL) algorithms such as hindsight experience replay (HER), hindsight goal generation (HGG), and graph-based hindsight goal generation (G-HGG) have been able to solve challenging robotic manipulation tasks in multi-goal settings with sparse rewards. HGG outperforms HER on tasks where goals are difficult to explore by learning from a curriculum in which intermediate goals are selected based on their Euclidean distance to the target goals. G-HGG enhances HGG by selecting intermediate goals from a precomputed graph representation of the environment, making it applicable to environments with stationary obstacles. However, G-HGG cannot handle manipulation tasks with dynamic obstacles, since its graph representation is valid only in static scenarios and therefore provides no correct information to guide exploration. In this paper, we propose bounding box-based hindsight goal generation (Bbox-HGG), an extension of G-HGG that selects hindsight goals with the help of image observations of the environment, which makes it applicable to tasks with dynamic obstacles. We evaluate Bbox-HGG on four challenging manipulation tasks, showing significant improvements in both sample efficiency and overall success rate over state-of-the-art algorithms. Videos are available at https://videoviewsite.wixsite.com/bbhgg.
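The hindsight relabeling idea underlying HER and its descendants can be illustrated with a minimal sketch. This is not the paper's implementation; all function and field names (`her_relabel`, `achieved_goal`, `sparse_reward`) are illustrative assumptions, and it shows only the classic "future" relabeling strategy on a one-dimensional goal space.

```python
# Minimal sketch of HER-style hindsight relabeling (illustrative only;
# names and transition layout are assumptions, not the paper's code).
import random

def sparse_reward(achieved, goal, eps=0.05):
    """Sparse reward: 0 if the achieved goal is close enough, else -1."""
    return 0.0 if abs(achieved - goal) <= eps else -1.0

def her_relabel(episode, reward_fn, k=4):
    """Augment an episode with transitions relabeled to hindsight goals.

    episode: list of dicts with keys 'obs', 'action', 'achieved_goal', 'goal'.
    Returns the original transitions plus k relabeled copies per step,
    using the "future" strategy: new goals are goals actually achieved
    at or after the current step in the same episode.
    """
    relabeled = []
    for t, step in enumerate(episode):
        # Original transition with its sparse reward under the true goal.
        relabeled.append({**step,
                          "reward": reward_fn(step["achieved_goal"], step["goal"])})
        for _ in range(k):
            future = random.choice(episode[t:])
            new_goal = future["achieved_goal"]
            # Relabeled copy: pretend new_goal was the intended goal all along.
            relabeled.append({**step,
                              "goal": new_goal,
                              "reward": reward_fn(step["achieved_goal"], new_goal)})
    return relabeled
```

Because the relabeled goals were actually reached, some transitions receive the success reward even when the original goal was never achieved, which is what makes sparse-reward learning tractable.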