Real-time human pose estimation (HPE) with convolutional neural networks (CNNs) is critical for enabling machines to better understand humans from images and videos, and for assisting supervisors in identifying human behavior. However, CNN-based systems are susceptible to adversarial attacks, and attacks specifically targeting HPE have received little attention. We present a gradient-based adversarial example generation method, named AdaptiveFool, which performs a keypoints-invisible attack against OpenPose by aggregating the loss functions of the human keypoints and generating adaptive adversarial perturbations. In addition, we introduce an object-oriented perturbation generation step into the AdaptiveFool process that confines perturbations to the person region, eliminating background perturbations. On the COCO 2017 dataset, our attack reduces OpenPose's mean average precision to 6.3%. This research provides inspiration for future work on developing efficient and effective adversarial defense methods for HPE.
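The abstract does not spell out the AdaptiveFool algorithm, but the general recipe it describes (iterative signed-gradient descent on an aggregated keypoint loss, with the perturbation restricted to the object region) can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: the linear `keypoint_loss` is a placeholder for a real network's aggregated keypoint-confidence loss, and the function names, step size `alpha`, and budget `eps` are illustrative, not from the paper.

```python
def keypoint_loss(image, weights):
    """Toy stand-in for an aggregated keypoint-confidence loss.

    A real attack would sum the keypoint heatmap losses produced by
    OpenPose; here we use a simple weighted sum over pixel values so
    the gradient is known in closed form (it equals `weights`).
    """
    return sum(p * w for p, w in zip(image, weights))


def sign(x):
    """Sign function used for the signed-gradient update."""
    return (x > 0) - (x < 0)


def masked_sign_grad_attack(image, weights, mask, steps=10, alpha=0.01, eps=0.05):
    """Iteratively lower the keypoint loss by a signed-gradient step,
    applied only where `mask` is nonzero (the person region), and
    projected back into an L-infinity ball of radius `eps`.

    `image` is a flat list of pixel intensities in [0, 1]; `mask` is a
    0/1 list of the same length marking the object pixels.
    """
    adv = list(image)
    for _ in range(steps):
        for i in range(len(adv)):
            if mask[i]:
                # Gradient of the linear toy loss w.r.t. pixel i is weights[i].
                adv[i] -= alpha * sign(weights[i])
                # Project into the eps-ball around the clean image,
                # then clip to the valid pixel range.
                adv[i] = min(max(adv[i], image[i] - eps), image[i] + eps)
                adv[i] = min(max(adv[i], 0.0), 1.0)
    return adv
```

With a real pose network, the closed-form gradient would be replaced by backpropagation through the model, but the structure (masked signed-gradient step, then projection) is the same: the loss decreases only via pixels inside the mask, so the background is left untouched.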