Convolutional neural networks (CNNs) are powerful tools for computer vision tasks and are used extensively in daily life. However, they are susceptible to adversarial attacks. Such attacks can nevertheless be beneficial, for at least two reasons. First, revealing CNNs' vulnerabilities prompts efforts to enhance their robustness. Second, adversarial images can be employed to protect private and sensitive information from CNN-based threat models that aim to extract such data from images. For these applications, constructing high-resolution adversarial images is a practical requirement. This paper achieves the following: first, it quantifies the speed, adversity, and visual quality challenges involved in the effective construction of high-resolution adversarial images; second, it provides the operational design of a new strategy, called here the noise blowing-up strategy, which works for any attack, any scenario, any CNN, and any clean image; third, it validates this strategy through an extensive series of experiments. We exposed 100 high-resolution clean images to 7 different attacks against 10 CNNs. Our method achieved an overall average success rate of 75% in the targeted scenario and 64% in the untargeted scenario. We revisited the failed cases and, with a slight modification of our method, obtained success rates above 98.9%. To date, the noise blowing-up strategy is the first generic approach that solves all three challenges of speed, adversity, and visual quality, and therefore effectively constructs high-resolution adversarial images meeting high-quality requirements.
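
To make the intuition behind the strategy's name concrete, the following is a minimal, non-authoritative Python sketch. It assumes, based only on the name, that the strategy attacks a downscaled copy of the clean image with any off-the-shelf attack, isolates the low-resolution adversarial noise, and "blows up" (upscales) that noise to the clean image's full resolution before adding it back. The names `attack_fn`, `cnn`, and `lr_size` are illustrative placeholders, not identifiers from the paper.

```python
import cv2
import numpy as np

def noise_blowing_up(clean_hr: np.ndarray, cnn, attack_fn,
                     lr_size=(224, 224)) -> np.ndarray:
    """Sketch: build a high-resolution adversarial image from `clean_hr`
    (H x W x 3, uint8) by upscaling low-resolution adversarial noise."""
    h, w = clean_hr.shape[:2]
    # 1. Downscale the clean image to the attack's working (CNN input) resolution.
    clean_lr = cv2.resize(clean_hr, lr_size,
                          interpolation=cv2.INTER_AREA).astype(np.float32)
    # 2. Run any off-the-shelf attack at low resolution (targeted or untargeted);
    #    `attack_fn` is a hypothetical wrapper around the chosen attack.
    adv_lr = attack_fn(cnn, clean_lr)
    # 3. Isolate the low-resolution adversarial noise.
    noise_lr = adv_lr - clean_lr
    # 4. "Blow up" the noise to the original resolution; this keeps the expensive
    #    attack computation at low resolution, which is where the speed gain would come from.
    noise_hr = cv2.resize(noise_lr, (w, h), interpolation=cv2.INTER_LINEAR)
    # 5. Add the upscaled noise to the untouched high-resolution clean image.
    adv_hr = np.clip(clean_hr.astype(np.float32) + noise_hr, 0, 255).astype(np.uint8)
    return adv_hr
```

Under these assumptions, the clean high-resolution image is never processed by the attack itself, which is consistent with the abstract's claim that the strategy is agnostic to the attack, the scenario, the CNN, and the clean image.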