Recent research has revealed that subtle, imperceptible perturbations can deceive well-trained neural network models into producing incorrect outputs. Such perturbed inputs, known as adversarial examples, pose a significant threat to the safe deployment of machine learning in safety-critical systems. In this paper, we study one-pixel attacks on deep neural networks, a recently reported kind of adversarial example. Most existing methods for identifying such one-pixel attacks rely on differential evolution, which uses random selection from the current population to escape local optima. However, differential evolution can waste search time and miss good solutions when the number of iterations is insufficient. We therefore propose a gradient ascent with momentum approach that efficiently discovers good solutions to the one-pixel attack problem. Because our method takes a more direct route toward the goal than existing methods based on blind random search, it can identify one-pixel attacks effectively. Our experiments on popular CNNs demonstrate that, compared with existing methods, our technique detects one-pixel attacks significantly faster.
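To make the idea concrete, below is a minimal sketch of untargeted gradient ascent with momentum applied to a single pixel's RGB values; the abstract does not specify the paper's exact update rule, so the model, image format (a `(3, H, W)` tensor in [0, 1]), chosen pixel coordinates, and hyperparameters (`steps`, `lr`, `momentum`) here are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F


def one_pixel_gradient_ascent(model, image, true_label, pixel_xy,
                              steps=50, lr=0.1, momentum=0.9):
    """Sketch: adjust one pixel's RGB values by gradient ascent with momentum
    so as to increase the model's loss on the true label.

    Assumes `image` is a (3, H, W) tensor with values in [0, 1]; all
    hyperparameters are placeholder values, not the paper's settings.
    """
    model.eval()
    x, y = pixel_xy
    # Perturbation for the single chosen pixel (3 channels), initialised at zero.
    delta = torch.zeros(3, requires_grad=True)
    velocity = torch.zeros_like(delta)

    for _ in range(steps):
        perturbed = image.clone()
        perturbed[:, x, y] = (image[:, x, y] + delta).clamp(0.0, 1.0)
        logits = model(perturbed.unsqueeze(0))
        if logits.argmax(dim=1).item() != true_label:
            break  # the single-pixel change already flips the prediction
        # Ascend on the cross-entropy loss of the true label.
        loss = F.cross_entropy(logits, torch.tensor([true_label]))
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            velocity = momentum * velocity + grad
            delta += lr * velocity

    return delta.detach(), (x, y)
```

In contrast to differential evolution, which evaluates a population of randomly recombined candidate pixels each generation, a momentum-based ascent of this kind follows the model's own gradient signal, which is the directness the abstract refers to.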