Next-generation communication networks, also known as NextG or 5G and beyond, are the future data transmission systems that aim to connect a massive number of Internet of Things (IoT) devices, systems, applications, and consumers with high-speed data transmission and low latency. Fortunately, NextG networks can achieve these goals by building on the advanced telecommunication, computing, and Artificial Intelligence (AI) technologies developed over the last decades, and they can support a wide range of new applications. Among these technologies, AI makes a significant and unique contribution to achieving these goals in the beamforming, channel estimation, and Intelligent Reflecting Surface (IRS) applications of 5G and beyond networks. However, the security threats to AI-powered applications in NextG networks, and their mitigation, have not been investigated deeply in academia and industry because these applications are new and more complicated. This paper focuses on an AI-powered IRS implementation in NextG networks and its vulnerability to adversarial machine learning attacks. It also proposes the defensive distillation mitigation method to defend the AI-powered IRS model and improve its robustness, i.e., reduce its vulnerability. The results indicate that the defensive distillation mitigation method can significantly improve the robustness of AI-powered models and their performance under adversarial attacks.

  - Add random noise to the gradient, ∇_x L(x_adv, y) = ∇_x L(x_adv, y) + U(ε)
  - Add the gradient to the input data, x_adv = x_adv + α × sign(∇_x L),

where ε is the budget, N is the number of iterations, and α is the step size. PGD can generate stronger attacks than FGSM and BIM.

4) MOMENTUM ITERATIVE METHOD (MIM)
MIM is a variant of the BIM adversarial attack that introduces a momentum term and integrates it into the iterative attack [19]. The attack computes the gradient of the loss function with respect to the input, x, using the backpropagation algorithm, and then creates the adversarial example by adding the sign of the gradient to the input data. The steps are summarized as follows (a short implementation sketch is given at the end of this section):
• Initialize the adversarial example, x_adv = x, and the momentum, µ = 0
• Iterate i times, where i = 0, 1, 2, 3, . . . , N
  - Compute the gradient of the loss function, ∇_x L(x_adv, y)
  - Update the momentum, µ = µ + η × ∇_x L(x_adv, y)
  - Add random noise to the gradient, ∇_x L(x_adv, y) = ∇_x L(x_adv, y) + U(ε)
  - Add the gradient to the input data, x_adv = x_adv + α × sign(∇_x L),

where ε is the budget, N is the number of iterations, η is the momentum rate, and α is the step size.

Note that there are many types of adversarial attacks and defenses. Existing adversarial attacks and defenses developed for images can be applied to attack and defend intelligent reflecting surfaces and models in other fields [20], [21], [22], [23]. Cleverly designed adversarial examples can fool deep neural networks with high success rates on the test images. The adversarial examples can also be transferable across models.
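For concreteness, the following is a minimal PyTorch sketch of the MIM steps listed above. The model, the cross-entropy loss, and the hyperparameter values are placeholder assumptions (an AI-powered IRS model may use a different loss, e.g., a regression loss), the noise term U(ε) is assumed to be uniform over [−ε, ε], and the final step follows the common MIM formulation in which the perturbation uses the sign of the accumulated momentum.

```python
import torch
import torch.nn.functional as F

def mim_attack(model, x, y, eps=0.1, alpha=0.01, eta=1.0, n_iter=10):
    """Craft adversarial examples with a Momentum Iterative Method loop.

    eps    -- perturbation budget (epsilon)
    alpha  -- step size
    eta    -- momentum rate
    n_iter -- number of iterations N
    """
    x_adv = x.clone().detach()
    mu = torch.zeros_like(x)                                 # momentum initialized to 0
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)              # loss L(x_adv, y); placeholder loss
        grad = torch.autograd.grad(loss, x_adv)[0]           # gradient of the loss via backpropagation
        noise = torch.empty_like(grad).uniform_(-eps, eps)   # random noise, assumed U(-eps, eps)
        grad = grad + noise
        mu = mu + eta * grad                                  # momentum update
        # step in the sign direction of the accumulated momentum
        x_adv = x_adv.detach() + alpha * mu.sign()
        # project back so the total perturbation stays within the eps budget
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.detach()
```

In practice, an attacker would apply such a routine to batches of the victim model's inputs and measure the degradation in the model's performance on the resulting adversarial examples.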