Matrix inversion is frequently encountered in mathematics and engineering, and many methods, including the zeroing neural network (ZNN), have been proposed to solve it. Although the conventional fixed-parameter ZNN (FPZNN) can successfully address the matrix inversion problem, it typically emphasizes either convergence speed or robustness, but not both. To overcome this limitation, a double accelerated convergence ZNN (DAZNN) with noise suppression and arbitrary-time convergence is proposed to solve the dynamic matrix inversion problem (DMIP). The double accelerated convergence of the DAZNN model is achieved by specially designing exponentially decaying variable parameters and an exponential-type sign-bi-power activation function (AF). Two theoretical analyses establish the DAZNN model's arbitrary-time convergence and its robustness against additive bounded noise. A matrix inversion example illustrates that, when applied to the DMIP, the DAZNN model outperforms conventional FPZNNs equipped with six other AFs. Finally, a dynamic positioning example employing the evolution formula of the DAZNN model verifies its applicability.
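To make the ZNN approach to the DMIP concrete, the following is a minimal numerical sketch of the general ZNN evolution for dynamic matrix inversion. It defines the error E(t) = A(t)X(t) - I and drives it by dE/dt = -gamma(t)*Phi(E(t)). The time-varying gain gamma(t), the standard sign-bi-power activation, and the example matrix A(t) used below are illustrative assumptions; the specific exponentially decaying parameters and exponential-type sign-bi-power AF of the DAZNN model are defined in the paper itself, not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- Illustrative (hypothetical) design choices; not the exact DAZNN formulas ---

def gamma(t, g0=10.0, g1=1.0):
    """Exponentially decaying variable gain (illustrative form)."""
    return g0 * np.exp(-t) + g1

def sbp(E, r=0.5):
    """Standard sign-bi-power activation, applied element-wise (illustrative)."""
    return 0.5 * (np.sign(E) * np.abs(E) ** r + np.sign(E) * np.abs(E) ** (1.0 / r))

# A simple time-varying 2x2 matrix A(t) and its derivative, used as a test case.
def A(t):
    return np.array([[2.0 + np.sin(t), np.cos(t)],
                     [-np.cos(t),      2.0 + np.sin(t)]])

def dA(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def znn_rhs(t, x_flat, n=2):
    """ZNN dynamics: from E = A X - I and dE/dt = -gamma(t) * Phi(E),
    solve A(t) dX/dt = -dA(t) X - gamma(t) * Phi(A X - I) for dX/dt."""
    X = x_flat.reshape(n, n)
    E = A(t) @ X - np.eye(n)
    rhs = -dA(t) @ X - gamma(t) * sbp(E)
    dX = np.linalg.solve(A(t), rhs)  # resolve the implicit dynamics numerically
    return dX.ravel()

if __name__ == "__main__":
    n = 2
    X0 = np.zeros((n, n))                            # arbitrary initial state
    sol = solve_ivp(znn_rhs, (0.0, 10.0), X0.ravel(), max_step=1e-2)
    X_final = sol.y[:, -1].reshape(n, n)
    residual = np.linalg.norm(A(sol.t[-1]) @ X_final - np.eye(n))
    print("final residual ||A(t)X(t) - I||_F =", residual)
```

Under this generic scheme, the residual ||A(t)X(t) - I|| decays toward zero as the state X(t) tracks the time-varying inverse; the DAZNN contributions described above concern how the gain schedule and activation are designed so that this decay is doubly accelerated and tolerant of additive bounded noise.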