Computation time is of great importance in a number of domains, medicine in particular. Approximate entropy is widely used to analyze biomedical data, but its algorithm has non-linear complexity, so there is a need to reduce the time required to compute it. This paper proposes a new approach to the computation based on matrix operations and a graphics processing unit (GPU), and presents results obtained with this approach. To arrive at the solution, it was first necessary to evaluate the complexity of the algorithm for calculating approximate entropy and to express its asymptotic complexity in big-O notation. Based on the resulting complexity estimates, a new approach built on matrix calculations was developed and is proposed here, with the matrix calculations themselves implemented as parallel computations on a GPU. GPUs are widely used in machine learning and data mining, where parallel computing is central to improving performance; since matrix operations parallelize well, they make it possible to speed up the matrix calculations. One of the most promising tools for such calculations at present is the TensorFlow platform. The comparative effectiveness of the proposed approach was evaluated.
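To illustrate the idea of recasting approximate entropy as matrix operations, the following is a minimal sketch in NumPy: broadcasting replaces the two nested loops of the naive definition with a pairwise Chebyshev-distance matrix, which is the same formulation a GPU framework such as TensorFlow can parallelize. The parameter defaults (`m=2`, `r=0.2`) and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy computed via a pairwise-distance matrix.

    The naive definition compares every length-m embedded vector with
    every other one (O(N^2) comparisons); here broadcasting builds the
    full n x n Chebyshev-distance matrix in one vectorized step, the
    form that maps directly onto GPU matrix operations.
    """
    x = np.asarray(x, dtype=float)

    def phi(m):
        n = len(x) - m + 1
        # Embedding matrix: row i is the subsequence x[i : i+m].
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of rows (n x n matrix).
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Fraction of embedded vectors within tolerance r of each vector
        # (self-matches included, as in the standard definition).
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A perfectly regular signal (e.g. a constant series) yields an approximate entropy of zero, while irregular signals yield larger values; the matrix `d` is where the quadratic cost concentrates, and it is exactly this part that the GPU-based approach accelerates.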