This article studies distributed optimisation over time-varying directed networks, where a group of agents aims to cooperatively minimise a sum of local objective functions. Each agent relies only on local computation and communication throughout, without leaking its private information. By incorporating both a distributed heavy-ball method and a distributed Nesterov method, a double-accelerated distributed algorithm is developed that leverages a gradient-tracking technique and employs uncoordinated step-sizes. By using both row- and column-stochastic weight matrices, the proposed algorithm bypasses the construction of doubly stochastic weight matrices and avoids the eigenvector estimation required by some algorithms that use only row- or column-stochastic weight matrices. Under the assumptions that the agents' local objective functions are smooth and strongly convex, and that the union of every finite number of consecutive directed networks is strongly connected, the proposed algorithm is proved to converge linearly to the global optimal solution when the largest step-size is positive and sufficiently small and the largest momentum parameter is non-negative. The proposed algorithm is also applied to fixed directed networks, which are a special case of time-varying directed networks. Simulation results further verify the effectiveness of the proposed algorithm and the correctness of the theoretical findings.
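For concreteness, a double-accelerated gradient-tracking recursion of the kind described above typically takes a form such as the following (an illustrative sketch under assumed notation, not necessarily the article's exact algorithm): $x_i^k$ and $y_i^k$ denote agent $i$'s solution estimate and gradient tracker at time $k$, $A(k)$ is the row-stochastic and $B(k)$ the column-stochastic weight matrix of the directed network at time $k$, $\alpha_i > 0$ are the uncoordinated step-sizes, and $\beta_i, \gamma_i \ge 0$ are the heavy-ball and Nesterov momentum parameters:
\[
\begin{aligned}
x_i^{k+1} &= \sum_{j=1}^{n} [A(k)]_{ij}\, x_j^{k} \;-\; \alpha_i\, y_i^{k}
\;+\; \beta_i \bigl(x_i^{k} - x_i^{k-1}\bigr)
\;+\; \gamma_i \sum_{j=1}^{n} [A(k)]_{ij} \bigl(x_j^{k} - x_j^{k-1}\bigr), \\
y_i^{k+1} &= \sum_{j=1}^{n} [B(k)]_{ij}\, y_j^{k}
\;+\; \nabla f_i\bigl(x_i^{k+1}\bigr) - \nabla f_i\bigl(x_i^{k}\bigr),
\qquad y_i^{0} = \nabla f_i\bigl(x_i^{0}\bigr).
\end{aligned}
\]
In this sketch, the heavy-ball term $\beta_i (x_i^{k} - x_i^{k-1})$ and the Nesterov-type consensus term weighted by $\gamma_i$ supply the two accelerations, while $y_i^k$ tracks the average gradient through the column-stochastic mixing; setting $\beta_i = \gamma_i = 0$ recovers plain gradient tracking over directed networks.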