A new algorithm is derived for computing the actions $f(tA)B$ and $f(tA^{1/2})B$, where $f$ is the cosine, sinc, sine, hyperbolic cosine, hyperbolic sinc, or hyperbolic sine function, $A$ is an $n \times n$ matrix, and $B$ is $n \times n_0$ with $n_0 \ll n$. Here $A^{1/2}$ denotes any matrix square root of $A$, and it is never required to be computed explicitly. Given $t$, $A$, $B$, and a tolerance, the algorithm offers six independent output options; for each option, the actions of a pair of trigonometric or hyperbolic matrix functions are computed simultaneously. The algorithm scales the matrix $A$ down by a positive integer $s$, approximates $f(s^{-1}tA)B$ by a truncated Taylor series, and finally uses the recurrences of the Chebyshev polynomials of the first and second kind to recover $f(tA)B$. The scaling parameter and the degree of the Taylor polynomial are selected on the basis of a forward error analysis and a sequence of the form $\|A^k\|^{1/k}$, in such a way that the overall computational cost of the algorithm is optimized. Shifting is used, where applicable, as a preprocessing step to reduce the scaling parameter. The algorithm works for any matrix $A$, and its computational cost is dominated by the formation of products of $A$ with $n \times n_0$ matrices, which can exploit level-3 BLAS. Our numerical experiments show that the new algorithm behaves in a forward stable fashion and on most problems outperforms the existing algorithms in terms of CPU time, computational cost, and accuracy.
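To illustrate the scale, truncated-Taylor, and Chebyshev-recurrence structure described above, the following is a minimal NumPy sketch for the cosine case only. It assumes dense arrays and user-supplied values of the scaling parameter $s$ and Taylor degree $m$; the function names `cos_action_taylor` and `cos_times_B` are hypothetical, and the sketch omits the paper's error analysis, the automatic selection of $s$ and $m$ from $\|A^k\|^{1/k}$ estimates, shifting, and the simultaneous computation of function pairs.

```python
import numpy as np
from math import factorial

def cos_action_taylor(A, X, h, m):
    """Truncated Taylor approximation of cos(h*A) @ X, using only
    products of A with n-by-n0 blocks:
        cos(h*A) X  ~  sum_{j=0}^{m} (-1)^j (h*A)^{2j} X / (2j)!."""
    X = np.asarray(X, dtype=float)
    term = X.copy()                                  # holds (h*A)^{2j} X, built incrementally
    out = X.copy()
    for j in range(1, m + 1):
        term = (h * h) * (A @ (A @ term))            # two products of A per Taylor term
        out += ((-1) ** j / factorial(2 * j)) * term
    return out

def cos_times_B(A, B, t, s, m):
    """Recover cos(t*A) @ B from actions of cos(h*A), h = t/s, via the
    Chebyshev recurrence T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x), i.e.
        cos((k+1)h A) B = 2 cos(hA) [cos(khA) B] - cos((k-1)hA) B."""
    h = t / s
    C_prev = np.asarray(B, dtype=float)              # k = 0: cos(0) B = B
    C_curr = cos_action_taylor(A, C_prev, h, m)      # k = 1: cos(hA) B
    for _ in range(1, s):                            # advance k to k+1 until k = s
        C_prev, C_curr = C_curr, 2.0 * cos_action_taylor(A, C_curr, h, m) - C_prev
    return C_curr                                    # approximately cos(tA) B

# Usage: compare against a dense reference cos(tA) B for a small matrix.
if __name__ == "__main__":
    from scipy.linalg import cosm
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)) / 10
    B = rng.standard_normal((50, 3))
    t = 2.0
    approx = cos_times_B(A, B, t, s=8, m=10)
    print(np.linalg.norm(approx - cosm(t * A) @ B) / np.linalg.norm(approx))
```

Each recurrence step applies the truncated Taylor approximant of $\cos(hA)$ to an $n \times n_0$ block, so the overall work is dominated by products of $A$ with such blocks, consistent with the cost statement in the abstract.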