Matrix-by-matrix multiplication (hereafter referred to as MM) is a fundamental operation, omnipresent in modern computations in Numerical and Symbolic Linear Algebra. Its acceleration has a major impact on various fields of Modern Computation and has been a highly recognized research subject for about five decades. Researchers have introduced amazing novel techniques, found new insights into MM and numerous related computational problems, and devised advanced algorithms that perform n × n MM by using fewer than O(n^2.38) scalar arithmetic operations, versus the 2n^3 − n^2 of straightforward MM, that is, more than halfway to the information lower bound n^2. The record upper bound 3 of 1968 on the exponent of the complexity of MM decreased below 2.38 by 1987; it has been extended to various celebrated problems in many areas of computing and became the most extensively cited constant of the Theory of Computing. The progress in decreasing the record exponent, however, has virtually stalled since 1987, while many scientists are still anxious to know its sharp bound, so far restricted to the range from 2 to about 2.3728639. Narrowing this range remains a celebrated challenge.

Acceleration of MM in the Practice of Computing is a distinct challenge, much less popular but also highly important. Since 1980 the progress towards meeting the two challenges has followed two distinct paths because of the curse of recursion: all the known algorithms supporting the exponents below 2.38, or even below 2.7733, involve long sequences of nested recursive steps, which blow up the size of an input matrix.
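The 2n^3 − n^2 operation count of straightforward MM can be checked directly: n^3 scalar multiplications and n^2(n − 1) scalar additions. A minimal illustrative sketch (not code from the survey):

```python
# Straightforward (schoolbook) n x n matrix multiplication over Python
# lists, counting scalar arithmetic operations: n^3 multiplications and
# n^2 * (n - 1) additions, i.e. 2n^3 - n^2 in total.

def naive_mm(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    muls = adds = 0
    for i in range(n):
        for j in range(n):
            s = A[i][0] * B[0][j]   # first product of the inner sum
            muls += 1
            for k in range(1, n):   # n - 1 further products and additions
                s += A[i][k] * B[k][j]
                muls += 1
                adds += 1
            C[i][j] = s
    return C, muls, adds

n = 4
A = [[i + j for j in range(n)] for i in range(n)]
B = [[i * j for j in range(n)] for i in range(n)]
C, muls, adds = naive_mm(A, B)
assert muls + adds == 2 * n**3 - n**2   # 112 operations for n = 4
```

For n = 4 this is 112 operations; it is this cubic growth that the sub-O(n^2.38) algorithms beat asymptotically.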
As a result, all these algorithms improve on straightforward MM only for unfeasible MM of immense size, greatly exceeding the sizes of interest nowadays and in any foreseeable future. It is plausible, and surely highly desirable, that someone could eventually decrease the record MM exponent towards its sharp bound 2 even without ignoring the curse of recursion, but currently there are two distinct challenges of the acceleration of feasible and unfeasible MM. In particular, various known algorithms supporting exponents in the range between 2.77 and 2.81 are quite efficient for feasible MM and have been implemented. Some of them make up a valuable part of modern software for numerical and symbolic matrix computations, extensively worked on in the last decade. Still, that work has mostly relied on the MM algorithms proposed more than four decades ago, while more efficient algorithms are well known, some of which appeared in 2017.

In our review we first survey the mainstream study of the acceleration of MM of unbounded sizes, cover the progress in decreasing the exponents of MM, comment on its impact on the theory and practice of computing, and recall various fundamental concepts and techniques supporting fast MM and naturally introduced in that study by 1980. Then we demonstrate how the curse of recursion naturally entered the game of decreasing the record exponents. Finally we cover the State of the Art of efficient feasible MM, including some most ef...