A tight lower bound on the I/O required to compute an ordinary matrix-matrix multiplication on a processor with two layers of memory is established. Prior work obtained weaker lower bounds by reasoning about the number of segments needed to perform C := AB, for distinct matrices A, B, and C, where each segment is a series of operations involving M reads from and writes to fast memory, and M is the size of fast memory. A lower bound on the number of segments was then determined by obtaining an upper bound on the number of elementary multiplications performed per segment. This paper follows the same high-level approach, but improves the lower bound by (1) transforming algorithms for MMM so that they perform all computation via fused multiply-add instructions (FMAs), which allows the analysis to account only for the cost of reading the matrices, and (2) decoupling the per-segment I/O cost from the size of fast memory. For n × n matrices, the lower bound's leading-order term is 2n³/√M. A theoretical algorithm whose leading-order I/O term attains this bound is introduced, and the extent to which the state-of-the-art Goto's Algorithm attains the lower bound is discussed.
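To make the leading-order term concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of a blocked MMM that keeps a b × b block of C resident in fast memory, with b chosen near √M, and performs every update as a fused multiply-add. The function name and parameters are hypothetical; the sketch assumes b divides n, row-major storage, and ignores the small additional fast-memory space needed for the streamed elements of A and B. Under these assumptions, each resident C block requires streaming a b × n row panel of A and an n × b column panel of B, i.e., 2bn reads; with (n/b)² blocks of C, the reads of A and B total 2n³/b ≈ 2n³/√M, while reads and writes of C contribute only O(n²), matching the leading-order term of the bound.

```c
#include <stddef.h>

/* Illustrative sketch: resident-C-block MMM with all updates done as FMAs.
 * Assumes b divides n and both matrices are stored row-major. */
void mmm_resident_c_block(size_t n, size_t b,
                          const double *A, const double *B, double *C)
{
    for (size_t ib = 0; ib < n; ib += b)
        for (size_t jb = 0; jb < n; jb += b)
            /* C[ib..ib+b)[jb..jb+b) stays resident in fast memory while the
             * corresponding panels of A and B stream past it. */
            for (size_t p = 0; p < n; ++p)
                for (size_t i = ib; i < ib + b; ++i)
                    for (size_t j = jb; j < jb + b; ++j)
                        /* One FMA per elementary multiplication:
                         * C[i][j] += A[i][p] * B[p][j]. */
                        C[i*n + j] += A[i*n + p] * B[p*n + j];
}
```

This counting argument is only meant to illustrate why a bound of the form 2n³/√M is attainable to leading order; the paper's theoretical algorithm and the discussion of Goto's Algorithm address the constants and lower-order terms more carefully.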