Self-triggered control (STC) is a sample-and-hold control method aimed at reducing communication in networked control systems; however, existing STC mechanisms typically maximize only how late the next sample can be taken, and as such provide no sampling optimality in the long term. In this work, we devise a method to construct self-triggered policies that provide near-maximal average inter-sample time (AIST) while respecting given control performance constraints. To achieve this, we rely on finite-state abstractions of a reference event-triggered controller, in which early triggers are also allowed. These early triggers constitute the controllable actions of the abstraction, for which an AIST-maximizing strategy can be computed by solving a mean-payoff game. We provide optimality bounds and show how to improve them further through abstraction refinement techniques.