Abstract: Metaheuristic optimization procedures such as Evolutionary Algorithms are usually driven by an objective function which rates the quality of a candidate solution. In practice, however, it is not clear whether an objective function adequately rewards intermediate solutions on the path to the global optimum, and it may exhibit deceptiveness, epistasis, neutrality, ruggedness, and a lack of causality. In this paper, we introduce the frequency fitness H, subject to minimization, which rates how often solutions with the same objective value have been discovered so far. The ideas behind this method are that good solutions are hard to find and that, if an algorithm gets stuck at a local optimum, the frequency of the objective values of the surrounding solutions will increase over time, which will eventually allow it to leave that region again. We substitute a Frequency Fitness Assignment process (FFA) for the objective function in several different optimization algorithms. To verify the utility of FFA, we conduct a comprehensive set of experiments: the synthesis of algorithms with Genetic Programming (GP), the solution of MAX-3SAT problems with Genetic Algorithms, classification with Memetic Genetic Programming, and numerical optimization with a (1+1) Evolution Strategy. Given that they have no access to the original objective function at all, it is surprising that for some problems (e.g., the algorithm synthesis task) the FFA-based algorithm variants perform significantly better. However, this cannot be guaranteed for all tested problems. We thus also analyze scenarios where algorithms using FFA do not perform better than with the original objective functions.
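The core idea of FFA, selecting on how often an objective value has been encountered rather than on the objective value itself, can be illustrated with a minimal sketch. The following (1+1) EA on a OneMax-style bit-string problem is an assumption-laden toy example: the problem, mutation rate, update rule, and all function names are chosen for illustration and are not taken from the paper.

```python
import random

def objective(x):
    # Toy minimization problem: count the zero bits (OneMax as minimization).
    return x.count(0)

def ffa_one_plus_one_ea(n=16, steps=2000, seed=1):
    """Illustrative (1+1) EA whose selection uses frequency fitness, not the
    objective value. All parameters here are example choices."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    H = {}            # H[v] = how often objective value v has been encountered
    best = x[:]       # best-so-far under the objective, kept only for reporting
    for _ in range(steps):
        # Standard bit-flip mutation with per-bit probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fx, fy = objective(x), objective(y)
        H[fx] = H.get(fx, 0) + 1   # update encounter frequencies of both values
        H[fy] = H.get(fy, 0) + 1
        # Selection sees only the frequencies H, never fx or fy directly:
        # rarely seen objective values win, so crowded regions are left again.
        if H[fy] <= H[fx]:
            x = y
        if objective(x) < objective(best):
            best = x[:]
    return best

best = ffa_one_plus_one_ea()
print(objective(best))
```

Note that the acceptance decision never compares `fx` and `fy` themselves; the objective is used only as a key into the frequency table, which is what allows the search to drift away from over-visited local optima.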