Federated learning is a distributed, privacy-preserving approach to collaboratively train a statistical model from the decentralized data of different parties. However, when the datasets of participants are not independent and identically distributed (non-IID), models trained by naive federated algorithms may be biased towards certain participants, and model performance across participants becomes non-uniform. This is known as the fairness problem in federated learning. In this paper, we formulate fairness-controlled federated learning as a dynamic multi-objective optimization problem to ensure fair performance across all participants. To solve the problem efficiently, we study the convergence and bias of Adam as the server optimizer in federated learning, and propose Adaptive Federated Adam (AdaFedAdam) to accelerate fair federated learning with alleviated bias. We validate the effectiveness, Pareto optimality, and robustness of AdaFedAdam in numerical experiments and show that AdaFedAdam outperforms existing algorithms, providing better convergence and fairness properties for the federated scheme.

Acceleration techniques for federated learning aim to reduce communication cost and improve convergence. For instance, momentum-based and adaptive optimization methods such as AdaGrad, Adam, and Momentum SGD have been applied to accelerate the training process [11,28,26]. However, default hyperparameters of adaptive optimizers tuned for centralized training tend not to perform well in federated settings [26]. Furthermore, optimal hyperparameters do not generalize across federated tasks, so hyperparameter optimization with, e.g., grid search is needed for each specific federated task, which is infeasible due to the expensive (and sometimes unbounded) nature of federated learning.
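To make the role of an adaptive server optimizer concrete, the following is a minimal sketch of a FedAdam-style server update in the spirit of [26]: the server treats the negative of the averaged client update as a pseudo-gradient and applies one Adam step to the global model. The function name, bias-correction step, and default hyperparameters here are illustrative assumptions, not the exact formulation proposed in this paper.

```python
import numpy as np

def server_adam_step(global_w, client_ws, m, v, t,
                     lr=1e-2, beta1=0.9, beta2=0.99, eps=1e-3):
    """One server-side Adam update on the averaged client pseudo-gradient.

    global_w:  current global model parameters (np.ndarray)
    client_ws: list of client models after local training this round
    m, v, t:   Adam moment estimates and round counter, kept on the server
    """
    # Average client update; its negative serves as the pseudo-gradient.
    delta = np.mean([w - global_w for w in client_ws], axis=0)
    g = -delta

    # Standard Adam moment estimates, maintained across rounds.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2

    # Bias correction (illustrative; some federated variants omit it).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    new_w = global_w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return new_w, m, v
```

In this setup the fixed server-side learning rate and moment decay rates play the role of the hyperparameters discussed above: values that work well for centralized Adam need not transfer to the federated setting, motivating an adaptive scheme such as AdaFedAdam.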