Privacy and trust are critical requirements for practical recommendation engines. Although Federated Learning (FL) has largely addressed privacy concerns, commercial operators remain wary of several technical challenges when bringing FL into production. Moreover, classical FL has intrinsic operational limitations such as a single point of failure, data and model tampering, and heterogeneous clients participating in the FL process. To address these challenges in practical recommenders, we propose a responsible recommendation generation framework based on blockchain-empowered asynchronous FL that can be adopted by any model-based recommender system. On top of the standard FL setting, we build an additional aggregation layer in which multiple trusted nodes, guided by a mediator component, perform gradient aggregation in parallel to reach a locally optimal model. The mediator partitions users into $K$ clusters, and each cluster is represented by a cluster head. Once a cluster reaches semi-global convergence, its cluster head transmits the model gradients to the FL server for global aggregation. The trusted cluster heads are also responsible for submitting the converged semi-global model to a blockchain to ensure tamper resilience. In our setting, the mediator further acts as an independent observer that monitors the performance of each cluster head, updates a reward score, and records it in a digital ledger. Finally, evaluation results on three diverse benchmarks show that the recommendation performance on the selected measures is comparable with both the standard and the federated versions of a well-known neural collaborative filtering recommender.
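
To make the aggregation hierarchy concrete, the following is a minimal Python sketch of the mediator-guided, cluster-based aggregation layer. All class and function names (`Mediator`, `ClusterHead`, `Ledger`, `average`, and the toy data) are illustrative assumptions for exposition, not the framework's actual implementation.

```python
# Minimal sketch of mediator-guided, cluster-based gradient aggregation.
# Names and data are hypothetical; this is not the paper's implementation.
import hashlib
import random
from typing import Dict, List

Gradient = List[float]


def average(gradients: List[Gradient]) -> Gradient:
    """Element-wise average of a list of gradients."""
    return [sum(vals) / len(vals) for vals in zip(*gradients)]


class Ledger:
    """Toy append-only ledger standing in for the blockchain."""

    def __init__(self) -> None:
        self.blocks: List[Dict] = []

    def append(self, payload: Dict) -> None:
        digest = hashlib.sha256(repr(payload).encode()).hexdigest()
        self.blocks.append({"hash": digest, **payload})


class ClusterHead:
    """Trusted node that aggregates gradients from one user cluster."""

    def __init__(self, cluster_id: int, clients: List[int]) -> None:
        self.cluster_id = cluster_id
        self.clients = clients

    def semi_global_round(self) -> Gradient:
        # Collect (here: simulate) one gradient per client and average them
        # to obtain the cluster's semi-global update.
        client_grads = [[random.gauss(0, 1) for _ in range(4)] for _ in self.clients]
        return average(client_grads)


class Mediator:
    """Partitions users into K clusters, monitors heads, records rewards."""

    def __init__(self, user_ids: List[int], k: int, ledger: Ledger) -> None:
        self.ledger = ledger
        clusters = [user_ids[i::k] for i in range(k)]
        self.heads = [ClusterHead(i, c) for i, c in enumerate(clusters)]

    def run_round(self) -> Gradient:
        cluster_models = []
        for head in self.heads:
            grad = head.semi_global_round()
            cluster_models.append(grad)
            # Cluster head submits its semi-global model to the ledger;
            # the mediator records a (dummy) reward score alongside it.
            self.ledger.append(
                {"cluster": head.cluster_id, "model": grad, "reward": 1.0}
            )
        # The FL server then performs global aggregation over the
        # cluster-head models.
        return average(cluster_models)


if __name__ == "__main__":
    ledger = Ledger()
    mediator = Mediator(user_ids=list(range(100)), k=4, ledger=ledger)
    global_model = mediator.run_round()
    print("global update:", global_model, "| ledger length:", len(ledger.blocks))
```

In this sketch the cluster heads act as an intermediate aggregation tier between clients and the FL server, which is the property the framework relies on: clients only communicate with their cluster head, and only converged semi-global models reach the server and the ledger.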