<p>In the realm of Deep Learning (DL), the optimization of hyperparameters such as the learning rate has been a well-known challenge. To address this problem, previous works have introduced Learning Rate Scheduling (LRS) and Adaptive Learning Rate (ALR) algorithms that apply different learning rates at distinct phases of the training process to enhance the performance of DL models. These algorithms, however, exhibit their own drawbacks. Most LRS techniques do not consider the learning behavior of the model when determining the learning rate. On the other hand, standard ALR algorithms such as RMSprop, Adam, and their variants optimize the training by considering local gradient patterns. However, these patterns are insufficient to determine the training efficiency when multiple local optima are concentrated around the global optimum. In this context, the proposed work introduces a new ALR method, the Learning Rate Tuner with Relative Adaptation (LRT-RA), which decides the learning rate at each training iteration based on the global loss curve. The proposed algorithm (1) prevents premature convergence of the training process and (2) removes the cross-validation overhead of determining the precise learning rates required for the model's training. </p>
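<p>To make the core idea concrete, the sketch below shows one way a learning rate could be adapted per iteration from the global loss curve rather than from local gradient statistics. This is a minimal illustration only: the class name, window of the fit, scaling rule, and all constants are assumptions for demonstration and are not the authors' exact LRT-RA formulation.</p>
<pre><code class="language-python">
# Illustrative sketch only: the update rule and constants below are
# assumptions, not the exact LRT-RA algorithm described in the paper.
import numpy as np

class GlobalLossCurveLR:
    """Adjusts the learning rate once per iteration from the trend of the
    loss curve accumulated over the whole training run, instead of from
    local gradient patterns (as RMSprop/Adam do)."""

    def __init__(self, base_lr=1e-3, min_lr=1e-5, max_lr=1e-1):
        self.base_lr = base_lr
        self.min_lr = min_lr
        self.max_lr = max_lr
        self.loss_history = []

    def step(self, loss):
        """Record the latest training loss and return the next learning rate."""
        self.loss_history.append(float(loss))
        if len(self.loss_history) < 2:
            return self.base_lr
        # Fit a line to the full loss history; its slope summarizes the
        # global training trend (negative slope = loss still decreasing).
        t = np.arange(len(self.loss_history))
        slope = np.polyfit(t, np.array(self.loss_history), 1)[0]
        # Relative adaptation (illustrative): fall back toward the base step
        # when the loss plateaus (|slope| small), and allow a larger step
        # while the loss is still changing markedly relative to its value.
        scale = 1.0 + abs(slope) / (abs(self.loss_history[-1]) + 1e-12)
        lr = self.base_lr * scale
        return float(np.clip(lr, self.min_lr, self.max_lr))

# Usage: call step(loss) once per training iteration and feed the returned
# value to the optimizer, e.g. lr = tuner.step(batch_loss).
</code></pre>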