Internal representations of continuous variables are crucial for creating internal models of the external world. A prevailing model of how the brain maintains these representations is the continuous bump attractor network (CBAN). CBANs have been hypothesized to underlie a broad range of brain functions across different areas, such as spatial navigation in hippocampal/entorhinal circuits and working memory in the prefrontal cortex. Through recurrent connections, a CBAN maintains a persistent activity bump whose peak location can vary along a neural space, corresponding to different values of a continuous variable. To track a continuous variable changing over time, a CBAN updates the location of its activity bump based on inputs that encode changes in the variable (e.g., movement velocity in the case of spatial navigation), a process akin to mathematical integration. This integration is not perfect and accumulates error over time. For error correction, CBANs can use additional inputs that provide ground-truth information about the variable's correct value (e.g., visual landmarks for spatial navigation). These inputs enable the network dynamics to correct representation errors automatically by shifting the activity bump toward the correct location. Recent experimental work on hippocampal place cells has shown that, beyond correcting errors, ground-truth inputs (e.g., visual landmarks) also fine-tune the gain of the integration process, the crucial factor linking changes in the continuous variable (e.g., movement velocity) to updates of the activity bump's location. However, existing CBAN models lack this plasticity and thus offer no insight into the neural mechanisms and representations involved in recalibrating the integration gain. In this paper, we address this gap by using a ring attractor network, a specific type of CBAN, to model the experimental conditions under which gain recalibration was demonstrated in hippocampal place cells. Our analysis reveals the neural mechanisms behind gain recalibration in a CBAN: unlike error correction, which occurs through network dynamics driven by ground-truth inputs, gain recalibration requires an additional neural signal that explicitly encodes the error in the network's representation. Specifically, this error signal must be carried by neurons whose firing rates vary monotonically with one of two quantities: either the instantaneous error or its time integral. Finally, we propose a modified ring attractor network as an example CBAN model that verifies our theoretical findings. By combining an error-rate code with Hebbian synaptic plasticity, this model recalibrates the integration gain of a CBAN, ensuring accurate representation of continuous variables.
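To make the mechanism concrete, the following is a minimal numerical sketch (Python/NumPy) of a ring attractor that integrates a velocity input with a tunable gain and recalibrates that gain from an error measured at landmark encounters. All parameters, the cosine connectivity, and the delta-rule-style gain update are illustrative assumptions rather than the paper's exact model; in particular, the scalar `err` stands in for the rate-coded error signal described above, and the bump re-centering is a crude stand-in for the dynamics-based landmark correction.

```python
import numpy as np

# Minimal ring-attractor sketch. Parameters and the delta-rule-style gain
# update are illustrative assumptions, not the paper's exact model.
N = 128                                                  # neurons on the ring
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
W = (2.0 / N) * np.cos(theta[:, None] - theta[None, :])  # cosine recurrence

def decode(r):
    """Population-vector estimate of the bump's peak angle."""
    return np.angle(r @ np.exp(1j * theta))

def step(r, v, gain, dt=0.005, tau=0.05):
    """One Euler step: recurrent dynamics that sustain the bump, plus a
    velocity-driven transport term that moves the bump at ~ gain * v."""
    dr_dth = (np.roll(r, -1) - np.roll(r, 1)) * N / (4 * np.pi)  # dr/dtheta
    recur = (-r + np.maximum(W @ r + 0.5, 0.0)) / tau            # bump upkeep
    return r + dt * (recur - gain * v * dr_dth)

r = np.maximum(np.cos(theta), 0.0)   # initial bump at angle 0
gain, v, eta = 0.7, 1.0, 0.2         # miscalibrated gain; true gain is 1
true_pos, dt = 0.0, 0.005
for t in range(4000):
    r = step(r, v, gain, dt=dt)
    true_pos += v * dt               # ground truth advances with unit gain
    if t % 200 == 199:               # intermittent "landmark" encounter
        err = np.angle(np.exp(1j * (true_pos - decode(r))))  # wrapped error
        gain += eta * err * v        # error-driven gain recalibration
        # Crude stand-in for the dynamics-based landmark correction:
        r = np.maximum(np.cos(theta - true_pos), 0.0)
print(f"recalibrated gain: {gain:.2f}")  # moves from 0.7 toward 1.0
```

Under these assumptions, each landmark interval yields an error proportional to the gain mismatch, so the update contracts that mismatch geometrically toward the true gain; a population of neurons whose rates vary monotonically with the error, as the analysis requires, would play the same role here as the scalar `err`.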