Non-malleable codes for the split-state model allow one to encode a message into two parts, such that arbitrary independent tampering on each part, and subsequent decoding of the resulting modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, since otherwise generic attacks become possible.

In this paper we propose a solution to this limitation, by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which allows us to avoid the self-destruct mechanism in some applications. Additionally, the refreshing procedure can be exploited in order to obtain security against continual leakage attacks. We give an abstract framework for building refreshable continuously non-malleable codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption.

Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, and in a model where refreshing of the memory happens only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient read-only RAM programs. In comparison to other tamper-resilient RAM compilers, ours has several advantages, among which the fact that, in some cases, it does not rely on the self-destruct feature.

Unfortunately, we cannot generalize the above proof strategy to multiple rounds. Indeed, Faust et al. exploit the fact that the leakage-resilient storage scheme remains secure even when the adversary is allowed to obtain one half of the encoding in full. Clearly, after that, the adversary is not allowed to leak further from the other half of the codeword. In our case, we would need to repeat the above trick again and again, in particular after each decoding error (and subsequent refresh of the target encoding); however, once the reduction obtains one half of the codeword it cannot ask leakage queries anymore, so it is unclear how to complete the proof. We give a solution to this problem by relying on a simple information-theoretic observation.

Let $(X_0, X_1)$ be two random variables, and consider a process that interleaves the computation of a sequence of leakage functions $g_1, g_2, g_3, \ldots$ applied to $X_0$ and to $X_1$. The process continues until, for some index $i \in \mathbb{N}$, we have that $g_i(X_0) \neq g_i(X_1)$. We claim that the values $g_1(X_0), g_2(X_0), \ldots, g_{i-1}(X_0)$ do not reveal more information about $X_0$ than what ...
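To make the interleaving concrete, the following Python sketch simulates the process just described. The byte-string shares, the particular leakage functions, and the helper name run_interleaved_leakage are illustrative choices made here for exposition, not part of the construction in the paper; the sketch only captures the stopping rule, namely that leakage is collected from both halves until the first index at which the two outputs differ.

```python
import os
import hashlib

def run_interleaved_leakage(x0: bytes, x1: bytes, leakage_fns):
    """Simulate the interleaved leakage process described above.

    Each function g in `leakage_fns` is applied to both halves; the process
    stops at the first index i where g_i(x0) != g_i(x1).  Returns the
    transcript (g_1(x0), ..., g_{i-1}(x0)) gathered before the divergence,
    together with the stopping index (or None if no divergence occurs).
    """
    transcript = []
    for i, g in enumerate(leakage_fns, start=1):
        out0, out1 = g(x0), g(x1)
        if out0 != out1:
            return transcript, i
        transcript.append(out0)
    return transcript, None

if __name__ == "__main__":
    # Two illustrative "halves": identical except for the last byte, so the
    # coarse leakage functions below agree and only the last one differs.
    x0 = os.urandom(32)
    x1 = x0[:-1] + bytes([x0[-1] ^ 1])

    leakage_fns = [
        lambda x: len(x) % 2,                          # g_1: parity of the length
        lambda x: hashlib.sha256(x[:8]).digest()[:1],  # g_2: one byte of a hash of a prefix
        lambda x: x[-1],                               # g_3: the last byte (differs by construction)
    ]

    transcript, stop = run_interleaved_leakage(x0, x1, leakage_fns)
    print(f"leakage agreed for {len(transcript)} queries; diverged at index {stop}")
```

In this toy run the first two leakage queries return the same value on both halves, so a reduction could answer them using either half interchangeably; only at the stopping index does the transcript start to depend on which half was actually leaked from, which is the intuition behind the information-theoretic claim above.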