Deep neural networks have shown compelling performance in medical image segmentation. However, deep models often suffer notable performance drops in real clinical settings due to the complex appearance shifts that arise in routine scanning. Domain adaptation partially mitigates the performance gap between imaging domains, but it heavily depends on the expensive collection of domain-specific datasets and retraining, and is therefore not applicable to domain-agnostic images. In this paper, we propose a case adaptation strategy that aims to bridge the segmentation performance gap on domain-agnostic images. Our contribution is threefold. First, we design a general self-supervised learning framework for case adaptation, which exploits the model's own predictions as supervision to drive the adaptation. Without extra annotations or any increase in model complexity, the framework enables trained deep models at hand to directly segment domain-agnostic testing images. Second, we propose a novel Evolving Shape Prior (ESP), which recursively introduces strong shape knowledge into networks and evolves with the adaptation procedure to provide adaptive supervision. ESP stabilizes self-supervised learning and guides it toward model convergence. Third, we perform extensive experiments on 10 datasets with different levels of difficulty and a blend of typical appearance shifts, demonstrating that our framework is a promising solution for reducing segmentation performance degradation. Through this work, we investigate the feasibility of case adaptation as a general strategy for enhancing the robustness of deep segmentation networks, with comprehensive analyses confirming its efficacy and efficiency.