Foundation models (FMs) have garnered significant attention for their remarkable transfer performance on downstream tasks. Typically, an FM undergoes task-agnostic pre-training on a large dataset and can then be efficiently adapted to various downstream applications through fine-tuning. While FMs have been extensively explored in language and other domains, their potential in remote sensing has also begun to attract scholarly interest. However, comprehensive investigations and performance comparisons of these models on remote sensing tasks are currently lacking. In this survey, we provide essential background by introducing the key technologies and recent developments of FMs. We then explore the principal downstream applications in remote sensing, covering classification, localization, and understanding. Our analysis encompasses over thirty FMs from both the natural image and remote sensing fields, and we conduct extensive experiments on more than ten datasets to evaluate global feature representation, local feature representation, and target localization. Through these quantitative assessments, we highlight the distinctions among various foundation models and confirm that large-scale FMs pre-trained on natural images can also deliver outstanding performance on remote sensing tasks. We then systematically present a brain-inspired framework for remote sensing foundation models (RSFMs) and delve into its brain-inspired characteristics, including structure, perception, learning, and cognition. To conclude, we summarize twelve open problems in RSFMs, suggesting potential research directions. Our survey offers valuable insights into the burgeoning field of RSFMs and aims to foster further advancements in this exciting area.