This paper develops an autonomic detection coordinator that constructs a multi-layered boundary against host-based intrusive anomalies by correlating several observation-specific anomaly detectors. Two key observations motivate the model formulation: first, different anomaly detectors have different detection coverage and blind spots; second, diverse operating environments provide different kinds of information for revealing anomalies. The cooperation among the basic detectors is formulated as a partially observable Markov decision process, and a policy-gradient reinforcement learning algorithm is applied to search for an optimal cooperation policy, with the objective of achieving broader detection coverage and fewer false alerts. Furthermore, the coordinator's behavior can be adjusted easily through the reward signal to meet the varying demands of changing system conditions. A preliminary experiment, together with comparative studies, demonstrates the coordinator's performance in terms of widely accepted criteria.
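To make the abstract's approach concrete, the following is a minimal sketch of a policy-gradient (REINFORCE-style) coordinator that learns which anomaly detector to consult under partial observability, with a reward that trades off detections against false alerts. Everything here is an illustrative assumption rather than the paper's actual algorithm: the class name `CoordinatorPolicy`, the toy environment, the per-detector coverage and false-alert rates, and the reward weights are all hypothetical.

```python
# Minimal sketch: softmax policy over which detector to consult, trained with
# REINFORCE. The environment, detector statistics, and reward weights are
# assumptions for illustration only, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)

N_DETECTORS = 3          # hypothetical number of basic anomaly detectors
OBS_DIM = N_DETECTORS    # observation: last result from each detector (partial)


def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


class CoordinatorPolicy:
    """Linear softmax policy: picks which detector to consult given the observation."""

    def __init__(self, obs_dim, n_actions, lr=0.05):
        self.theta = np.zeros((n_actions, obs_dim))
        self.lr = lr

    def act(self, obs):
        probs = softmax(self.theta @ obs)
        action = rng.choice(len(probs), p=probs)
        return action, probs

    def update(self, trajectory, returns):
        # REINFORCE update: theta += lr * G_t * grad log pi(a_t | o_t)
        for (obs, a, probs), G in zip(trajectory, returns):
            grad = -np.outer(probs, obs)   # d log pi / d theta for all actions
            grad[a] += obs                 # extra term for the taken action
            self.theta += self.lr * G * grad


def toy_episode(policy, length=20):
    """Toy partially observed setting: an attack occurs with prob 0.3 each step;
    each detector sees it with a different (assumed) coverage probability."""
    coverage = np.array([0.8, 0.5, 0.3])      # assumed true-positive rates
    false_rate = np.array([0.1, 0.05, 0.2])   # assumed false-alert rates
    obs = np.zeros(OBS_DIM)
    traj, rewards = [], []
    for _ in range(length):
        a, probs = policy.act(obs)
        attack = rng.random() < 0.3
        alert = rng.random() < (coverage[a] if attack else false_rate[a])
        # Reward trades off detection coverage against false alerts.
        reward = (1.0 if (attack and alert) else 0.0) \
                 - (0.5 if (alert and not attack) else 0.0)
        traj.append((obs.copy(), a, probs))
        rewards.append(reward)
        obs = np.zeros(OBS_DIM)
        obs[a] = 1.0 if alert else -1.0   # only the consulted detector is observed
    return traj, rewards


def discounted_returns(rewards, gamma=0.95):
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return list(reversed(out))


policy = CoordinatorPolicy(OBS_DIM, N_DETECTORS)
for _ in range(2000):
    traj, rewards = toy_episode(policy)
    policy.update(traj, discounted_returns(rewards))
print("learned action preferences (theta):")
print(policy.theta.round(2))
```

Adjusting the relative weight of the false-alert penalty in the reward corresponds to the abstract's claim that the coordinator's behavior can be tuned through the reward signal to match changing system conditions.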