Anatomical landmark detection is a critical task in medical image analysis with significant research and practical value. Accurate landmark analysis in radiological images benefits both diagnosis and treatment planning. Currently, most methods are applied to datasets from specific anatomical regions, and only a few deep neural network models are designed for mixed datasets. Meanwhile, precise and fast localization of landmarks remains a significant challenge due to interdependencies among landmarks and the demanding accuracy requirements of clinical applications. In this work, we leverage the ability of attention mechanisms to focus on specific information and propose a universal landmark detection model trained on mixed-domain radiographic image datasets. The model consists of three main components: a local network, a global network, and an attention feature extraction module. The local network employs a U-Net architecture with depthwise separable convolutions to extract local image features; notably, we introduce partial depthwise separable convolutions as an option for further lightening the model. The global network comprises a sequence of dilated convolutions with varying dilation rates, expanding the receptive field to extract global features. The attention feature extraction module efficiently exploits intermediate outputs to refine the precise location information of landmarks. Additionally, we introduce a regional attention loss function that provides the model with landmark offset information. We conduct experiments on three publicly available X-ray image datasets covering head, hand, and chest regions. The experimental results demonstrate that the proposed universal model achieves fast convergence and excellent performance across multiple datasets.
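As a back-of-the-envelope illustration (not taken from the paper), the sketch below shows why depthwise separable convolutions lighten the local network and how a sequence of dilated convolutions enlarges the global network's receptive field. The channel counts, kernel size, and dilation rates here are illustrative assumptions, not the paper's actual configuration.

```python
def conv2d_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a standard k x k convolution (weights + biases)."""
    return c_in * c_out * k * k + c_out

def dw_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a depthwise separable convolution:
    a depthwise k x k conv (one filter per input channel)
    followed by a pointwise 1 x 1 conv mixing channels."""
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

def stacked_dilated_rf(dilations, k: int = 3) -> int:
    """Receptive field of stacked k x k convolutions (stride 1)
    with the given dilation rates: 1 + sum((k - 1) * d)."""
    return 1 + sum((k - 1) * d for d in dilations)

# Assumed example: a 64 -> 64 channel 3x3 layer.
standard = conv2d_params(64, 64, 3)       # 36,928 parameters
separable = dw_separable_params(64, 64, 3)  # 4,800 parameters
print(f"standard: {standard}, separable: {separable}, "
      f"ratio: {standard / separable:.1f}x")

# Assumed dilation schedule: three 3x3 convs with rates 1, 2, 4.
print("receptive field:", stacked_dilated_rf([1, 2, 4]))  # 15
```

The roughly 7-8x parameter reduction per layer is what makes depthwise separable convolutions attractive for a lightweight local network, while growing the dilation rate lets the global network cover a wide context without pooling away spatial resolution.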