Targeted MR/ultrasound (US) fusion biopsy overlays magnetic resonance (MR) sequences on ultrasound images of the prostate to visualize and target lesions. However, US–MR image registration requires a good initial alignment, which is conventionally obtained from manual anatomical landmark annotation or prostate segmentation; both are time-consuming and often challenging during an intervention. We propose to detect anatomical landmarks of the prostate explicitly and automatically in both modalities to achieve this initial registration. First, we train a deep neural network to detect three anatomical landmarks in both MR and US images. Instead of relying on heatmap regression or on coordinate regression through a fully connected layer, we regress landmark coordinates directly by introducing a differentiable layer into U-Net. After training and validation on 900 and 152 cases, respectively, the proposed method predicts landmarks in 263 test cases with a Mean Radial Error (MRE) of 5.55 ± 2.63 mm for US images and 5.77 ± 2.67 mm for MR images. Second, least-squares fitting is applied to the detected landmarks to compute a rough rigid transformation. We achieve a Surface Registration Error (SRE) of 6.62 ± 3.97 mm and a Dice score of 0.77 ± 0.11, both comparable to previous methods in a clinical setting.
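The "differentiable layer" that regresses coordinates directly from a U-Net output can be sketched as a soft-argmax: the network's score map is softmax-normalized and the coordinate is taken as the probability-weighted mean over a pixel grid. This is a minimal NumPy illustration of the idea, not the authors' implementation; the function name and map size are assumptions.

```python
import numpy as np

def soft_argmax_2d(logits):
    """Differentiable coordinate extraction from a 2-D score map.

    Softmax-normalize the map into a probability distribution, then
    return the expected (x, y) location under that distribution.
    Unlike a hard argmax, every step is smooth, so gradients flow
    from a coordinate loss back into the score map.
    """
    h, w = logits.shape
    p = np.exp(logits - logits.max())  # stabilized softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]        # pixel coordinate grids
    return np.array([(p * xs).sum(), (p * ys).sum()])
```

For a sharply peaked map the expectation coincides with the peak location; for a diffuse map it interpolates, which is what makes the layer trainable with a plain L2 loss on coordinates.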
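The least-squares fit of a rigid transformation to corresponding landmark pairs is typically solved in closed form via SVD (the Kabsch method). A small sketch under that assumption, with hypothetical names; the paper does not specify its solver:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform between paired 3-D landmarks.

    Finds rotation R and translation t minimizing
    sum_i || R @ src[i] + t - dst[i] ||^2  (Kabsch/SVD solution).
    src, dst: (N, 3) arrays of corresponding points, N >= 3.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With the three detected landmarks per modality this yields the rough rigid alignment used to initialize registration; the reflection guard keeps the solution a proper rotation even for near-degenerate configurations.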