Individuals with obesity have larger amounts of visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT), which increases the risk for cardiometabolic diseases. The reference standard for quantifying SAT and VAT relies on manual annotation of magnetic resonance images (MRI), which requires expert knowledge and is time-consuming. Although deep learning-based methods for automated SAT and VAT segmentation have been investigated, performance for VAT remains suboptimal (Dice scores of 0.43 to 0.89). Previous work had two key limitations: it did not fully exploit the multi-contrast information from MRI or the 3D anatomical context, both of which are critical for addressing the complex, spatially varying structure of VAT. An additional challenge is the imbalance in the number and spatial distribution of pixels representing SAT and VAT. This work proposes a network based on the 3D U-Net that uses full field-of-view volumetric T1-weighted, water, and fat images from dual-echo Dixon MRI as a multi-channel input to automatically segment SAT and VAT in adults with overweight/obesity. In addition, this work extends the 3D U-Net to a new Attention-based Competitive Dense 3D U-Net (ACD 3D U-Net) trained with a class frequency-balancing Dice loss (FBDL). On an initial testing dataset, the proposed 3D U-Net and ACD 3D U-Net with FBDL achieved 3D Dice scores (mean ± standard deviation) of 0.99±0.01 and 0.99±0.01 for SAT, and 0.95±0.04 and 0.96±0.04 for VAT, respectively, compared to manual annotations. The proposed 3D networks had rapid inference times (<60 ms/slice) and can enable automated segmentation of SAT and VAT.

Clinical relevance- This work developed 3D neural networks to automatically, accurately, and rapidly segment visceral and subcutaneous adipose tissue on MRI, which can help characterize the risk for cardiometabolic diseases such as diabetes, elevated glucose levels, and hypertension.
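To illustrate the multi-channel input described above, the following is a minimal sketch of stacking the three Dixon-derived contrasts channel-wise for a 3D U-Net. The file format, file names, loader (`nibabel`), and channel order are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np
import nibabel as nib  # assumed loader; the paper does not specify the file format


def load_volume(path: str) -> np.ndarray:
    # Load one MRI contrast as a float32 volume of shape (D, H, W).
    return np.asarray(nib.load(path).get_fdata(), dtype=np.float32)


# Hypothetical file names: stack T1-weighted, water, and fat volumes as
# the 3-channel input described in the abstract.
t1, water, fat = (load_volume(p) for p in ("t1.nii.gz", "water.nii.gz", "fat.nii.gz"))
x = np.stack([t1, water, fat], axis=0)[None]  # (1, 3, D, H, W): batch of one multi-channel volume
```

Feeding all three contrasts jointly lets the network exploit the complementary fat/water separation of Dixon MRI rather than a single contrast.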
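The abstract names a class frequency-balancing Dice loss (FBDL) but does not give its exact form; below is a minimal PyTorch sketch of one plausible variant that weights each class's Dice term by inverse voxel frequency. The function name `fb_dice_loss`, the inverse-frequency weighting scheme, and `eps` are assumptions, not the paper's definition.

```python
import torch


def fb_dice_loss(probs: torch.Tensor, targets: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Sketch of a class frequency-balancing Dice loss (assumed form).

    probs:   (B, C, D, H, W) softmax probabilities
    targets: (B, C, D, H, W) one-hot ground-truth labels
    """
    dims = (0, 2, 3, 4)  # sum over batch and spatial axes, keep the class axis
    intersection = (probs * targets).sum(dims)
    cardinality = probs.sum(dims) + targets.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)  # shape (C,)

    # Inverse class-frequency weights so that scarce, scattered VAT voxels
    # contribute as much as abundant SAT/background voxels.
    freq = targets.sum(dims)
    weights = 1.0 / (freq + eps)
    weights = weights / weights.sum()

    return 1.0 - (weights * dice_per_class).sum()
```

Weighting the per-class Dice terms this way is one standard remedy for the SAT/VAT pixel imbalance noted above, since an unweighted Dice loss is dominated by the majority classes.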