Cloth-changing person re-identification (Re-ID) is an emerging research theme that aims at identifying individuals after clothing changes. Many contemporary approaches focus on disentangling clothing features and solely employ clothing-unrelated parts for identification. However, the absence of ground truth poses a significant challenge to the disentanglement process; as a result, these methods may introduce unintended noise and degrade overall performance. To mitigate this issue, we propose a novel framework, termed Attention-based Controllable Disentanglement Network (ACD-Net). In ACD-Net, we design an Attention-enhanced Disentanglement Branch (ADB) in which human parsing masks guide the separation of clothing features from clothing-unrelated features. The clothing-unrelated features are further subdivided into unclothed-body features and contour features, and we propose two novel attention mechanisms, Dynamic Interaction-Remote Aggregation Attention (DI-RAA) and Dynamic Interaction-Positional Relevance Attention (DI-PRA), to enhance the representations of these two feature types, respectively. Experimental results on the PRCC, LTCC, DeepChange, and CCVID datasets demonstrate the superiority of our approach over state-of-the-art methods. Under the cloth-changing setting, our network attains mAP scores of 59.5%, 22.6%, and 20.6% and Rank-1 accuracies of 60.6%, 45.5%, and 56.8% on the PRCC, LTCC, and DeepChange datasets, respectively. In addition, our model obtains 81.5% mAP and 83.4% Rank-1 accuracy on the video dataset CCVID. The code is available at: https://github.com/jk-love-ge/ACDNet.
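As a rough illustration of the mask-guided disentanglement described above, the sketch below shows how a backbone feature map could be split into clothing, unclothed-body, and contour streams using parsing-derived masks, with the two clothing-unrelated streams refined by attention. This is not the authors' implementation: the class name, tensor shapes, and the use of generic multi-head self-attention as stand-ins for DI-RAA and DI-PRA are all assumptions for exposition.

```python
# Minimal sketch of mask-guided feature disentanglement (assumed design,
# not the official ACD-Net code). Generic multi-head self-attention is
# used as a placeholder for the paper's DI-RAA and DI-PRA modules.
import torch
import torch.nn as nn

class MaskGuidedDisentangler(nn.Module):
    def __init__(self, channels: int = 256, num_heads: int = 4):
        super().__init__()
        # Placeholders for DI-RAA (unclothed body) and DI-PRA (contour).
        self.body_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.contour_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    @staticmethod
    def _masked_tokens(feat, mask):
        # feat: (B, C, H, W); mask: (B, 1, H, W) with values in {0, 1}.
        # Zero out regions outside the mask, then flatten to (B, H*W, C) tokens.
        return (feat * mask).flatten(2).transpose(1, 2)

    def forward(self, feat, cloth_mask, body_mask, contour_mask):
        cloth = self._masked_tokens(feat, cloth_mask)        # clothing stream
        body = self._masked_tokens(feat, body_mask)          # unclothed-body stream
        contour = self._masked_tokens(feat, contour_mask)    # contour stream
        body, _ = self.body_attn(body, body, body)           # DI-RAA stand-in
        contour, _ = self.contour_attn(contour, contour, contour)  # DI-PRA stand-in
        # Only the clothing-unrelated streams would feed the ID embedding.
        return cloth, body, contour

# Usage with a hypothetical 16x8 backbone feature map and binary masks.
feat = torch.randn(2, 256, 16, 8)
cloth_m = torch.randint(0, 2, (2, 1, 16, 8)).float()
body_m = 1.0 - cloth_m
contour_m = torch.ones(2, 1, 16, 8)
cloth, body, contour = MaskGuidedDisentangler()(feat, cloth_m, body_m, contour_m)
print(body.shape)  # torch.Size([2, 128, 256])
```

The key design point the sketch mirrors is that the clothing stream is separated out explicitly rather than discarded implicitly, so that only the body and contour streams contribute to the identity representation.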