In the field of robot path planning, Deep Reinforcement Learning (DRL) has demonstrated considerable potential as a cutting-edge artificial intelligence technology. However, representation learning, which is pivotal to DRL performance, has remained difficult to exploit effectively in path planning tasks. The difficulty arises because the state in these tasks is typically a compact vector derived directly from low-level sensor readings, and learning meaningful representations from such low-level states is challenging. To address this issue, this paper proposes Contrastive Learning Regularized Feature-Enhanced Actor-Critic (CFEAC), a method that treats the features in the neural networks from a contrastive learning perspective and incorporates cross-layer connections and deep networks to achieve feature enhancement. In a constructed 3D point cloud simulation environment, CFEAC outperforms the DDPG, TD3, SAC, and SALE algorithms, achieving higher cumulative reward and lower collision rates. Experimental results validate that the approach delivers superior path planning performance in complex static and dynamic scenarios.
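To make the contrastive-regularization idea concrete, the following is a minimal NumPy sketch of an InfoNCE-style objective of the kind commonly used to regularize learned features; the function name, embedding shapes, and temperature value are illustrative assumptions, not the paper's actual CFEAC implementation.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch, not CFEAC itself).

    anchors, positives: arrays of shape (N, D); row i of `positives` is the
    positive sample for row i of `anchors`, all other rows act as negatives.
    """
    # Normalize embeddings to unit length so similarities are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Pairwise similarity matrix; diagonal entries correspond to positive pairs
    logits = a @ p.T / temperature
    # Cross-entropy with the diagonal as the target class (log-softmax per row)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing such a loss pulls each anchor feature toward its positive counterpart while pushing it away from the other samples in the batch, which is one standard way contrastive terms are used to shape feature spaces in deep networks.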