In smart transportation, assisted driving relies on integrating data from various sensors, notably LiDAR and cameras. However, the optical performance of these sensors can degrade under adverse weather conditions, potentially compromising vehicle safety. Millimeter-wave radar, which overcomes these issues at lower cost, has therefore been re-evaluated. Despite this, developing an accurate detection model remains challenging because of significant noise interference and limited semantic information. To address these practical challenges, this paper presents the TC–Radar model, a novel approach that synergistically integrates the strengths of the transformer and the convolutional neural network (CNN) to optimize the sensing potential of millimeter-wave radar in smart transportation systems. The rationale for this integration lies in the complementary nature of CNNs, which are adept at capturing local spatial features, and transformers, which excel at modeling long-range dependencies and global context within the data. This hybrid approach yields a more robust and accurate representation of radar signals, leading to enhanced detection performance. A key innovation of our approach is the Cross-Attention (CA) module, which enables efficient and dynamic information exchange between the encoder and decoder stages of the network. This CA mechanism ensures that critical features are accurately captured and transferred, thereby significantly improving overall network performance. In addition, the model incorporates the dense information fusion block (DIFB) to further enrich the feature representation by integrating different high-frequency local features, ensuring that key information is thoroughly incorporated. Extensive tests conducted on the CRUW and CARRADA datasets validate the strengths of this method, with the model achieving an average precision (AP) of 83.99% and a mean intersection over union (mIoU) of 45.2%, demonstrating robust radar sensing capabilities.
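To make the encoder–decoder exchange described above concrete, the following minimal PyTorch sketch shows one plausible form of such a cross-attention block, in which decoder features act as queries against encoder features serving as keys and values. The class name `CrossAttentionBlock`, the channel width, and the head count are illustrative assumptions for exposition, not the authors' exact TC–Radar implementation.

```python
# Hypothetical sketch of an encoder-decoder cross-attention block.
# Names, shapes, and hyperparameters are illustrative assumptions,
# not the paper's exact TC-Radar design.
import torch
import torch.nn as nn


class CrossAttentionBlock(nn.Module):
    """Let decoder features query encoder features so that skip
    information is exchanged dynamically rather than simply concatenated."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
        # dec_feat, enc_feat: (B, C, H, W) feature maps with matching channel dim.
        b, c, h, w = dec_feat.shape
        q = dec_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) queries from decoder
        kv = enc_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from encoder
        attn_out, _ = self.attn(self.norm_q(q), self.norm_kv(kv), self.norm_kv(kv))
        fused = q + attn_out                      # residual connection
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Toy usage on radar-like feature maps (e.g., range-azimuth grids).
    ca = CrossAttentionBlock(dim=64)
    dec = torch.randn(2, 64, 16, 16)
    enc = torch.randn(2, 64, 16, 16)
    print(ca(dec, enc).shape)  # torch.Size([2, 64, 16, 16])
```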