Compared with general object detection, the scale variations, arbitrary orientations, and complex backgrounds of objects in remote sensing images make oriented object detection more challenging. It is especially difficult to accurately detect the boundaries of oriented objects with large aspect ratios. Many methods show excellent performance on oriented object detection, most of them anchor-based. To narrow the performance gap between anchor-free and anchor-based algorithms, this paper proposes an anchor-free algorithm called the dual-resolution and deformable multi-head network (DDMNet) for oriented object detection. Specifically, a dual-resolution network with bilateral fusion is adopted to extract high-resolution feature maps that contain both spatial details and multi-scale contextual information. Deformable convolution is then incorporated into the network to alleviate the feature misalignment problem in oriented object detection, and a dilated feature fusion module is applied to the deformable feature maps to expand their receptive fields. Finally, box boundary-aware vectors, rather than an angle, are used to represent the oriented bounding box, and a multi-head network is designed to obtain robust predictions. DDMNet is a single-stage, anchor-free oriented object detection method that exhibits promising performance on challenging public benchmarks, obtaining 90.49%, 93.25%, and 78.66% mean average precision (mAP) on the HRSC2016, FGSD2021, and DOTA datasets, respectively. In particular, DDMNet achieves 79.86% mAP75 and 53.85% mAP85 on the HRSC2016 dataset, outperforming current state-of-the-art methods.
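To illustrate the box boundary-aware vector representation mentioned above, the following minimal sketch decodes the four corners of an oriented box from its center and the four vectors pointing to the midpoints of its top, right, bottom, and left edges. The function name, argument layout, and example values are assumptions for this illustration, not the paper's implementation.

```python
import numpy as np

def decode_bbav(center, t, r, b, l):
    """Decode an oriented box from box boundary-aware vectors (hypothetical helper).

    center : (2,) box center (x, y).
    t, r, b, l : (2,) vectors from the center to the midpoints of the
        top, right, bottom, and left edges of the oriented box.
    Returns the four corners (top-left, top-right, bottom-right,
    bottom-left) as a (4, 2) array.
    """
    center = np.asarray(center, dtype=float)
    t, r, b, l = (np.asarray(v, dtype=float) for v in (t, r, b, l))
    # For a rectangle, each corner is the center plus the sum of the two
    # adjacent edge-midpoint vectors.
    tl = center + t + l
    tr = center + t + r
    br = center + b + r
    bl = center + b + l
    return np.stack([tl, tr, br, bl])

# Example: an axis-aligned 4x2 box centered at (10, 10).
corners = decode_bbav(center=(10, 10), t=(0, -1), r=(2, 0), b=(0, 1), l=(-2, 0))
print(corners)  # [[ 8.  9.] [12.  9.] [12. 11.] [ 8. 11.]]
```

Unlike an angle-based parameterization, this vector-based decoding avoids the angular periodicity issue at orientation boundaries, which is one motivation for adopting it.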