A prevalent approach to nested named entity recognition enumerates all candidate entity spans in a sentence and classifies each span into entity types. However, this span-based approach typically decodes each span independently, ignoring both the semantic relations between spans and the positional information of the head and tail tokens within each span. We therefore introduce a bidirectional context-aware network designed to model the semantic relations between spans. In addition, we enhance the biaffine mechanism with rotary position embedding to capture the relative position between head and tail tokens. Experiments on three nested NER datasets show that our model outperforms existing models, particularly in terms of F1 score.
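The key property of rotary position embedding in a biaffine scorer is that the interaction between a rotated head representation and a rotated tail representation depends only on their relative offset. The following is a minimal NumPy sketch of this idea, not the paper's actual implementation; the function names `rope` and `biaffine_rope_scores` and the split-half rotation layout are our own illustrative assumptions.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position embedding to vector x at position pos.

    Splits x into two halves and rotates each (x1[k], x2[k]) pair by an
    angle pos * base**(-k/half), so dot products of rotated vectors
    depend only on the relative position between them.
    """
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair rotation frequencies
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

def biaffine_rope_scores(H, U):
    """Score every (head, tail) span pair with a rotary-enhanced biaffine.

    H: (seq_len, d) token representations; U: (d, d) biaffine weight.
    Returns a (seq_len, seq_len) matrix of span scores.
    """
    n, _ = H.shape
    heads = np.stack([rope(H[i], i) for i in range(n)])
    tails = np.stack([rope(H[j], j) for j in range(n)])
    return heads @ U @ tails.T
```

With an identity weight matrix, shifting both positions by the same amount leaves the head-tail score unchanged, which is exactly the relative-position behavior the rotation is meant to provide.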