This article constructs and studies a graphic design software model based on data-driven image interaction. The semantic information of image interaction is integrated into the framework of an augmented reality label placement algorithm. A new feature map, the Guidance Map, is proposed to combine image saliency information with the semantic information of the task scenario, accurately describing the importance of different regions in the user's field of view and thereby deriving a more reasonable label placement strategy. This distinguishes the method from other label placement algorithms, which rely only on data-driven saliency detection. The paper proposes a data-driven augmented reality labeling method: it first presents the design requirements for label placement in augmented reality scenes, then designs an energy function tailored to the characteristics of such scenes, transforming the label placement problem into an optimization problem and solving it. The paper further proposes a feature representation spanning pixels, elements, inter-element relationships, planes, and applications, and quantifies the geometric, perceptual, and stylistic features within it. The model can fit the probability density distributions of these features to predict the optimal element placement and element color under the target conditions.
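The pipeline described above can be illustrated with a minimal sketch. The blending weight `alpha`, the energy weights `w_occ` and `w_dist`, and the grid-search solver are all assumptions for illustration, not the paper's actual formulation: a Guidance Map is formed as a weighted blend of a saliency map and a semantic importance map, and a label position is chosen by minimizing an energy that penalizes occluding important regions plus the distance from the labeled anchor point.

```python
import numpy as np

def guidance_map(saliency, semantic, alpha=0.5):
    """Blend a saliency map with a task-semantic importance map
    into a single per-pixel importance map, normalized to [0, 1].
    The linear blend and alpha=0.5 are illustrative assumptions."""
    g = alpha * saliency + (1.0 - alpha) * semantic
    return g / g.max()

def label_energy(pos, anchor, gmap, label_h, label_w,
                 w_occ=1.0, w_dist=0.01):
    """Energy of placing a label_h x label_w label at pos:
    importance mass it occludes, plus distance to its anchor."""
    y, x = pos
    occlusion = gmap[y:y + label_h, x:x + label_w].sum()
    dist = np.hypot(y - anchor[0], x - anchor[1])
    return w_occ * occlusion + w_dist * dist

def place_label(gmap, anchor, label_h, label_w, stride=8):
    """Solve the placement problem by brute-force grid search;
    a real system would use a faster optimizer."""
    H, W = gmap.shape
    best, best_e = None, np.inf
    for y in range(0, H - label_h, stride):
        for x in range(0, W - label_w, stride):
            e = label_energy((y, x), anchor, gmap, label_h, label_w)
            if e < best_e:
                best, best_e = (y, x), e
    return best, best_e
```

For example, with a synthetic scene whose important regions cover the image center, the minimizer returns a label position near the anchor that avoids the high-importance area.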