Precise characterization is fundamental to achieving the expected performance of semiconductor devices as Moore's law pushes component miniaturization to its limits. Deep learning models are used to measure these attributes, but they require manual annotation of many objects captured by electron microscopy. This annotation can be laborious and time-consuming. We propose an efficient and reliable semi-automated method for annotating objects in electron microscopy images. Our approach identifies objects and refines their boundaries using a dedicated loss function that incorporates physical properties of electron microscopy images. It greatly reduces the user effort required to train the annotation model and minimizes post-inference processing by delivering a ready-to-use model. The proposed constrained dynamic match loss (C-DML) combines dynamic matching with horizontal/vertical symmetry constraints to address the specific challenges posed by manufactured objects imaged by microscopy. Metrology metrics derived from the contour predictions obtained with C-DML achieve a mean relative error (MRE) below 10% and a correlation coefficient above 90% with respect to manually annotated ground truth. Our experimental results demonstrate the superior performance of C-DML over both classical DML and state-of-the-art deep annotation models. An extensive evaluation demonstrates the effectiveness of our approach on heterogeneous datasets comprising objects of diverse materials and shapes, leading to state-of-the-art measurement results. Additionally, our experiments show that performance can be further improved through hyperparameter tuning and data augmentation. Overall, this work presents an efficient technique for annotating electron microscopy images and sheds light on the key factors that determine its effectiveness.
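To make the two ingredients named above concrete (dynamic matching of contour points plus a symmetry constraint), the sketch below shows one plausible form of such a loss, assuming predicted and ground-truth contours are represented as 2-D point sets and that PyTorch and SciPy are available. The function name `cdml_loss`, the mirror-axis parameter `sym_axis_x`, and the weight `sym_weight` are hypothetical illustrations, not the paper's actual formulation.

```python
# Minimal illustrative sketch of a C-DML-style loss (hypothetical formulation).
import torch
from scipy.optimize import linear_sum_assignment


def cdml_loss(pred_pts: torch.Tensor,
              gt_pts: torch.Tensor,
              sym_axis_x: float,
              sym_weight: float = 0.1) -> torch.Tensor:
    """Dynamic matching term + a vertical-axis symmetry penalty.

    pred_pts: (N, 2) predicted contour points (requires grad).
    gt_pts:   (M, 2) ground-truth contour points.
    sym_axis_x: x-coordinate of the assumed left/right symmetry axis.
    """
    # Dynamic matching: assign each predicted point to a ground-truth point
    # by bipartite matching on pairwise L2 distances. The assignment itself
    # is non-differentiable, so it is computed without gradients.
    with torch.no_grad():
        cost = torch.cdist(pred_pts, gt_pts)          # (N, M) pairwise distances
        row, col = linear_sum_assignment(cost.cpu().numpy())
    match_loss = (pred_pts[row] - gt_pts[col]).norm(dim=-1).mean()

    # Symmetry constraint: penalize deviation of the predicted contour from
    # its mirror image about the vertical axis x = sym_axis_x (manufactured
    # objects are expected to be left/right symmetric). A horizontal-axis
    # term would be constructed analogously by mirroring the y-coordinate.
    mirrored = torch.stack(
        [2.0 * sym_axis_x - pred_pts[:, 0], pred_pts[:, 1]], dim=-1)
    with torch.no_grad():
        sym_cost = torch.cdist(pred_pts, mirrored)
        srow, scol = linear_sum_assignment(sym_cost.cpu().numpy())
    sym_loss = (pred_pts[srow] - mirrored[scol]).norm(dim=-1).mean()

    return match_loss + sym_weight * sym_loss
```

Computing the assignment under `torch.no_grad()` and back-propagating only through the matched distances follows the usual set-prediction pattern; the symmetry weight shown here is an arbitrary placeholder, and in practice it would be tuned alongside the other hyperparameters.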