Recognizing applied hand forces from force myography (FMG) biosignals requires adequate training data to facilitate physical human-robot interaction (pHRI). In practice, however, data are often scarce, and labels are usually unavailable or time-consuming to generate. Synthesizing FMG biosignals can be a viable solution. Therefore, in this paper, we propose for the first time a dual-phase algorithm based on semisupervised adversarial learning that utilizes a small amount of labeled real FMG data together with generated unlabeled synthetic FMG data. We conducted a pilot study to test this algorithm in estimating applied forces during interactions with a Kuka robot in the 1-D X, Y, and Z directions. First, an unsupervised FMG-based deep convolutional generative adversarial network (FMG-DCGAN) model was employed to generate real-like synthetic FMG data. A variety of transformation functions was applied for domain randomization, increasing data variability and representing authentic physiological and environmental changes. The cosine similarity score and the generated-to-input-data ratio were used as decision criteria to minimize the reality gap between real and synthetic data and to avoid risks associated with wrong predictions. The FMG-DCGAN model was then pretrained to generate pseudo-labels for the unlabeled real and synthetic data and retrained on all labeled and pseudo-labeled data; the resulting model is termed the self-trained FMG-DCGAN model. Finally, this model was evaluated on unseen real test data and achieved force estimation accuracies of 85% > R² > 77%, compared with the corresponding supervised baseline model (89% > R² > 78%). The proposed method can therefore be practical for FMG-based HRI, rehabilitation, and prosthetic control in daily, repetitive use even with few labeled data.
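As a rough illustration of the domain-randomization step, the sketch below applies randomized amplitude scaling, additive noise, and temporal shifts to FMG windows. The specific transformation set, parameter ranges, and the augment_fmg helper are illustrative assumptions, not the paper's exact functions.

```python
import numpy as np

def augment_fmg(window, rng):
    """Apply randomized transformations to one FMG window (channels x samples)."""
    out = window.astype(float).copy()
    out *= rng.uniform(0.8, 1.2)                      # amplitude scaling (e.g., band-tightness drift)
    out += rng.normal(0.0, 0.01, size=out.shape)      # additive sensor noise
    out = np.roll(out, rng.integers(-5, 6), axis=-1)  # small temporal shift
    return out

# Example: augment 10 hypothetical 16-channel, 100-sample FMG windows.
rng = np.random.default_rng(0)
windows = rng.random((10, 16, 100))
augmented = np.stack([augment_fmg(w, rng) for w in windows])
```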
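The decision criteria can be sketched as a simple filter: a synthetic window is accepted only if its cosine similarity to the real data is high enough, and the amount of accepted synthetic data is capped by a generated-to-input ratio. The 0.90 threshold, the 3.0 cap, and the mean-window template below are hypothetical choices, not values taken from the study.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened FMG windows."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_synthetic(real, synth, sim_threshold=0.90, max_ratio=3.0):
    """Keep synthetic windows close to the mean real window, capping the synthetic-to-real ratio."""
    template = real.mean(axis=0)
    accepted = [s for s in synth if cosine_similarity(s, template) >= sim_threshold]
    return np.asarray(accepted[: int(max_ratio * len(real))])
```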
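The self-training phase follows the usual pseudo-labeling pattern: pretrain on the few labeled samples, predict pseudo-labels for the unlabeled real and synthetic data, then retrain on the union. Below is a minimal PyTorch sketch assuming a generic regression model and MSE loss; the architecture, hyperparameters, and training schedule are placeholders rather than the FMG-DCGAN's actual configuration.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, ConcatDataset

def fit(model, dataset, loss_fn, epochs=20, lr=1e-3, batch_size=64):
    """Plain supervised training loop over (input, target) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def self_train(model, labeled_ds, unlabeled_x, loss_fn=nn.MSELoss()):
    fit(model, labeled_ds, loss_fn)            # phase 1: pretrain on the few labeled samples
    model.eval()
    with torch.no_grad():
        pseudo_y = model(unlabeled_x)          # phase 2: pseudo-label real + synthetic data
    model.train()
    fit(model, ConcatDataset([labeled_ds, TensorDataset(unlabeled_x, pseudo_y)]), loss_fn)
    return model
```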