Existing model poisoning attacks on federated learning (FL) assume that the adversary has access to the full data distribution. In reality, an adversary usually has limited prior knowledge of clients' data, and a poorly chosen target class renders an attack less effective. This work considers a semi-targeted setting in which the source class is predetermined but the target class is not; the goal is to cause the global classifier to misclassify data from the source class. Approaches such as label flipping have been used to inject malicious parameters into FL. However, their performance has been shown to be class-sensitive, varying with the choice of target class: an attack typically becomes less effective when shifted to a different target class. To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA), which enhances model poisoning in FL by finding the optimal target class in the feature space. ADA deduces pair-wise class attacking distances using a Fast LAyer gradient MEthod (FLAME). Extensive evaluations were performed on five benchmark image classification tasks and three model architectures under varying attacking frequencies. Furthermore, ADA's robustness to the conventional defenses of Byzantine-robust aggregation and differential privacy was validated. The results show that ADA increased attack performance by a factor of 2.8 in the most challenging case, with an attacking frequency of 0.01, and bypassed existing defenses; even differential privacy, the most effective of the defenses, could not reduce the attack performance below 50%.

Impact Statement: Model poisoning in federated learning (FL) compromises client models through malicious model parameter sharing. Although FL broadens the attack surface by involving many clients, model aggregation, which combines the model parameters of different clients, can greatly reduce a poisoning attack's effect. Unlike previous studies that mount an attack with an arbitrary target class, this work proposes a novel semi-targeted model poisoning attack that adaptively computes the optimal attacking target depending on the input samples. Such an attack can substantially enhance the efficacy of model poisoning in FL, improving its robustness against model aggregation. The empirical results show that the proposed method achieves strong performance even at a low attacking frequency, generalizing across different distribution spaces and model architectures. In addition, existing defenses in FL were found to be ineffective in mitigating the semi-targeted attack.
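
To make the attacking-distance idea concrete, the following is a minimal sketch of how a target class might be selected in feature space. It is not the paper's FLAME procedure: it simply assumes that per-class feature centroids are computed from the global model's penultimate-layer representations and that the class nearest to the source class is the easiest misclassification target. All function names and data in the example are illustrative.

```python
# Minimal sketch (not the authors' FLAME implementation): selecting an attack
# target class by measuring pair-wise class distances in feature space.
import numpy as np

def class_centroids(features: np.ndarray, labels: np.ndarray) -> dict:
    """Mean feature vector per class, given features extracted by the global model."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_target_class(source_class: int, centroids: dict) -> int:
    """Pick the class whose centroid is closest to the source class in feature space.
    Assumption: a nearby target class is easier to misclassify the source class into."""
    src = centroids[source_class]
    distances = {
        c: np.linalg.norm(src - mu)
        for c, mu in centroids.items()
        if c != source_class
    }
    return min(distances, key=distances.get)

# Usage with random stand-in data (in practice, features would come from the
# penultimate layer of the shared global model on locally available samples).
rng = np.random.default_rng(0)
feats = rng.normal(size=(600, 128))    # 600 samples, 128-dimensional features
labs = rng.integers(0, 10, size=600)   # 10 classes
target = nearest_target_class(source_class=3, centroids=class_centroids(feats, labs))
print(f"Selected target class for source class 3: {target}")
```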