Real-world deployment of Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) often faces challenges due to the computational demands of Convolutional Neural Networks (CNNs). This paper proposes a solution that combines Iterative Transfer Learning (ITL) with a lightweight branched-CNN architecture to address these limitations. The proposed approach decomposes the multi-class classification problem into smaller binary subtasks. Each branch in the network, consisting of specialized Fully Connected (FC) layers, acts as an expert in identifying a specific target class. The branches are trained sequentially, focusing on one class at a time under a One-vs-All (OVA) strategy. This decomposition reduces the model's complexity, enabling efficient performance even with a small CNN. The branched architecture also alleviates the need for a large labeled dataset: by dividing the problem into binary tasks, the model learns effectively even with limited data, making it suitable for resource-constrained scenarios. During inference, the branch with the highest output probability determines the final target class. The model was optimized through careful hyperparameter tuning of batch size, learning rate, and number of epochs, yielding high accuracy on the MSTAR dataset. With only 0.2 million parameters and 0.2 million Multiply-Accumulate Operations (MACCs), the model achieves 98.48% accuracy under standard operating conditions, outperforming DenseNet-161, a substantially larger model with 130 times more parameters and nearly 1000 times more MACCs. The model further achieves accuracies of 97.83%, 98.15%, and 98.53% across diverse operating conditions, underscoring its potential for SAR ATR applications.

INDEX TERMS Automatic target recognition, computer vision, convolutional neural network, hyperparameter tuning, synthetic aperture radar.
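As a rough illustration of the branched One-vs-All design summarized above, the sketch below pairs a shared lightweight CNN backbone with one small FC branch per class and picks the class whose branch outputs the highest probability at inference. The layer widths, the 10-class setting (as in the standard MSTAR benchmark), and all names (e.g., BranchedSARClassifier) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of the branched-CNN idea; sizes and names are assumptions.
import torch
import torch.nn as nn

class BranchedSARClassifier(nn.Module):
    def __init__(self, num_classes: int = 10, feat_dim: int = 128):
        super().__init__()
        # Shared lightweight CNN backbone (placeholder depths/widths).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # One small FC branch per class; each acts as a One-vs-All binary expert.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # Each branch outputs the probability that its target class is present.
        probs = torch.cat([torch.sigmoid(b(feats)) for b in self.branches], dim=1)
        return probs  # shape: (batch, num_classes)

model = BranchedSARClassifier()
x = torch.randn(4, 1, 128, 128)   # dummy single-channel SAR image chips
probs = model(x)
pred = probs.argmax(dim=1)        # branch with the highest probability decides the class
```

In this sketch, each branch would be trained sequentially on its own binary (target vs. rest) labels while sharing the same backbone, mirroring the One-vs-All training order described in the abstract; the exact ITL schedule is not reproduced here.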