The use of a 1-bit representation for network weights, as opposed to the conventional 32-bit representation, has been investigated to reduce power consumption and memory footprint. Squeeze-and-Excitation (SE) based channel attention techniques aim to further reduce the number of parameters by eliminating redundant channels. However, the 1-bit representation introduces a significant drawback: the learning curve becomes unstable and slow compared to that of an SE network with full-precision parameters. To address this issue, this paper presents the first attempt to accelerate learning even when a 1-bit representation is used for the weights of the entire Squeeze-and-Excitation Residual Network (SEResNet14). The proposed technique within the SE module significantly speeds up channel attention and yields a steeper learning curve for the network. We also extensively investigate the impact of activation functions within the SE module to understand their performance-enhancing attributes when applied together with the proposed technique. Experimental results demonstrate that, even under stringent compression, an appropriate choice of activation function can still ensure the efficacy of the proposed technique in the SE module. Compared to a baseline that does not use the proposed scheme, the technique (1) reduces the number of epochs required to reach an error rate of 0.3 by 60%, and (2) lowers the error rate at the 10th epoch by approximately 44%.
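To make the setting concrete, the following is a minimal sketch of an SE channel-attention block whose excitation weights are binarized to 1 bit via a sign function with a straight-through estimator. This illustrates the general 1-bit SE setup discussed above, not the paper's proposed acceleration technique; the class names (BinarizeSTE, BinaryLinear, BinarySEBlock), the reduction ratio, and the activation argument are assumptions introduced here for illustration.

```python
# Minimal sketch (assumption): an SE block whose two fully connected layers
# use 1-bit (sign) weights with a straight-through estimator (STE).
# This is NOT the paper's proposed technique, only the baseline setting.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Binarize weights to {-1, +1}; pass gradients straight through."""

    @staticmethod
    def forward(ctx, weight):
        ctx.save_for_backward(weight)
        return torch.sign(weight)

    @staticmethod
    def backward(ctx, grad_output):
        (weight,) = ctx.saved_tensors
        # Standard STE: block gradients where |w| > 1.
        return grad_output * (weight.abs() <= 1).float()


class BinaryLinear(nn.Linear):
    """Linear layer that uses 1-bit weights in the forward pass."""

    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)


class BinarySEBlock(nn.Module):
    """SE channel attention with binarized excitation weights."""

    def __init__(self, channels, reduction=16, activation=nn.ReLU):
        super().__init__()
        self.fc1 = BinaryLinear(channels, channels // reduction)
        self.act = activation()  # the activation function under study
        self.fc2 = BinaryLinear(channels // reduction, channels)

    def forward(self, x):                      # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                 # squeeze: global average pooling
        s = torch.sigmoid(self.fc2(self.act(self.fc1(s))))
        return x * s.unsqueeze(-1).unsqueeze(-1)  # excite: rescale channels


# Usage example: attach to a residual block's output feature map.
if __name__ == "__main__":
    se = BinarySEBlock(channels=64, reduction=16)
    y = se(torch.randn(8, 64, 32, 32))
    print(y.shape)  # torch.Size([8, 64, 32, 32])
```

The `activation` argument corresponds to the design choice examined in the abstract: swapping the activation inside the SE module (e.g., ReLU versus other nonlinearities) while keeping the rest of the block fixed.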