Motivated by the fact that optimizing quantization error fails to minimize the accuracy degradation caused by the quantization of Neural Network (NN) weights, this paper highlights the need for a comprehensive analysis of post-training quantization that covers average bit rate, accuracy degradation, and Signal-to-Quantization-Noise Ratio (SQNR). One such analysis, for layer-wise uniform quantization and its application to NN weight compression, is presented here. We introduce an additional degree of freedom, the choice of Uniform Quantizer (UQ), allowing one of two UQs (a 2-bit or a 3-bit UQ) to be selected for each layer of the NN. However, choosing a particular bit rate allocation that minimizes both the average bit rate and the accuracy degradation requires finding a trade-off solution. To address this issue, for a post-training layer-wise uniform quantization framework with two UQs, we propose an algorithm that generates bit rate allocation patterns and the corresponding pairs of average bit rate and accuracy degradation. We analyze the output of this algorithm and determine a solution to the resulting two-objective optimization problem. To confirm the validity of the results, we also apply the Pareto dominance method. This work can be considered a solid foundation for further analyses of more complex and deeper NNs.
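
As an illustration of the described procedure, the minimal sketch below enumerates all bit rate allocation patterns for an NN whose layers each use one of the two candidate UQs (2-bit or 3-bit), pairs every pattern with its average bit rate and accuracy degradation, and then filters the non-dominated points via Pareto dominance. The layer sizes, the weighting of the average bit rate by parameter count, and the `accuracy_degradation` stub are illustrative assumptions, not the paper's exact definitions.

```python
from itertools import product

# Hypothetical per-layer weight counts for a small NN (illustrative only).
LAYER_SIZES = [784 * 256, 256 * 128, 128 * 10]
BIT_CHOICES = (2, 3)  # the two available UQs: 2-bit and 3-bit


def average_bit_rate(pattern, layer_sizes=LAYER_SIZES):
    """Average bits per weight; here weighted by parameter count (an assumption)."""
    total_bits = sum(b * n for b, n in zip(pattern, layer_sizes))
    return total_bits / sum(layer_sizes)


def accuracy_degradation(pattern):
    """Placeholder: in practice, quantize each layer with its assigned UQ,
    evaluate the quantized NN on a validation set, and return the accuracy drop."""
    raise NotImplementedError


def generate_pairs(evaluate=accuracy_degradation):
    """Enumerate every bit rate allocation pattern over the two UQs and
    return (pattern, average bit rate, accuracy degradation) triples."""
    results = []
    for pattern in product(BIT_CHOICES, repeat=len(LAYER_SIZES)):
        results.append((pattern, average_bit_rate(pattern), evaluate(pattern)))
    return results


def pareto_front(results):
    """Keep the points not dominated in both objectives
    (lower average bit rate and lower accuracy degradation)."""
    front = []
    for p, rate, deg in results:
        dominated = any(
            (r2 <= rate and d2 <= deg) and (r2 < rate or d2 < deg)
            for _, r2, d2 in results
        )
        if not dominated:
            front.append((p, rate, deg))
    return front
```

In this sketch the trade-off analysis reduces to inspecting the points returned by `pareto_front`; the paper's algorithm additionally analyzes these pairs to select a single solution of the two-objective problem.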