Beyond the basic connections, this research focuses on a new Neural Network (NN) model, referred to as the Plasticity Neural Network (PNN), in which both the current and the mnemonic range of action between synapses and neurons change plastically with iteration; that is, the model maintains both current and mnemonic synaptic range of action weights. In addition, the coding results for the PNN model show that as one synapse strengthens, neighboring synapses weaken to compensate; this mechanism is consistent with recent findings on brain plasticity from the MIT Picower Institute. Regarding the importance of this mechanism, Dr. Luo's team at Stanford University has noted that competition in synapse formation is important for dendritic morphogenesis. The influence of astrocytes on brain plasticity and synapse formation is an important mechanism of our neural network both during and at the end of critical periods. We examine in detail the mechanism by which brain plasticity fails at the end of critical periods by modeling it and contrasting the results with an earlier study [17].

The new PNN model modifies the NN framework not only with respect to synapse formation and brain plasticity driven by current gradient information, but also with respect to synapse formation and brain plasticity driven by mnemonic gradient information during critical periods. The mnemonic gradient information must account for forgotten memory through an astrocytic synapse formation mnemonic factor, and mnemonic brain plasticity involves a positive or negative disturbance through an astrocytic brain plasticity disturbance factor. The influence of astrocytes keeps the local synaptic range of action at an appropriate length during critical periods. When a synapse transfers a stronger stimulation signal from one neuron to another, the PNN changes the current synaptic range of action based on the Mean Squared Error (MSE) loss function, and vice versa. For a given neuron, the synaptic plasticity of the connected neurons from the input to the output units is enhanced or diminished with iteration. In the Recurrent Neural Network (RNN) setting, each input variable corresponds to neurons that share connection weights over a time interval; the weights are updated from the MSE loss of the activation of the output change within this synaptic range of action. The synaptic range of action is reflected in the time series, where the real values of the simulation lead to gradient-based changes in the synaptic range of action through Back Propagation. Back Propagation in the PNN therefore covers both the connection weights and the synaptic range of action weights, as sketched below. In the simulation, the
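The joint update of the two parameter sets can be illustrated with a short, self-contained sketch. This is not the PNN reference implementation: the PlasticLayer class, the interpretation of the range of action weights R as multiplicative gates on the connection weights W, and all hyperparameters are assumptions made for illustration only; the sketch merely shows how a single MSE loss can back-propagate into both sets of weights at once.

```python
# Minimal sketch, assuming the synaptic range of action can be modeled as a
# learnable per-connection gate R applied to the ordinary connection weights W,
# so that one MSE loss back-propagates into both parameter sets.
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    """Hypothetical layer with connection weights W and range-of-action weights R."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_out, n_in) * 0.1)  # connection weights
        self.R = nn.Parameter(torch.ones(n_out, n_in))          # range-of-action weights (assumed form)

    def forward(self, x):
        # Effective weight of each connection = connection weight gated by its range of action.
        return x @ (self.W * self.R).t()

torch.manual_seed(0)
layer = PlasticLayer(4, 1)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.05)
x = torch.randn(64, 4)
y = x[:, :1] - 0.5 * x[:, 1:2]                 # arbitrary regression target

for step in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(layer(x), y)  # MSE loss of the layer output
    loss.backward()                             # back propagation reaches both W and R
    optimizer.step()

print(f"final MSE: {loss.item():.4f}")
```

A scheme of this kind could, in principle, be extended to the RNN case by sharing W and R across the time steps that fall within the synaptic range of action, but the exact PNN update rule is not reproduced here.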