Dynamic channel modelling allows communication interfaces to integrate continuous learning operations for incremental bit error rate (BER) reduction. These models scan temporal BER patterns and then tune internal channel parameters to improve communication efficiency under real-time traffic conditions. However, such models exhibit high computational complexity and therefore do not scale to large network deployments. Moreover, they are inflexible and do not support denser channel models, which restricts their applicability in real-time scenarios. To overcome these issues, this work proposes the design of a novel dynamic learning method for improved channel modelling in phased-array-antenna mmWave radios via temporal breakpoint analysis. The model initially collects channel BER information and uses a Grey Wolf Optimization (GWO) technique to improve its internal model parameters. These parameters are further tuned via a novel breakpoint model, which enables continuous, lightweight tuning of the channel-modelling parameters. This allows the model to incrementally reduce BER even under high noise levels. The model is further cascaded with a Q-Learning-based optimization process, which assists in improving channel-modelling efficiency for large-scale networks. Owing to these integrations, the model reduces BER by 8.3% when compared with standard channel-modelling techniques based on Convolutional Neural Networks (CNNs), Sparse Bayesian Learning, and similar methods. These baselines were selected for comparison because of their high efficiency and scalability in real-time communication scenarios. The proposed model also exhibited 6.5% lower computational delay owing to its linear processing operations, and it achieved 10.4% better channel coverage, 8.5% higher throughput, and 4.9% higher channel-estimation accuracy, making it useful for a wide variety of real-time network deployments.
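To make the GWO-based tuning step concrete, the sketch below shows a minimal Grey Wolf Optimization loop applied to channel-parameter tuning. The surrogate cost `estimate_ber`, the parameter bounds, and the function name `gwo_tune` are hypothetical placeholders for illustration and do not reflect the paper's actual implementation, which additionally incorporates breakpoint analysis and Q-Learning.

```python
import numpy as np

def gwo_tune(fitness, bounds, n_wolves=20, n_iters=100, seed=0):
    """Minimal Grey Wolf Optimization (GWO) loop.

    fitness : callable mapping a parameter vector to a scalar cost
              (here, an estimated BER to be minimized).
    bounds  : sequence of (low, high) limits, one pair per parameter.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = bounds.shape[0]

    # Initialize the wolf pack uniformly inside the search box.
    wolves = rng.uniform(low, high, size=(n_wolves, dim))

    for t in range(n_iters):
        costs = np.array([fitness(w) for w in wolves])
        order = np.argsort(costs)
        # Copy the three best wolves so in-place updates below
        # do not shift the leaders mid-iteration.
        alpha, beta, delta = (wolves[j].copy() for j in order[:3])

        a = 2.0 - 2.0 * t / n_iters  # exploration factor, decays 2 -> 0

        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a               # step-size coefficient
                C = 2.0 * r2                       # leader-emphasis coefficient
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D          # move toward this leader
            wolves[i] = np.clip(new_pos / 3.0, low, high)

    costs = np.array([fitness(w) for w in wolves])
    best = int(np.argmin(costs))
    return wolves[best], float(costs[best])

# Hypothetical usage: a smooth surrogate standing in for a measured BER curve.
target = np.array([0.4, 1.2, -0.7])
estimate_ber = lambda p: float(np.sum((p - target) ** 2) + 1e-3)
params, ber = gwo_tune(estimate_ber, bounds=[(-2, 2)] * 3)
print(params, ber)
```

In a deployed system, `estimate_ber` would be replaced by a measurement-driven fitness evaluated over a window of observed BER samples, with the breakpoint model deciding when re-tuning is triggered.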