Online decision-making can be formulated as the popular stochastic multi-armed bandit problem, in which a learner makes decisions (or takes actions) to maximize the cumulative reward collected from an unknown environment. A specific variant is the non-stationary stochastic multi-armed bandit problem, in which the reward distributions, which are unknown to the learner, change over time. This paper proposes to model non-stationary stochastic multi-armed bandits as an unknown stochastic linear dynamical system, a perspective from which many applications, such as bandits for dynamic pricing or hyperparameter selection for machine learning models, can benefit. Under this model, we can build a matrix representation of the system's steady-state Kalman filter that takes previously collected observations from a time interval of length s and predicts the next reward that each action will return. This paper proposes an adaptive algorithm that determines the parameter s by analyzing the model uncertainty of the matrix representation, allowing the learner to adjust both its model size and its length of exploration according to the uncertainty of its environmental model. Extensive numerical studies demonstrate that the proposed scheme increases the rate at which cumulative rewards are collected.
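To make the matrix representation and the role of the window length $s$ concrete, the following is a minimal sketch assuming a standard linear-Gaussian state-space model with steady-state Kalman gain $K$; the symbols $A$, $C$, $K$, $x_t$, and $y_t$ are illustrative notation rather than definitions taken from the body of the paper:
\begin{align*}
x_{t+1} &= A x_t + w_t, \qquad y_t = C x_t + v_t, \\
\hat{x}_{t+1} &= (A - KC)\,\hat{x}_t + K y_t, \qquad \hat{y}_{t+1} = C \hat{x}_{t+1}, \\
\hat{y}_{t+1} &\approx G \begin{bmatrix} y_t^\top & y_{t-1}^\top & \cdots & y_{t-s+1}^\top \end{bmatrix}^\top, \qquad G = \begin{bmatrix} CK & C(A-KC)K & \cdots & C(A-KC)^{s-1}K \end{bmatrix}.
\end{align*}
Because the closed-loop matrix $A - KC$ is stable, unrolling the steady-state predictor and truncating it after $s$ terms incurs an error that decays geometrically in $s$; the matrix $G$ can then be estimated from past observations (e.g., by least squares), with a larger $s$ trading a smaller truncation bias against more parameters to fit.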