In this paper, we analyze the optimization landscape of the gradient descent method for static output feedback (SOF) control of discrete-time linear time-invariant systems with quadratic cost. The SOF setting arises commonly in practice, for example, when the underlying process contains unmodeled hidden states. We first establish several important properties of the SOF cost function, including coercivity, L-smoothness, and M-Lipschitz continuous Hessian. Based on these results, we show that when the observation matrix has full column rank, gradient descent converges to the globally optimal controller at a linear rate. In the general partially observed case, we establish convergence to stationary points and characterize the corresponding convergence rate. In this more challenging case, we further prove that under mild conditions, gradient descent converges linearly to a local minimum if the starting point is sufficiently close to one. These results not only characterize the performance of gradient descent on the SOF problem, but also shed light on the efficiency of general policy gradient methods in reinforcement learning.
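To fix notation, a standard formulation of the discrete-time SOF problem and the gradient descent iteration analyzed here can be sketched as follows; the symbols $A$, $B$, $C$, $Q$, $R$, the initial-state distribution $\mathcal{D}$, and the step size $\eta$ are not specified in this abstract and are introduced only for illustration under the usual quadratic-cost conventions.

% Assumed standard setup (illustrative sketch, not stated verbatim in the abstract):
% dynamics x_{t+1} = A x_t + B u_t, observation y_t = C x_t, static output feedback u_t = -K y_t.
\[
  \min_{K}\; J(K) \;=\; \mathbb{E}_{x_0 \sim \mathcal{D}}
    \left[\sum_{t=0}^{\infty} \left(x_t^{\top} Q x_t + u_t^{\top} R u_t\right)\right],
  \qquad
  K_{n+1} \;=\; K_n - \eta\, \nabla J(K_n),
\]
% where Q and R are the quadratic cost weights, \mathcal{D} the initial-state distribution,
% and \eta the step size of the gradient descent iteration.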