In the Internet of Things (IoT), the freshness of status updates is crucial for mission-critical applications. To this end, the Age of Information (AoI) has been proposed to quantify the freshness of updates from the receiver's perspective. Specifically, the AoI measures freshness over time; however, it neglects freshness in the information content, i.e., whether the content of an update has actually changed. In this paper, we introduce an age-based utility, named Age of Changed Information (AoCI), which captures both the passage of time and the change of information content. By modeling the underlying physical process as a discrete-time Markov chain, we investigate the AoCI in a time-slotted status update system, where a sensor samples the physical process and transmits update packets to the destination. With the aim of minimizing the weighted sum of the AoCI and the update cost, we formulate an infinite-horizon average-cost Markov decision process. We show that the optimal updating policy has a special structure with respect to the AoCI and identify the condition under which this special structure exists. By exploiting the special structure, we provide a low-complexity relative policy iteration algorithm that finds the optimal updating policy. We further investigate the optimal policy for two special cases. In the first case, where the states of the physical process transition with equal probability, we show that the optimal policy is of threshold type and derive the optimal threshold in closed form. In the second case, we study a more general periodic Markov model of the physical process. Lastly, simulation results demonstrate the performance of the optimal updating policy and its superiority over the zero-wait baseline policy.
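To make the distinction between AoI and AoCI concrete, the following is a minimal sketch of a time-slotted, zero-wait update system over a two-state discrete-time Markov chain. The reset rules are assumptions for illustration, not taken from the paper: AoI resets to 1 on every delivery, while AoCI resets to 1 only when the delivered state differs from the previously delivered one and otherwise keeps growing. The function name `simulate` and the parameter `p_change` are hypothetical.

```python
import random

def simulate(p_change=0.5, horizon=10000, seed=0):
    """Compare time-average AoI and AoCI in a toy slotted update system.

    Assumed dynamics (illustrative only): the sensor samples and delivers
    in every slot (zero-wait). AoI resets to 1 at each delivery; AoCI
    resets to 1 only if the delivered content changed since the last
    delivery, otherwise it increases by 1 per slot.
    """
    rng = random.Random(seed)
    state = 0            # current state of the physical process
    last_delivered = 0   # content of the most recently delivered update
    aoci = 1
    aoi_sum = aoci_sum = 0.0
    for _ in range(horizon):
        # Two-state symmetric DTMC: flip with probability p_change.
        if rng.random() < p_change:
            state = 1 - state
        # Zero-wait policy: deliver the fresh sample every slot.
        aoi = 1
        if state != last_delivered:
            aoci = 1        # changed content: AoCI resets
        else:
            aoci += 1       # unchanged content: AoCI keeps growing
        last_delivered = state
        aoi_sum += aoi
        aoci_sum += aoci
    return aoi_sum / horizon, aoci_sum / horizon
```

Under these assumptions the time-average AoI is always 1, whereas the time-average AoCI grows as `p_change` shrinks, since updates carrying unchanged content do not reset it; this is the gap that motivates weighing the AoCI against the update cost rather than updating in every slot.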