The integration of set-valued ordered rough set models with incremental learning represents a substantive advancement of conventional rough set theory, aimed at handling the heterogeneity and continual change of information systems. In a set-valued ordered decision system, changes in the attribute value domain, such as the addition or removal of conditional values, can alter the preference relation between objects and thereby change the approximations. In this paper, we address the problem of updating approximations when conditional values are added to or removed from a set-valued ordered decision system. First, we partition the objects into two categories, those whose conditional values change and those whose values remain unchanged, and develop approximation update theories for each category under the addition and removal of conditional values. We then present incremental algorithms corresponding to these update theories. Numerical examples demonstrate the feasibility of the proposed incremental update method, and experiments on several datasets show that the incremental algorithms substantially reduce processing time compared with their static counterparts. Overall, this study offers a promising strategy for handling set-valued ordered decision systems in dynamic environments.
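The mechanism described above, in which a change to an object's value set perturbs the preference relation, can be illustrated with a minimal sketch. This is not the paper's algorithm; the disjunctive superset semantics for dominance and all object and attribute names are illustrative assumptions.

```python
# Toy set-valued ordered decision system: each object holds a set of
# conditional values per attribute. Under the assumed disjunctive
# semantics, x dominates y on an attribute when x's value set is a
# superset of y's value set.

def dominates(x_vals, y_vals):
    """x dominates y iff x's value set contains y's on every attribute."""
    return all(x_vals[a] >= y_vals[a] for a in x_vals)

def dominating_class(obj, system):
    """The set of objects that dominate `obj` under the relation above."""
    return {o for o, vals in system.items() if dominates(vals, system[obj])}

# Hypothetical system with one attribute "a" and three objects.
system = {
    "x1": {"a": {1, 2}},
    "x2": {"a": {1}},
    "x3": {"a": {2}},
}

before = dominating_class("x2", system)  # x1 and x2 dominate x2

# Adding a conditional value to x3 enlarges its value set, which changes
# the dominance relation and hence the classes used in approximations.
system["x3"]["a"].add(1)
after = dominating_class("x2", system)   # x3 now also dominates x2
```

Only the dominance classes of objects related to the modified object change, which is precisely the observation that makes incremental (rather than full) recomputation of approximations possible.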