Fast covariance calculation is required both for SLAM (e.g. in order to solve data association) and for evaluating the information-theoretic term for different candidate actions in belief space planning (BSP). In this paper we make two primary contributions. First, we develop a novel general-purpose incremental covariance update technique, which efficiently recovers specific covariance entries after any change in the inference problem, such as the introduction of new observations/variables or re-linearization of the state vector. Our approach is shown to recover these entries faster than other state-of-the-art methods. Second, we present a computationally efficient approach for BSP in high-dimensional state spaces, leveraging our incremental covariance update method. State-of-the-art BSP approaches perform belief propagation for each candidate action and then evaluate an objective function that typically includes an information-theoretic term, such as entropy or information gain. Yet, candidate actions often share common parts (e.g. common trajectory segments), which are nevertheless evaluated separately for each candidate. Moreover, calculating the information-theoretic term involves a costly determinant computation of the entire information (covariance) matrix, which is O(n^3) with n the dimension of the state, or costly Schur complement operations if only the marginal posterior covariance of certain variables is of interest. Our approach, rAMDL-Tree, extends our previous BSP method rAMDL (Kopitkov and Indelman, 2017) by exploiting incremental covariance calculation and re-using calculations between common parts of non-myopic candidate actions, such that these parts are evaluated only once, in contrast to existing approaches. To that end, we represent all candidate actions together in a single unified graphical model, which we introduce and call a factor-graph propagation (FGP) action tree. Each arrow (edge) of the FGP action tree represents a sub-action of one or more candidate action sequences; to evaluate its information impact, we require specific covariance entries of the intermediate belief represented by the tree vertex from which the edge emanates (i.e. the tail of the arrow). Overall, our approach involves only a one-time calculation that depends on n, while the evaluation of each action's impact does not depend on n. We perform a careful examination of our approaches in simulation, considering the problem of autonomous navigation in unknown environments, where rAMDL-Tree shows superior performance compared to rAMDL while determining the same best actions.
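For concreteness, the information-theoretic term for a Gaussian belief can be sketched as follows (the notation here is assumed, not taken from the paper: \Lambda_k denotes the prior information matrix, \Lambda_{k+L} the posterior after a non-myopic action a = (a_1, \dots, a_L), and for simplicity the state dimension is taken as unchanged):

IG(a) = H(b_k) - H(b_{k+L}) = \frac{1}{2} \ln \frac{|\Lambda_{k+L}|}{|\Lambda_k|} = \sum_{l=1}^{L} \frac{1}{2} \ln \frac{|\Lambda_{k+l}|}{|\Lambda_{k+l-1}|}.

The per-edge terms in the sum telescope, so a sub-action shared by several candidates contributes an identical term to each of them and needs to be evaluated only once; evaluating each determinant naively, however, costs O(n^3).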
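The following is a minimal toy sketch, not the paper's implementation, of the calculation re-use enabled by such an action tree: each tree edge (sub-action) is evaluated exactly once and its contribution is shared by every candidate sequence (root-to-leaf path) passing through it. The names TreeNode, edge_ig and candidate_igs are hypothetical, and the toy edge evaluation uses full log-determinants for clarity, whereas the actual approach requires only specific covariance entries so that per-edge evaluation does not depend on n.

import numpy as np

class TreeNode:
    """A vertex of the action tree; holds the information matrix of the intermediate belief it represents (toy Gaussian belief)."""
    def __init__(self, info_matrix):
        self.info = info_matrix
        self.children = []               # list of (sub-action Jacobian, child TreeNode)

    def add_child(self, A):
        """Apply a sub-action with (toy) stacked factor Jacobian A: Lambda' = Lambda + A^T A."""
        child = TreeNode(self.info + A.T @ A)
        self.children.append((A, child))
        return child

def edge_ig(parent, child):
    """IG of one tree edge: 0.5 * (ln|Lambda_child| - ln|Lambda_parent|).
    (Toy version; uses full log-determinants instead of specific covariance entries.)"""
    return 0.5 * (np.linalg.slogdet(child.info)[1] - np.linalg.slogdet(parent.info)[1])

def candidate_igs(node, ig_so_far=0.0):
    """Depth-first traversal: each edge is evaluated once, and its IG is added to
    every candidate sequence (root-to-leaf path) that contains it."""
    if not node.children:
        return [ig_so_far]
    out = []
    for _, child in node.children:
        out.extend(candidate_igs(child, ig_so_far + edge_ig(node, child)))
    return out

# Usage: two candidate sequences sharing a first sub-action; the shared edge is evaluated once.
rng = np.random.default_rng(0)
root = TreeNode(np.eye(4))
shared = root.add_child(rng.normal(size=(2, 4)))   # common trajectory part
shared.add_child(rng.normal(size=(2, 4)))          # candidate 1 continues one way
shared.add_child(rng.normal(size=(2, 4)))          # candidate 2 continues another way
print(candidate_igs(root))                         # IG of each candidate sequence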