Even in relatively simple settings, model misspecification can make the application and interpretation of Bayesian inference difficult. One approach to making Bayesian analyses fit for purpose in the presence of model misspecification is the use of cutting feedback methods. These methods modify conventional Bayesian inference by limiting the influence of one part of the model on the rest, "cutting" the link between certain components. We examine cutting feedback methods in the context of generalized posterior distributions, i.e., posteriors built from arbitrary loss functions, and provide novel results on their behaviour. A direct product of our results is a set of diagnostic tools that allow for quick and easy analysis of two key features of cut posterior distributions: first, how uncertainty about the model unknowns in one component impacts inferences about unknowns in other components; second, how the incorporation of additional information impacts the cut posterior distribution.