Object-based audio presents the opportunity to optimize audio reproduction for different listening scenarios. Vector base amplitude panning (VBAP) is typically used to render object-based scenes. Optimizing this process based on knowledge of the perception and practices of experts could significantly improve the end user's listening experience. An experiment was conducted to investigate how content creators perceive changes in the perceptual attributes of the same content rendered to systems with different numbers of channels, and to determine what they would do differently from standard VBAP and matrix-based downmixes to minimize these changes. Text mining and clustering of the content creators' responses revealed six general mix processes: spatial spread of individual objects, EQ and processing, reverberation, position, bass, and level. Logistic regression models show the relationships between the mix processes, the perceived changes in perceptual attributes, and the rendering method and loudspeaker layout. The relative frequency of use of the different mix processes was found to differ between categories of audio objects, suggesting that any downmix rules should be specific to the object category. These results give insight into how object-based audio can be used to improve the listener experience and provide the first template for doing so across different reproduction systems.
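
Since standard VBAP rendering is the baseline the abstract refers to, the following is a minimal sketch of the triplet-wise VBAP gain calculation in Pulkki's 1997 formulation; the function name, the power normalization, and the example loudspeaker layout are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def vbap_gains(source_dir, speaker_dirs):
    """Gains for one loudspeaker triplet via VBAP (after Pulkki, 1997).

    source_dir   : unit vector toward the virtual source, shape (3,)
    speaker_dirs : unit vectors toward the three loudspeakers,
                   shape (3, 3), one loudspeaker per row
    """
    L = np.asarray(speaker_dirs, dtype=float)
    p = np.asarray(source_dir, dtype=float)
    # Solve p = g @ L for the gain vector g (equivalently, L^T g = p).
    g = np.linalg.solve(L.T, p)
    # Negative gains mean the source lies outside this triplet;
    # a full renderer would select a different triplet instead.
    g = np.maximum(g, 0.0)
    # Power-normalize so that sum(g**2) == 1 (constant perceived loudness).
    return g / np.linalg.norm(g)

# Example: a frontal source panned between left/right speakers at +/-30 degrees
# and one elevated speaker; the center source yields equal left/right gains.
fl = np.array([np.cos(np.radians(30)),  np.sin(np.radians(30)), 0.0])
fr = np.array([np.cos(np.radians(-30)), np.sin(np.radians(-30)), 0.0])
top = np.array([0.0, 0.0, 1.0])
print(vbap_gains(np.array([1.0, 0.0, 0.0]), np.vstack([fl, fr, top])))
# -> approximately [0.707, 0.707, 0.0]
```

The sketch covers only the gain solve for a single active triplet; the downmix and object-category-specific rules investigated in the paper would sit on top of (or replace parts of) this step.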