Everyday visual search tasks require objects to be categorized according to behavioral goals. For example, when searching for an apple at the supermarket, one might first find the Granny Smith apples by separating all visible apples into the categories “green” and “non-green”. However, suddenly remembering that one’s family actually prefers Fuji apples would necessitate reconfiguring the boundary to instead separate “red” from “red-yellow” objects. Despite this need for flexibility, prior research on categorization has largely focused on neural changes related to overlearning a single category boundary that bifurcates an object space. Meanwhile, studies of feature-based attention have provided some insight into the flexible selection of features, but have mainly examined selection of a single, usually low-level, feature, which is rarely sufficient to capture the complexity of categorizing higher-dimensional object sets. Here we addressed these gaps by asking human participants to categorize novel shape stimuli according to different linear and non-linear boundaries, a task that requires dynamically reconfiguring selective attention to emphasize different sets of abstract features. Using fMRI and multivariate analyses of retinotopically defined visual areas, we found that shape representations in visual cortex became more distinct across relevant category boundaries in a context-dependent manner, with the largest changes in discriminability observed for stimuli near the category boundary. Importantly, these attention-induced modulations were linked to categorization performance. Together, these findings demonstrate that adaptive attentional modulations can alter representations of abstract feature dimensions in visual cortex to optimize object separability according to currently relevant category boundaries.