Conjoint experiments offer a flexible way to elicit population preferences on complex decision tasks. We investigate whether we can improve respondents’ survey experience and, ultimately, choice quality by departing from the current recommendation of completely randomized conjoint attribute ordering. Such random ordering guarantees that potential bias from attribute order cancels out on average. In designs with many attributes, however, it may unnecessarily increase cognitive burden, because attributes that belong together conceptually are scattered across the choice table. We therefore study experimentally whether purposeful ordering (“theoretically important” attributes first) or block-randomized ordering (attributes belonging to the same theoretical concept displayed in randomized bundles) affects survey experience, response time, and choice itself, compared with completely randomized ordering. Drawing on a complex preregistered choice design with nine attributes (N = 6,617), we find that ordering type affects neither self-reported survey experience, nor choice-task timing, nor attribute weighting. Block randomization may, however, reduce cognitive burden for some subgroups. To our knowledge, we thereby provide the first systematic empirical evidence that ordering effects are likely of low relevance in conjoint choice experiments and that the trade-off between cognitive burden and ordering effects is minimal from respondents’ perspective, at least in our substantive application.
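The three ordering schemes contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the attribute names, their grouping into conceptual blocks, and the priority ranking are all hypothetical placeholders.

```python
import random

# Hypothetical attributes grouped into theoretical concepts
# (names and groupings are illustrative, not the study's actual design).
BLOCKS = {
    "need":     ["health_status", "urgency", "prognosis"],
    "merit":    ["past_behavior", "contribution"],
    "equality": ["age", "gender", "income", "waiting_time"],
}

# Hypothetical ranking for purposeful ordering: "important" attributes first.
PRIORITY = ["urgency", "prognosis", "health_status",
            "past_behavior", "contribution",
            "waiting_time", "age", "income", "gender"]

def completely_randomized(rng):
    """Current default recommendation: shuffle all attributes independently,
    so conceptually related attributes end up scattered across the table."""
    order = [a for attrs in BLOCKS.values() for a in attrs]
    rng.shuffle(order)
    return order

def purposeful(rng):
    """Fixed order with theoretically important attributes displayed first."""
    return list(PRIORITY)

def block_randomized(rng):
    """Shuffle the order of blocks, and of attributes within each block,
    while keeping each conceptual bundle displayed contiguously."""
    block_names = list(BLOCKS)
    rng.shuffle(block_names)
    order = []
    for name in block_names:
        attrs = list(BLOCKS[name])
        rng.shuffle(attrs)
        order.extend(attrs)
    return order

rng = random.Random(42)
print(completely_randomized(rng))
print(purposeful(rng))
print(block_randomized(rng))
```

Note the key structural difference: all three schemes produce a permutation of the same nine attributes, but only block randomization guarantees that attributes from the same concept appear as an unbroken run in the choice table.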