Two fundamental and prominent methods for multi-label classification, Binary Relevance (BR) and Classifier Chains (CC), are usually considered distinct methods without a direct relationship. However, BR and CC can be generalized to a single method: blockwise classifier chains (BCC), in which the labels within a block (i.e., a group of labels of fixed size) are predicted independently as in BR, and the block's predictions are then used as inputs for the next block's labels as in CC. In other words, only the blocks are connected in a chain. BR is then a special case of BCC with a block size equal to the number of labels, and CC a special case with a block size equal to one. The rationale behind BCC is to limit the propagation of errors made by inaccurate classifiers early in the chain, which the expected block effect should alleviate. A different generalization, based on the divide-and-conquer principle rather than on error propagation, fails to exhibit the desired block effect. Ensembles of BCC are also discussed, and experiments confirm that their performance is on par with ensembles of CC. Further experiments show the effect of the block size, in particular with respect to the performance of the two extremes, BR and CC. As it turns out, some regions of the block size parameter space degrade performance, whereas others improve it to a noticeable but modest extent.
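The blockwise scheme described above can be sketched in a few lines. The following is a minimal, hypothetical implementation (not the authors' code), assuming binary label matrices and scikit-learn's `LogisticRegression` as an arbitrary base learner: labels are split into consecutive blocks, each label within a block is fitted independently on the current feature set (as in BR), and each finished block's labels are appended to the features used by later blocks (as in CC).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class BlockwiseClassifierChain:
    """Sketch of blockwise classifier chains (BCC).

    block_size == n_labels recovers BR; block_size == 1 recovers CC.
    """

    def __init__(self, block_size, base=LogisticRegression):
        self.block_size = block_size
        self.base = base  # assumed base learner; any binary classifier works

    def fit(self, X, Y):
        n_labels = Y.shape[1]
        # split the label indices into consecutive blocks of fixed size
        self.blocks_ = [list(range(i, min(i + self.block_size, n_labels)))
                        for i in range(0, n_labels, self.block_size)]
        self.models_ = []
        Z = X
        for block in self.blocks_:
            # within a block: one independent binary classifier per label (BR)
            self.models_.append([self.base().fit(Z, Y[:, j]) for j in block])
            # chain step: augment features with this block's labels (CC)
            Z = np.hstack([Z, Y[:, block]])
        return self

    def predict(self, X):
        Z = X
        preds = []
        for models in self.models_:
            block_pred = np.column_stack([m.predict(Z) for m in models])
            preds.append(block_pred)
            # at test time, chain on the block's *predicted* labels
            Z = np.hstack([Z, block_pred])
        return np.hstack(preds)
```

Note that, as in standard classifier chains, training augments the features with the true labels of earlier blocks, whereas prediction must chain on the predicted labels; the block structure limits how far an early mistake can propagate, since labels inside a block cannot contaminate one another.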