Convolutional neural networks (CNNs) offer state-of-the-art performance in various computer vision tasks such as activity recognition, face detection, and medical image analysis. Many of these tasks require invariance to image transformations (e.g., rotations, translations, or scaling). This work proposes a versatile, straightforward, and interpretable measure to quantify the (in)variance of CNN activations with respect to transformations of the input. Intermediate outputs of feature maps and fully connected layers are also analyzed with respect to different input transformations. The technique is applicable to any type of neural network and/or transformation. We validate it on rotation transformations by comparing the relative (in)variance of several networks. More specifically, ResNet, AllConvolutional, and VGG architectures were trained on the CIFAR10 and MNIST datasets, with and without rotational data augmentation. Experiments reveal that the rotational (in)variance of CNN outputs is class conditional. A distribution analysis also shows that lower layers are the most invariant, which seems to contradict previous guidelines that recommend placing invariances near the network output and equivariances near the input.
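
The abstract does not spell out the measure itself, so the following is only a minimal sketch of one plausible instantiation: a normalized-variance score computed per activation across rotated copies of each input. The function name `transformational_variance` and the `activation_fn` interface are hypothetical placeholders, not the paper's actual API.

```python
import numpy as np
from scipy.ndimage import rotate


def transformational_variance(activation_fn, images, angles):
    """Sketch of a variance-based (in)variance score (assumed formulation).

    activation_fn: hypothetical callable mapping a batch of images
        (N, H, W, C) to a matrix of activations (N, D).
    images: array of shape (N, H, W, C).
    angles: iterable of rotation angles in degrees.

    Returns a length-D array: for each activation, the mean (over samples)
    of its variance across rotated copies, normalized by that activation's
    variance over all samples and rotations. Values near 0 suggest
    rotational invariance; larger values suggest sensitivity to rotation.
    """
    # Activations for every (angle, sample) pair: shape (A, N, D).
    acts = np.stack([
        activation_fn(rotate(images, angle, axes=(1, 2), reshape=False))
        for angle in angles
    ])
    # Variance across rotations of the same sample, averaged over samples.
    var_over_angles = acts.var(axis=0).mean(axis=0)                # (D,)
    # Overall variance across all samples and rotations (normalizer).
    var_overall = acts.reshape(-1, acts.shape[-1]).var(axis=0)     # (D,)
    eps = 1e-12  # avoid division by zero for dead units
    return var_over_angles / (var_overall + eps)
```

The normalization step matters: without dividing by the overall variance, an activation could look "invariant" simply because it is nearly constant for every input, rotated or not.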