Deep learning algorithms will play a key role in the upcoming runs of the Large Hadron Collider (LHC), supporting tasks that range from fast and accurate detector simulation to physics analyses probing possible deviations from the Standard Model. The game-changing feature of these new algorithms is their ability to extract relevant information from high-dimensional input spaces, often regarded as “replacing the expert” in the design of physics-intuitive variables. While this may seem true at first glance, it is far from reality. Existing research shows that physics-inspired feature extractors have many advantages beyond improving the qualitative understanding of the extracted features. In this review, we systematically explore automatic feature extraction from a phenomenological viewpoint and the motivation for physics-inspired architectures. We also discuss how prior physics knowledge makes the point cloud a natural representation of collider data, and we survey graph-based applications to LHC phenomenology.