Recent work in equivariant deep learning bears strong similarities to physics. Fields over a base space are fundamental entities in both subjects, as are equivariant maps between these fields. In deep learning, however, these maps are usually defined by convolutions with a kernel, whereas in physics they are partial differential operators (PDOs). Developing the theory of equivariant PDOs in the context of deep learning could bring these subjects even closer together and lead to a stronger flow of ideas. In this work, we derive a G-steerability constraint that completely characterizes when a PDO between feature vector fields is equivariant, for arbitrary symmetry groups G. We then fully solve this constraint for several important groups. We use our solutions as equivariant drop-in replacements for convolutional layers and benchmark them in that role. Finally, we develop a framework for equivariant maps based on Schwartz distributions that unifies classical convolutions and differential operators and gives insight into the relation between the two.

However, one remaining difference is that physics uses equivariant partial differential operators (PDOs), such as the gradient or the Laplacian, to define maps between fields. Using PDOs instead of convolutions in deep learning would therefore complete the analogy to physics and could lead to an even stronger transfer of ideas between the subjects. Equivariant PDO-based networks have already been designed in prior work [21][22][23]. Most relevant for our work are PDO-eConvs [22], which can be seen as the PDO analogue of group convolutions.

* Work done during an internship at QUVA Lab

Preprint. Under review.
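To make the connection between PDOs and equivariance concrete, the sketch below (a hypothetical illustration, not code from this work) discretizes the Laplacian as a finite-difference stencil and checks that the resulting operator commutes with 90-degree rotations of the input field. The stencil itself is invariant under such rotations, which is exactly why the operator is equivariant; the helper name `apply_pdo` is an assumption introduced here for illustration.

```python
import numpy as np

# The Laplacian discretized as a 3x3 finite-difference stencil.
# The cross-shaped stencil is invariant under 90-degree rotations,
# so the operator it defines is equivariant under those rotations.
laplace_stencil = np.array([[0.,  1., 0.],
                            [1., -4., 1.],
                            [0.,  1., 0.]])

def apply_pdo(field, stencil):
    """Apply a discretized PDO to a 2D scalar field (valid correlation)."""
    h, w = field.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(field[i:i + 3, j:j + 3] * stencil)
    return out

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))

# Equivariance check: rotating the field and then applying the PDO
# gives the same result as applying the PDO and then rotating.
lhs = apply_pdo(np.rot90(f), laplace_stencil)
rhs = np.rot90(apply_pdo(f, laplace_stencil))
print(np.allclose(lhs, rhs))  # True
```

For a generic, asymmetric stencil this check fails; the paper's G-steerability constraint characterizes precisely which PDO coefficients yield equivariant operators for a given group G.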