Deep learning has been used to optimize a wide range of wireless problems. Yet most existing works assume that training and test samples are drawn from the same distribution, which rarely holds in dynamic wireless environments. As a result, a deep neural network (DNN) may need to be retrained with newly collected samples to adapt to a new scenario. The retraining complexity can be reduced by introducing inductive biases, which can either be learned automatically or be embedded into DNNs manually. For example, inductive biases can be learned from related tasks by meta-learning, in the form of good initializations, reusable modules, and feature extractors, or be introduced by designing structured DNNs such as graph neural networks (GNNs) with appropriate permutation equivariance (PE) properties. However, all previous works on meta-learning have overlooked these a priori known PE properties, which widely exist in wireless policies and can be leveraged to improve sample efficiency. This article reviews meta-learning for wireless communications and compares it with GNNs trained for a single task. We first introduce meta-learning by contrasting it with conventional learning, analyze its inductive biases, and review its applications to radio transmission. We then compare meta-learning with GNNs trained for a single task, taking a beamforming problem as an example to show their respective pros and cons in handling the mismatch issue. Simulation results show that GNNs are sample-efficient, while meta-learning can reduce the time complexity of adaptation. Neither the considered GNNs nor the meta-learning methods, however, adapt efficiently to environments with time-varying problem scales.
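
To make the "good initialization" form of inductive bias concrete, the following is a minimal sketch of gradient-based meta-learning in the Reptile style, which learns an initialization from related tasks so that a few inner-loop gradient steps suffice for adaptation. It is not the article's actual method or setup: the sinusoid-regression task family (standing in for per-scenario wireless channels), the network size, and the hyperparameters are all illustrative assumptions.

```python
import copy
import torch
from torch import nn

def sample_task():
    """Return a toy regression task y = a*sin(x + b) with random amplitude/phase.
    Each draw plays the role of one 'scenario' in a family of related tasks."""
    a = torch.rand(1) * 4.0 + 1.0
    b = torch.rand(1) * 3.14
    def batch(n=20):
        x = torch.rand(n, 1) * 10.0 - 5.0
        return x, a * torch.sin(x + b)
    return batch

# Shared initialization to be meta-learned (architecture is an assumption).
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5
loss_fn = nn.MSELoss()

for meta_iter in range(1000):
    batch = sample_task()
    learner = copy.deepcopy(net)                 # start from the shared initialization
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                 # inner loop: adapt to this task
        x, y = batch()
        opt.zero_grad()
        loss_fn(learner(x), y).backward()
        opt.step()
    # Reptile outer update: nudge the initialization toward the adapted weights,
    # so future tasks need only a few gradient steps from `net`.
    with torch.no_grad():
        for p, q in zip(net.parameters(), learner.parameters()):
            p += meta_lr * (q - p)
```

At deployment, the same inner loop is run once on a handful of samples from the new scenario, which is what allows meta-learning to trade training-time effort for fast adaptation.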