Reinforcement learning algorithms are usually trained for a specific task and therefore tend to perform well only in the training environment; when the task changes, performance drops significantly because the learned policy cannot adapt to new environments and tasks. For position control of a pneumatic continuum manipulator (PCM), tasks share a high degree of similarity, so training on a new task can be accelerated by reusing the experience gathered on other tasks. To improve the adaptability of control policies to new tasks, this paper proposes an adaptive position control algorithm for the PCM based on Model-Agnostic Meta-Learning (MAML) meta-reinforcement learning. The MAML algorithm trains the PCM control policy across multiple tasks, and the information and experience collected during this multi-task training are used to quickly learn control policies for new tasks, improving the PCM's ability to adapt rapidly. The experimental results demonstrate that, after training with the MAML meta-reinforcement learning algorithm, the PCM requires significantly less training time when faced with new tasks and obtains control policies suited to them.
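To illustrate the mechanism behind the fast adaptation described above, the following is a minimal sketch of the MAML meta-reinforcement-learning update: an inner loop adapts the policy to each sampled task with a single gradient step, and an outer loop updates the meta-parameters so that the adapted policies perform well. The network, rollout function, task definition (random target positions), and step sizes are illustrative placeholders, not the paper's actual PCM environment or hyperparameters.

```python
# Minimal MAML meta-RL sketch (illustrative; placeholder PCM task and hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 6, 3      # placeholder PCM state/action dimensions
INNER_LR, META_LR = 0.05, 1e-3    # assumed inner-/outer-loop step sizes

class PolicyNet(nn.Module):
    """Gaussian policy with fixed std; outputs action means from states."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, ACTION_DIM))

    def forward(self, states, params=None):
        if params is None:
            return self.net(states)
        # Functional forward pass with externally supplied (adapted) parameters.
        x = torch.tanh(F.linear(states, params[0], params[1]))
        return F.linear(x, params[2], params[3])

def surrogate_loss(policy, rollout, params=None):
    """REINFORCE-style surrogate: -E[log pi(a|s) * return]."""
    states, actions, returns = rollout
    dist = torch.distributions.Normal(policy(states, params), 1.0)
    return -(dist.log_prob(actions).sum(-1) * returns).mean()

def sample_task_rollout(task_goal, batch=32):
    """Placeholder for collecting a rollout on one PCM position-control task."""
    states = torch.randn(batch, STATE_DIM)
    actions = torch.randn(batch, ACTION_DIM)
    returns = -torch.norm(states[:, :3] - task_goal, dim=-1)  # closer to goal = higher return
    return states, actions, returns

policy = PolicyNet()
meta_opt = torch.optim.Adam(policy.parameters(), lr=META_LR)

for meta_iter in range(100):
    meta_loss = 0.0
    tasks = [torch.randn(3) for _ in range(4)]   # a batch of placeholder target positions
    for goal in tasks:
        # Inner loop: one gradient step of task-specific adaptation on a support rollout.
        support = sample_task_rollout(goal)
        loss = surrogate_loss(policy, support)
        grads = torch.autograd.grad(loss, policy.parameters(), create_graph=True)
        adapted = [p - INNER_LR * g for p, g in zip(policy.parameters(), grads)]
        # Outer loop: evaluate the adapted parameters on a fresh (query) rollout.
        query = sample_task_rollout(goal)
        meta_loss = meta_loss + surrogate_loss(policy, query, params=adapted)
    meta_opt.zero_grad()
    (meta_loss / len(tasks)).backward()   # meta-gradient through the adaptation step
    meta_opt.step()
```

After meta-training in this manner, adapting to a new task amounts to repeating only the inner-loop step (a few gradient updates on data from the new task), which is what allows the control policy to be obtained with far less training than learning from scratch.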