It is crucial for embedded systems to adapt to the dynamics of open environments. This adaptation process becomes especially challenging in the context of multiagent systems. In this paper, we argue that multiagent meta-level control is an effective way to determine when this adaptation process should be done and how much effort should be invested in adaptation as opposed to continuing with the current action plan. We use a reinforcement-learning-based local optimization algorithm within each agent to learn multiagent meta-level control policies in a decentralized fashion. These policies allow each agent to adapt to changes in environmental conditions while reorganizing the underlying multiagent network when needed. We then augment each agent with a heuristic rule-based algorithm that uses information provided by the reinforcement learning algorithm to resolve conflicts among agent policies from a local perspective at both the learning and execution stages. We evaluate this mechanism in the context of a multiagent tornado-tracking application called NetRads. Empirical results show that adaptive multiagent meta-level control significantly improves the performance of the tornado-tracking network for a variety of weather scenarios.

Each agent has a lower control level, the object level (see Fig. 1), which involves the agent making decisions about what domain-level problem solving to perform in the current context and how to coordinate with other agents to complete tasks requiring joint effort. Each agent also has a higher control level, meta-level control (see Fig. 1), which involves the agent making decisions about deliberation control itself, including whether to deliberate, how many resources to dedicate to this deliberation, and what specific deliberative control to perform in the current context. In the context of MAS, the meta-level component of each agent should have a multiagent policy that coordinates its deliberation with other agents to account for what could happen as deliberation (and execution) plays out. Figure 2 describes the interaction among the meta-level control components of multiple agents. Meta-level control in complex agent-based settings was explored in previous work [2,3,33,34,41], where a sophisticated architecture that could reason about