Chatbots are expected to be knowledgeable across multiple domains, e.g., daily chit-chat, exchange of information, and grounding in emotional situations. To effectively measure the quality of such conversational agents, a model-based automatic dialogue evaluation metric (ADEM) is expected to perform well across multiple domains. Despite significant progress, existing ADEMs tend to perform well only on data similar to their training data, i.e., they overfit to their training domains. This calls for a domain-generalized metric that can assess dialogues with diverse characteristics. To this end, we propose a Panel of Experts (PoE), a multitask network that consists of a shared transformer encoder and a collection of lightweight adapters. The shared encoder captures general dialogue knowledge across domains, while each adapter specializes in one specific domain and serves as a domain expert. To validate the idea, we construct a high-quality multi-domain dialogue dataset by leveraging data augmentation and pseudo-labeling. The PoE network is comprehensively assessed on 16 dialogue evaluation datasets spanning a wide range of dialogue domains. It achieves state-of-the-art performance in terms of mean Spearman correlation across all the evaluation datasets, exhibits better zero-shot generalization than existing state-of-the-art ADEMs, and adapts easily to new domains via few-shot transfer learning.
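To make the shared-encoder-plus-adapters design concrete, the sketch below shows one plausible reading of the architecture described above: a frozen-size shared transformer encoder, one lightweight bottleneck adapter per domain, and a per-domain scoring head. All names here (`PanelOfExperts`, `DomainAdapter`, the choice of `roberta-base`, and applying the adapter once on the pooled representation rather than inside every encoder layer) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal, illustrative sketch of the PoE idea: a shared transformer encoder
# (general dialogue knowledge) plus one lightweight adapter per domain expert.
# Names and design details are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel

class DomainAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(self.act(self.down(hidden)))

class PanelOfExperts(nn.Module):
    """Shared encoder across domains; each adapter serves as a domain expert."""
    def __init__(self, domains, encoder_name: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.adapters = nn.ModuleDict({d: DomainAdapter(hidden) for d in domains})
        # One scalar quality-score head per domain expert.
        self.heads = nn.ModuleDict({d: nn.Linear(hidden, 1) for d in domains})

    def forward(self, input_ids, attention_mask, domain: str) -> torch.Tensor:
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]          # [CLS]-token representation
        expert = self.adapters[domain](pooled)        # route through domain expert
        return self.heads[domain](expert).squeeze(-1)  # dialogue quality score
```

Under this reading, scoring a batch of dialogues through a given expert is a single call, e.g. `model(input_ids, attention_mask, domain="chit-chat")`. How an unseen domain is routed among the experts at inference time is not specified by the abstract, so that choice is left open here.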