Many methods have been proposed to automatically extend knowledge bases, but the vast majority of these methods focus on finding plausible missing facts, and knowledge graph triples in particular. In this paper, we instead focus on automatically extending ontologies that are encoded as a set of existential rules. In particular, our aim is to find rules that are plausible but which cannot be deduced from the given ontology. To this end, we propose a graph-based representation of rule bases. The nodes of the considered graphs correspond to predicates, and they are annotated with vectors encoding our prior knowledge about the meaning of these predicates. These vectors may be obtained from external resources such as word embeddings, or they may be estimated from the rule base itself. Edges connect predicates that co-occur in the same rule, and their annotations reflect the types of rules in which the predicates co-occur. We then use a neural network model based on Graph Convolutional Networks (GCNs) to refine the initial vector representations of the predicates, obtaining representations that are predictive of which rules are plausible. We present experimental results that demonstrate the strong performance of this method.
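To make the described architecture concrete, the following PyTorch sketch shows one way such a model could be realised: a relational GCN over a predicate graph, with one adjacency slice per rule type playing the role of the edge annotations. This is a minimal illustration under simplifying assumptions, not the paper's exact model; in particular, it scores a candidate rule from a single body predicate and a single head predicate, and all names (`RelationalGCNLayer`, `RulePlausibilityScorer`, `num_edge_types`, and so on) are hypothetical.

```python
import torch
import torch.nn as nn


class RelationalGCNLayer(nn.Module):
    """One message-passing layer over the predicate graph.

    Each edge type r (a kind of rule co-occurrence) gets its own weight
    matrix, and a separate self-loop transform preserves each predicate's
    own features.
    """

    def __init__(self, in_dim, out_dim, num_edge_types):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_edge_types)]
        )
        self.self_weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        # h:   (num_predicates, in_dim) current predicate vectors
        # adj: (num_edge_types, num_predicates, num_predicates)
        #      row-normalised adjacency, one slice per rule type
        out = self.self_weight(h)
        for r, w in enumerate(self.rel_weights):
            out = out + adj[r] @ w(h)  # aggregate neighbours per rule type
        return torch.relu(out)


class RulePlausibilityScorer(nn.Module):
    """Refines the initial predicate vectors with two GCN layers, then
    scores a candidate rule from the refined embeddings of the predicates
    it mentions (here simplified to one body and one head predicate)."""

    def __init__(self, in_dim, hid_dim, num_edge_types):
        super().__init__()
        self.gcn1 = RelationalGCNLayer(in_dim, hid_dim, num_edge_types)
        self.gcn2 = RelationalGCNLayer(hid_dim, hid_dim, num_edge_types)
        self.score = nn.Linear(2 * hid_dim, 1)

    def forward(self, h, adj, body_idx, head_idx):
        h = self.gcn2(self.gcn1(h, adj), adj)
        pair = torch.cat([h[body_idx], h[head_idx]], dim=-1)
        return torch.sigmoid(self.score(pair)).squeeze(-1)
```

In this sketch, `h` would be initialised with the prior predicate vectors (e.g. word embeddings of the predicate names), and the scorer would be trained to assign high probability to rules held out from the rule base and low probability to corrupted ones; the training objective is likewise an assumption on our part.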