Radial-basis-function networks are traditionally defined for sets of vector-based observations. In this short paper, we reformulate such networks so that they can be applied to adjacency-matrix representations of weighted, directed graphs that encode the relationships between object pairs. We restate the sum-of-squares objective function so that it depends purely on entries of the adjacency matrix. From this objective function, we derive a gradient-descent update for the network weights. We also derive a gradient update that simulates the repositioning of the radial-basis prototypes and changes in the radial-basis prototype parameters. An important property of our radial-basis-function networks is that they are guaranteed to yield the same responses as conventional radial-basis networks trained on a corresponding vector realization of the relationships encoded by the adjacency matrix. Such a vector realization only needs to provably exist for this property to hold, which occurs whenever the relationships correspond to distances from some arbitrary metric applied to a latent set of vectors. We therefore completely avoid having to actually construct vectorial realizations via multi-dimensional scaling, which ensures that the underlying relationships are preserved exactly.

Conventional radial-basis-function (RBF) networks have a feed-forward architecture that consists of two layers: a non-linear hidden layer followed by a linear output layer. The hidden-layer processing elements operate on the weighted distance between a vector observation and some other vector, which is referred to as either an RBF prototype or an RBF center. The RBF prototypes specify the position of a local receptive field. The response of each processing element in this network layer is a non-linear, radially symmetric function of this observation-prototype distance.
The hidden-layer responses are then weighted and, usually, linearly combined at the output layer.

These networks, as conceived by Broomhead and Lowe [1], rely on a vector-based paradigm: they are applied solely to feature-based vectorial observations [2]. Here, we provide a reformulation of RBF networks so that they can be applied to adjacency-matrix representations of weighted, directed graphs. The adjacency representations are symmetric, positive, anti-reflexive matrices of relationship-based observations between object pairs. Such observations are prevalent in a number of problem domains, as investigators may be unable to extract meaningful features from individual observations yet can easily codify the relationships between them. Pertinent examples include assessing shape similarity and quantifying the relationships between gene-ontology products.

The graph-based RBF networks that we consider have a feed-forward architecture analogous to that of vector-based RBF networks. That is, the hidden layer non-linearly transforms the weighted relationship-based observations while the output layer weights and linearly combines those transformed results. There are, however,...
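The equivalence between the graph-based and vector-based formulations can be illustrated with a short sketch. The key observation is that Gaussian hidden-layer responses depend on the data only through observation-prototype distances, and such distances can be recovered from a squared-distance (adjacency-derived) matrix alone. The sketch below assumes Gaussian basis functions and a prototype expressed as a convex combination of the objects, and uses the standard relational identity from relational clustering; these are illustrative assumptions, not necessarily the exact derivation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent vector realization: used here only to build the adjacency matrix
# and to verify the equivalence. The graph formulation never sees X.
X = rng.normal(size=(6, 2))                          # 6 objects, 2-D latent vectors
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
A = np.sqrt(D2)                                      # symmetric, anti-reflexive adjacency matrix

# Hypothetical prototype written as a convex combination of the objects
# (the coefficients sum to one, as the relational identity requires).
alpha = np.array([0.5, 0.5, 0.0, 0.0, 0.0, 0.0])

# Vector-based route: explicit prototype, explicit squared distances.
c = alpha @ X
d_vec = ((X - c) ** 2).sum(-1)

# Relational route: the same squared distances from adjacency entries alone,
#   ||x_i - c||^2 = (D2 @ alpha)_i - 0.5 * alpha @ D2 @ alpha,
# where D2 = A ** 2. No vector realization is constructed.
D2_from_A = A ** 2
d_rel = D2_from_A @ alpha - 0.5 * alpha @ D2_from_A @ alpha

assert np.allclose(d_vec, d_rel)  # identical responses, as the text claims

# Gaussian hidden-layer responses and a linear output layer.
sigma = 1.0
h = np.exp(-d_rel / (2 * sigma ** 2))  # hidden-layer responses
w = rng.normal(size=h.shape)           # output-layer weights
y = w @ h                              # network output
```

Because `d_rel` is built entirely from entries of `A`, gradient updates with respect to the weights, and simulated updates of the prototype coefficients `alpha`, can likewise be expressed in adjacency-matrix terms, which is the route the paper's derivations take.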