Abstract: Where-What Networks (WWNs) are a series of developmental networks for attention to, and recognition of, objects in complex visual scenes. One of the most critical challenges of autonomous development is task non-specificity: the network must learn a variety of open-ended task skills without predefined tasks. How, then, does a brain-like network develop skills for object relations that generalize through implicit, symbol-like rules? A preliminary scheme of uniform synaptic maintenance, which operates across a neuron's sensory and motor domains, was proposed in our WWN-9. In the new work reported here, we show that cross-domain and within-domain synaptic maintenance achieves better generalization than the uniform synaptic maintenance scheme. This generalization enables the WWN to automatically discover symbol-like but implicit rules: detecting object groups from new combinations of object locations that were never observed. By "symbol-like but implicit rules", we mean that the developmental program contains no symbols or explicit rules; instead, symbol-like concepts (location, type) and an implicit rule (two objects of specific types must be present concurrently to form a group) emerge as firing patterns of the motor area and are used for control. Moreover, the process of synaptic maintenance corresponds to the genesis (and adaptation) of cell connections, and our model autonomously develops the Y area into two subareas, an early area and a later area, responsible for pattern recognition and symbolic reasoning, respectively.
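To make the contrast between the two maintenance schemes concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes a per-synapse smoothed deviation statistic and a simple cut-ratio threshold, and it only illustrates the difference between pooling that statistic uniformly across a neuron's sensory and motor synapses (as in WWN-9) versus referencing each synapse against its own domain. All function names, parameters, and threshold values here are illustrative assumptions.

```python
import numpy as np

def maintenance_mask(deviations, domain_slices, uniform=False, cut_ratio=1.5):
    """Illustrative synapse-maintenance decision for a single neuron.

    deviations    : per-synapse smoothed deviation (assumed statistic, e.g. |x_i - w_i|)
    domain_slices : dict mapping domain name ('sensory', 'motor') to index slices
    uniform       : True  -> uniform scheme: one pooled reference average (WWN-9 style)
                    False -> within-domain scheme: a reference average per domain
    Returns a boolean mask: True = keep the synapse, False = trim it.
    """
    keep = np.ones_like(deviations, dtype=bool)
    if uniform:
        # Uniform scheme: every synapse is compared to one pooled average deviation.
        pooled = deviations.mean()
        keep = deviations <= cut_ratio * pooled
    else:
        # Within-domain scheme: each domain supplies its own reference average, so
        # statistics of one domain do not mask unstable synapses in the other.
        for name, sl in domain_slices.items():
            ref = deviations[sl].mean()
            keep[sl] = deviations[sl] <= cut_ratio * ref
    return keep

# Hypothetical usage: 8 sensory synapses followed by 4 motor synapses.
dev = np.abs(np.random.randn(12)) * np.r_[np.full(8, 0.2), np.full(4, 1.0)]
mask = maintenance_mask(dev, {"sensory": slice(0, 8), "motor": slice(8, 12)})
```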