In recent years, both the scientific community and industry have focused on moving computational resources away from centralised cloud data centres towards decentralised computing closer to the data source, the so-called "edge" of the network. This shift is driven by the fact that the cloud alone cannot support the demands of future networks, given the massive growth of new, time-critical applications such as self-driving vehicles, Augmented Reality/Virtual Reality, advanced robotics and critical remote control of smart Internet-of-Things applications. While decentralised edge computing will form the backbone of future heterogeneous networks, it is still in its infancy and no comprehensive platform currently exists. In this article, we propose a novel decentralised edge architecture, called OMNIBUS, which enables the continuous distribution of computational capacity to end-devices in different localities by exploiting moving vehicles as storage and computation resources. Scalability and adaptability are the main features that differentiate the proposed solution from existing edge computing models. The proposed solution has the potential to scale indefinitely, leading to a significant increase in network speed. The OMNIBUS solution rests on developing two predictive models: (i) a model that learns the timing and direction of vehicular movements to ascertain the computational capacity available in a given locale, and (ii) a theoretical framework for sequential-to-parallel conversion in learning, optimisation and caching under the contingent circumstances created by vehicles in motion.

INDEX TERMS Edge computing, 5G, 6G, V2X, ubiquitous AI, distributed AI, multi-access edge computing (MEC).
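
The abstract only names the first predictive model, so, purely for illustration, the minimal Python sketch below shows one way such an estimator could be structured: historical observations of vehicles passing through a locale are aggregated per hour of day and exponentially smoothed into an expected compute budget. All names and units here (VehicleObservation, LocaleCapacityPredictor, cpu_gflops, GFLOP-minutes, the smoothing factor alpha) are assumptions made for this sketch and are not part of the OMNIBUS design described in the article.

    from collections import defaultdict
    from dataclasses import dataclass


    @dataclass
    class VehicleObservation:
        locale_id: str        # hypothetical cell or road segment where the vehicle was seen
        hour_of_day: int      # 0-23, a coarse proxy for the timing of movements
        cpu_gflops: float     # compute the vehicle offered while present (assumed unit)
        dwell_minutes: float  # how long the vehicle stayed in the locale


    class LocaleCapacityPredictor:
        """Exponentially smooths the total vehicle-provided capacity observed
        in each (locale, hour-of-day) slot across successive days."""

        def __init__(self, alpha: float = 0.3):
            self.alpha = alpha  # smoothing weight given to the most recent day
            self.estimates: dict[tuple[str, int], float] = {}

        def update_day(self, observations: list[VehicleObservation]) -> None:
            # Aggregate one day of observations into total GFLOP-minutes per slot.
            totals: dict[tuple[str, int], float] = defaultdict(float)
            for obs in observations:
                totals[(obs.locale_id, obs.hour_of_day)] += obs.cpu_gflops * obs.dwell_minutes
            # Blend the new day's totals into the running per-slot estimates.
            for key, total in totals.items():
                previous = self.estimates.get(key, total)
                self.estimates[key] = (1 - self.alpha) * previous + self.alpha * total

        def predict(self, locale_id: str, hour_of_day: int) -> float:
            """Expected GFLOP-minutes of vehicular capacity for the given slot."""
            return self.estimates.get((locale_id, hour_of_day), 0.0)


    if __name__ == "__main__":
        predictor = LocaleCapacityPredictor()
        monday = [
            VehicleObservation("junction-42", 8, cpu_gflops=50.0, dwell_minutes=6.0),
            VehicleObservation("junction-42", 8, cpu_gflops=30.0, dwell_minutes=4.0),
            VehicleObservation("junction-42", 17, cpu_gflops=80.0, dwell_minutes=10.0),
        ]
        predictor.update_day(monday)
        print(predictor.predict("junction-42", 8))   # morning-rush estimate: 420.0
        print(predictor.predict("junction-42", 17))  # evening-rush estimate: 800.0

A per-slot exponential smoother is used here only because it captures the "timing" aspect with minimal machinery; the model envisaged in the article would also need the direction of vehicular movements to anticipate which locale a vehicle enters next.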