The expectation-based 4D approach to dynamic machine vision, which exploits integral spatiotemporal models of objects in the real world, is discussed in the application domains of unmanned ground and air vehicles. Over the last half decade the method has demonstrated superior performance in autonomous road vehicle guidance with three different vans and buses, in guiding an AGV on the factory floor, and in completely autonomous relative state estimation for a twin-turboprop aircraft on its landing approach to a runway without any external support. In all application areas a small set of conventional microcomputers was sufficient to realize the system, which demonstrates the computational efficiency of the method, combining conventional engineering-type algorithms and artificial intelligence components in a well-balanced way. The modularity of the approach is demonstrated in a simulation setup serving both the ground- and air-vehicle applications. Experimental results in both areas are discussed.
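The expectation-based character of such an approach rests on recursive state estimation: a dynamical model predicts object states and the image features they should produce over the next video cycle, and the measured features then correct the prediction. The following is a minimal conceptual sketch of that prediction–correction loop in extended-Kalman-filter form; the state layout, the constant-velocity model, the sampling time, and all numbers are illustrative assumptions, not the system described in this paper.

```python
# Minimal sketch of an expectation-based prediction/correction cycle
# (extended Kalman filter form) for recursive state estimation from
# image measurements.  Models and numbers below are hypothetical.
import numpy as np

def predict(x, P, F, Q):
    """Propagate state x and covariance P through the dynamical model F
    over one video cycle (process noise covariance Q)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z, h, H, R):
    """Correct the predicted state with measured image features z.
    h(x) gives the expected features; H is its Jacobian; R is the
    measurement noise covariance."""
    y = z - h(x_pred)                      # innovation: measured - expected
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example (hypothetical): lateral offset and lateral velocity of a
# vehicle relative to a lane, observed only through the offset.
dt = 0.04                                  # assumed 25 Hz video cycle
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
Q = np.diag([1e-4, 1e-3])
H = np.array([[1.0, 0.0]])
R = np.array([[1e-2]])
h = lambda x: H @ x

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in ([0.05], [0.06], [0.08]):         # simulated feature measurements
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, np.array(z), h, H, R)
print(x)                                   # estimated offset and velocity
```

Because the expected features are available before the measurement, such a loop also tells the feature-extraction stage where to search in the next image, which is one reason a small set of conventional microcomputers can suffice.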