The human brain is never at "rest": its activity fluctuates continuously, transitioning from one brain state (a whole-brain pattern of activity) to another. Network control theory offers a framework for quantifying the effort, or energy, associated with these transitions. One branch of control theory that is especially useful in this context is optimal control, in which input signals selectively drive the brain into a target state. Typically, these inputs are delivered independently to the nodes of the network, with each input signal associated with exactly one node. Though convenient, this input strategy ignores the continuity of the cerebral cortex: geometrically, each region borders its spatial neighbors, allowing control signals, both exogenous and endogenous, to spread from their foci to nearby regions. Here, we adapt the network control model so that input signals have a spatial extent that decays exponentially from the input site. We show that this more realistic strategy exploits spatial dependencies in structural connectivity and activity to reduce the energy (effort) associated with brain state transitions. We further leverage these dependencies to explore near-optimal control strategies in which, on a per-transition basis, the number of input signals required for a given control task is reduced, in some cases by two orders of magnitude. This approximation yields network-wide maps of input-site density, which correspond closely to an existing database of functional, metabolic, genetic, and neurochemical maps. Ultimately, we not only propose a framework that is more efficient and more consistent with well-established principles of brain organization, but also posit neurobiologically grounded bases for optimal control.
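The core modeling idea above, replacing one-input-per-node control with inputs whose influence decays exponentially with distance from the input site, can be sketched in a minimal Python example. This is an illustration under toy assumptions (a random symmetric connectome, random 3D region coordinates, a hypothetical decay length `delta`, and a finite-horizon controllability-Gramian formulation of minimum control energy), not the paper's exact implementation:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 20  # toy number of brain regions (assumption)

# Toy symmetric structural connectivity, normalized and shifted
# so the linear dynamics x' = Ax are stable
A = rng.random((n, n))
A = (A + A.T) / 2
A = A / (np.abs(np.linalg.eigvalsh(A)).max() + 1) - np.eye(n)

# Random 3D region coordinates and pairwise Euclidean distances
xyz = rng.random((n, 3))
D = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)

# Spatially extended inputs: each input's effect on region j decays
# exponentially with distance from its focus i (delta is an assumption)
delta = 0.3
B_spatial = np.exp(-D / delta)
B_point = np.eye(n)  # conventional one-input-per-node strategy

def min_control_energy(A, B, x0, xT, T=1.0, steps=200):
    """Minimum input energy to steer x0 -> xT over [0, T],
    computed from the finite-horizon controllability Gramian
    W = integral_0^T e^{At} B B^T e^{A^T t} dt (Riemann sum)."""
    ts = np.linspace(0, T, steps)
    W = np.zeros_like(A)
    for t in ts:
        M = expm(A * t) @ B
        W += M @ M.T
    W *= T / steps
    v = xT - expm(A * T) @ x0  # deviation from the free evolution
    return v @ np.linalg.solve(W, v)

# Compare the two input strategies on one random state transition
x0, xT = rng.standard_normal(n), rng.standard_normal(n)
e_point = min_control_energy(A, B_point, x0, xT)
e_spatial = min_control_energy(A, B_spatial, x0, xT)
```

Both strategies yield a finite, positive transition energy here; the paper's claim concerns empirical connectomes and distances, where the spatially extended inputs reduce this energy.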