In this paper we describe a prototype audio user interface for a Global Positioning System (GPS) designed to allow mobile computer users to carry out a location task while their eyes, hands, or attention are otherwise engaged. Audio user interfaces for GPS have typically been designed to meet the needs of visually impaired users and generally (though not exclusively) employ speech audio. In contrast, our prototype system uses a simple form of non-speech, spatial audio. This paper analyses various candidate audio mappings of location and distance information, considering a range of tasks, design considerations, technological opportunities, and trade-offs. The findings from pilot evaluation experiments are reported. Finally, opportunities for improving the system and for future empirical testing are explored.