Neural devices have the capacity to enable users to regain abilities lost to disease or injury: for instance, a deep brain stimulator (DBS) that allows a person with Parkinson's disease to regain the ability to perform movements fluently, or a brain-computer interface (BCI) that enables a person with a spinal cord injury to control a robotic arm. While users recognize and appreciate these technologies' capacity to maintain or restore their capabilities, the neuroethics literature is replete with examples of concerns expressed about agentive capacities: A perceived lack of control over the movement of a robotic arm might alter one's sense of feeling responsible for that movement. Clinicians or researchers being able to record and access detailed information about a person's brain might raise privacy concerns. A disconnect between previous, current, and future understandings of the self might result in a sense of alienation. The ability to receive and interpret sensory feedback might change whether someone trusts the implanted device, or themselves. Inquiries into the nature of these concerns and how to mitigate them have produced scholarship that often selectively emphasizes a single issue, be it responsibility, privacy, authenticity, or trust. However, we believe that examining these ethical dimensions separately fails to capture a key aspect of the experience of living with a neural device. In exploring their interrelations, we argue that their mutual significance for neuroethical research can be adequately captured if they are described under the unified heading of agency. On these grounds, we propose an "Agency Map" that brings together these diverse neuroethical dimensions and their interrelations into a comprehensive framework. With this, we offer a theoretically grounded approach to understanding how these various dimensions are interwoven in an individual's experience of agency.