A man with a spinal-cord injury (right) prepares for a virtual cycle race in which competitors steer avatars using brain signals.

example. Moreover, researchers can already interpret a person's neural activity from functional magnetic resonance imaging scans at a rudimentary level (ref. 1): that the individual is thinking of a person, say, rather than a car.

It might take years or even decades until BCI and other neurotechnologies are part of our daily lives. But technological developments mean that we are on a path to a world in which it will be possible to decode people's mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals could communicate with others simply by thinking; and where powerful computational systems linked directly to people's brains aid their interactions with the world such that their mental and physical abilities are greatly enhanced.

Such advances could revolutionize the treatment of many conditions, from brain injury and paralysis to epilepsy and schizophrenia, and transform human experience for the better. But the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies.

It is crucial to consider the possible ramifications now.

The Morningside Group comprises neuroscientists, neurotechnologists, clinicians, ethicists and machine-intelligence engineers. It includes representatives from Google and Kernel (a neurotechnology start-up in Los Angeles, California); from international brain projects; and from academic and research institutions in the United States, Canada, Europe, Israel, China, Japan and Australia. We gathered at a workshop sponsored by the US National Science Foundation at Columbia University, New York, in May 2017 to discuss the ethics of neurotechnologies and machine intelligence.

We believe that existing ethics guidelines are insufficient for this realm (ref. 2). These include the Declaration of Helsinki, a statement of ethical principles first established in 1964 for medical research involving human subjects (go.nature.com/2z262ag); the Belmont Report, a 1979 statement crafted by the US National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research (go.nature.com/2hrezmb); and the Asilomar artificial intelligence (AI) statement of cautionary principles, published early this year and signed by business leaders and AI researchers, among others (go.nature.com/2ihnqac).

To begin to address this deficit, here we lay out recommendations relating to four areas of concern: privacy and consent; agency and identity; augmentation; and bias. Different nations and people of varying re...