Abstract. In this paper we describe a system that combines human input and automatic grasp planning for controlling an artificial hand, with applications in the area of hand neuroprosthetics. We consider the case where a user attempts to grasp an object using a robotic hand, but has no direct control over the hand posture. An automated grasp planner searches for stable grasps of the target object and shapes the hand accordingly, allowing the user to successfully complete the task. We rely on two methods for achieving the computational rates required for effective user interaction: first, grasp planning is performed in a hand posture subspace of highly reduced dimensionality; second, our system uses real-time input provided by the human user, further simplifying the search for stable grasps to the point where solutions can be found at interactive rates. We demonstrate our approach on a number of different hand models and target objects, in both real and virtual environments.
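The reduced-dimensionality posture subspace mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the joint count, basis vectors, and function names are all hypothetical. The idea is that a full hand posture is expressed as a mean posture plus a linear combination of a few basis vectors, so the planner searches over only the combination amplitudes rather than every joint angle.

```python
# Hypothetical sketch: hand posture as a point in a low-dimensional subspace.
# All numbers and names here are illustrative assumptions, not from the paper.

NUM_JOINTS = 20  # e.g. a 20-DOF dexterous hand model

# Mean posture and two basis vectors spanning a 2-D posture subspace.
mean_posture = [0.5] * NUM_JOINTS
basis = [
    [0.1] * NUM_JOINTS,                              # e.g. overall flexion
    [0.05 * (-1) ** j for j in range(NUM_JOINTS)],   # e.g. finger spread
]

def posture_from_amplitudes(amplitudes):
    """Map a low-dimensional point (a1, a2, ...) to a full joint posture:
    p = mean + a1*e1 + a2*e2 + ...  The planner optimizes only the
    amplitudes, reducing the search from NUM_JOINTS to len(basis) variables."""
    posture = list(mean_posture)
    for a, e in zip(amplitudes, basis):
        for j in range(NUM_JOINTS):
            posture[j] += a * e[j]
    return posture

# A candidate grasp is then just a point in the 2-D subspace.
candidate = posture_from_amplitudes([1.0, -2.0])
```

A grasp planner built on this representation can evaluate grasp quality at sampled subspace points and refine promising candidates, which is what makes interactive-rate search feasible.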