This paper describes an autonomous framework for determining a robotic manipulator's optimal actions in real time when operating in close physical proximity to a human in a shared workspace. The framework allows the robot to purposefully avoid physical and mental conflicts with a human companion while each agent performs tasks toward its own, separately assigned goals. We pose scenarios in which the human does not need to divert attention to internally model the robot's behavior, or to track or acknowledge the robot's actions during operations. The robot is meant to unobtrusively 'work around' the human rather than collaborate directly on task completion. The distinction of this work is its use of human intent prediction (HIP) as a key factor in robot action selection for task-level planning. We model HIP with a Markov Decision Process (MDP): human state data are input to the HIP MDP policy, which outputs the predicted human intent, defined as the most likely in-progress and future action choice(s) the human is or will be pursuing to complete mission goals. The predicted human intent is then used by a second MDP, the robot action-choice (RAC) MDP, to determine the optimal policy over the robot's actions. Our framework integrates the HIP MDP and RAC MDP to support autonomous close-proximity operations, and we propose offline and online (scaled) formulations of both MDPs. During real-time policy execution, the optimal robot action, once determined, is passed to the robot's path planner, which translates the task-level command into a trajectory and motion primitives for a low-level controller to enact. We evaluate the HIP MDP in simulation and find that the policy output is consistent and smooth under small changes in parameter values.
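To make the chained decision pipeline concrete, the following is a minimal sketch of one real-time decision step, assuming both MDPs have already been solved offline into tabular policies (e.g., by value iteration). All state, intent, and action names here are hypothetical placeholders for illustration; the paper's actual state spaces, rewards, and solvers differ.

```python
# Sketch of one HIP -> RAC decision step with offline-solved tabular policies.
# All identifiers below are hypothetical, not the paper's implementation.
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical discrete human states, human intents, and robot task-level
# actions, standing in for the paper's state and action spaces.
HUMAN_STATES = ["at_bin_A", "at_bin_B", "reaching_shared"]
HUMAN_INTENTS = ["assemble_A", "assemble_B", "fetch_shared_part"]
ROBOT_ACTIONS = ["work_zone_A", "work_zone_B", "wait", "retreat"]

# HIP MDP policy: maps observed human state to predicted human intent.
hip_policy: Dict[str, str] = {
    "at_bin_A": "assemble_A",
    "at_bin_B": "assemble_B",
    "reaching_shared": "fetch_shared_part",
}

# RAC MDP policy: maps (robot state, predicted human intent) to the
# robot's task-level action, steering the robot away from conflict.
rac_policy: Dict[Tuple[str, str], str] = {
    ("idle", "assemble_A"): "work_zone_B",       # stay out of the human's zone
    ("idle", "assemble_B"): "work_zone_A",
    ("idle", "fetch_shared_part"): "wait",       # yield the shared region
}

@dataclass
class TaskCommand:
    """Task-level command handed to the path planner, which would translate
    it into a trajectory and motion primitives for the low-level controller."""
    action: str

def decide(human_state: str, robot_state: str) -> TaskCommand:
    """One real-time decision step: predict human intent, then select the
    robot's optimal task-level action conditioned on that prediction."""
    predicted_intent = hip_policy[human_state]                    # HIP MDP
    robot_action = rac_policy[(robot_state, predicted_intent)]    # RAC MDP
    return TaskCommand(action=robot_action)

if __name__ == "__main__":
    cmd = decide(human_state="reaching_shared", robot_state="idle")
    print(cmd)  # TaskCommand(action='wait')
```

In this sketch the two policies are plain lookup tables, reflecting the offline formulation; an online (scaled) formulation would instead compute or adapt these policies at run time over a reduced state space.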