Area V6A encodes hand configurations for grasping objects (Fattori et al., 2010). The aim of the present study was to investigate whether V6A cells also encode three-dimensional objects, and the relationship between object encoding and grip encoding. Single neurons were recorded in V6A of two monkeys trained to perform two tasks. In the first task, the monkeys were required to passively view an object without performing any action on it. In the second task, the monkeys viewed an object at the beginning of each trial and then grasped that object in darkness. Five different objects were used. Both tasks revealed that object presentation activates ∼60% of V6A neurons, with about half of them displaying object selectivity. In the Reach-to-Grasp task, the majority of V6A cells discharged during both object presentation and grip execution, displaying selectivity for the object, for the grip, or in some cases for both. Although the incidence of neurons encoding grips was twice that of neurons encoding objects, object selectivity in single cells was as strong as grip selectivity, indicating that V6A cells were able to discriminate both the different objects and the different grips required to grasp them. Hierarchical cluster analysis revealed that the clustering of object-selective responses depended on the task requirements (view only or view to grasp) and followed a visual or a visuomotor rule, respectively. Object encoding in V6A thus reflects representations for action, useful for motor control during reach-to-grasp.