When we grasp and lift novel objects, we rely on visual cues and sensorimotor memories to predictively scale our finger forces and exert compensatory torques according to object properties. Recent work has shown that object appearance, previous force-scaling errors, and previous torque-compensation errors strongly influence our perception of object properties. However, the influence of visual geometric cues on the perception of object torque and weight in a grasp-to-lift task is poorly understood. Moreover, little is known about how visual cues, prior expectations, sensory feedback, and sensorimotor memories are integrated for anticipatory torque control and object perception. Here, 12 young and 12 elderly participants repeatedly grasped and lifted an object while trying to prevent it from tilting. Before each trial, we randomly repositioned both the object handle, which provided a geometric cue to the upcoming torque, and a hidden weight, which added an unforeseeable torque variation. Before lifting, participants indicated their torque expectations, and after each lift they reported the torque and weight they had experienced. Mixed-effect multiple regression models showed that visual shape cues governed anticipatory torque compensation, whereas sensorimotor memories played a lesser role. In contrast, the external torque and the compensation errors committed at lift-off mainly determined how object torque and weight were perceived. The modest effect of handle position differed between torque and weight perception. Explicit torque expectations also correlated with anticipatory torque compensation and torque perception. Our main findings generalized across both age groups. Our results suggest that inputs are weighted differently for action and for perception according to their reliability.