This paper presents a model-free optimization framework for the visual servoing of eye-in-hand manipulators in cluttered environments. Visual feedback is used to solve for a set of feasible trajectories that bring the robot end-effector to a target object at a previously untaught location, subject to a number of challenging constraints: whole-arm collisions, object occlusions, the robot's joint limits, and the camera's sensing limits. We propose a novel controller that exploits the natural by-products of the teach-by-showing process to help the robot navigate this non-convex space. By examining the user-demonstrated trajectories that lead up to the reference image, we use a combination of stochastic and classical optimization techniques to extract the relevant cost functions and constraints for servoing. We hypothesize that the user's sensory capabilities and knowledge of the workspace can be leveraged to alleviate the burden of modeling the system constraints explicitly. We verify this hypothesis through realistic experiments on a Barrett WAM 7-DOF manipulator equipped with a Sony XC-HR70 camera, demonstrating the comparative efficacy of this approach.
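To give a flavor of the kind of demonstration-informed stochastic search alluded to above, the following minimal Python sketch runs a cross-entropy-style optimization over joint configurations, treating proximity to user-demonstrated trajectories as a soft constraint alongside a visual-error-like cost. All names, costs, and parameters here (`corridor_penalty`, `image_error`, `cem_step`, the penalty weight) are hypothetical stand-ins for illustration only and are not the controller or cost-extraction procedure developed in this paper.

```python
import numpy as np

# Illustrative only: a cross-entropy-style search over joint configurations,
# where demonstrated trajectories induce a soft "corridor" penalty and the
# servoing objective is a stand-in for an image-feature error.

def corridor_penalty(q, demos, radius=0.3):
    """Penalize configurations far from any demonstrated configuration."""
    d = min(np.linalg.norm(q - qd) for demo in demos for qd in demo)
    return max(0.0, d - radius) ** 2

def image_error(q, q_goal):
    """Stand-in for a visual-feedback error evaluated at configuration q."""
    return np.linalg.norm(q - q_goal) ** 2

def cem_step(mean, std, demos, q_goal, n_samples=64, n_elite=8, rng=None):
    """One cross-entropy update toward low-cost, corridor-respecting configs."""
    if rng is None:
        rng = np.random.default_rng(0)
    samples = rng.normal(mean, std, size=(n_samples, mean.size))
    costs = [image_error(q, q_goal) + 10.0 * corridor_penalty(q, demos)
             for q in samples]
    elite = samples[np.argsort(costs)[:n_elite]]
    return elite.mean(axis=0), elite.std(axis=0) + 1e-3

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_goal = np.zeros(7)  # taught reference configuration (7-DOF arm)
    # Fake "demonstrations": noisy straight-line approaches to the goal.
    demos = []
    for _ in range(3):
        q_start = rng.normal(1.0, 0.1, 7)
        demos.append([(1 - t) * q_start + t * q_goal
                      for t in np.linspace(0.0, 1.0, 20)])
    mean, std = np.ones(7), 0.5 * np.ones(7)
    for _ in range(30):
        mean, std = cem_step(mean, std, demos, q_goal, rng=rng)
    print("final joint-space error:", np.linalg.norm(mean - q_goal))
```

The sketch only illustrates how demonstration data can shape a sampling-based optimizer; the paper's method additionally handles whole-arm collisions, occlusions, joint limits, and camera field-of-view constraints, none of which are modeled here.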