Robots are often required to inspect, navigate through, and/or interact with unstructured or dynamically changing environments. Adequate sensing of the environment is the enabling technology required to achieve these tasks. Given recent advances in technologies such as computational hardware and computer vision algorithms, camera-based vision systems have become a popular and rapidly growing sensor of choice for robots operating in uncertain environments.

Given some images of an environment, one of the first tasks is to detect and identify interesting features in the image. These features can be distinguished based on attributes such as color, texture, motion, and contrast. Textbooks such as [4,32,52,88,90,94,98,109] provide an excellent introduction to, and discussion of, methods for finding and tracking these attributes from image to image. This chapter assumes that some attributes can be used to identify an object of interest in an image, and that points on the object, known as "feature points," can be tracked from one image to another (a brief illustrative tracking sketch is given at the end of this section). By observing how these feature points move in time and space, control and estimation algorithms can be developed. The first section of this chapter describes image geometry methods using a single camera. The development of this geometry enables the subsequent sections of the chapter to focus on the control and estimation of the motion of a plane attached to an object.

The terminology visual servo control refers to the use of information from a camera directly in the feedback loop of a controller. The typical objective of most visual servo controllers is to force a hand-held camera to a Euclidean position defined by a static reference image. Yet many practical applications require a robotic system to move along a predefined or dynamically changing trajectory. For example, a human operator may predefine an image trajectory through a high-level interface, and this trajectory may need to be modified on the fly to respond to obstacles moving in and out of the environment. It is also well known that a regulating controller may produce erratic behavior and require excessive initial control torques if the initial error is large. The controllers in Section 3.3 therefore focus on the more general tracking problem, in which a robot end-effector is required to track a prerecorded, time-varying reference trajectory. To develop the controllers, a homography-based visual servoing approach is utilized. The motivation for using this approach is that the visual servo control problem can be incorporated into a Lyapunov-based control design strategy to overcome many practical and theoretical obstacles associated with more traditional, purely image-based approaches. Specifically, one of the challenges of this problem is that the translation error system is corrupted by an unknown depth-related parameter. By formulating a Lyapunov-based argument, an adaptive update law is developed to actively compensate for the unknown depth parameter. In addition, the presented ap...
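To illustrate the role such an adaptive update law can play, consider the following generic scalar sketch; it is not the specific error system developed later in the chapter, and the symbols $e$, $u$, $Y$, $\theta^{*}$, $k$, and $\gamma$ are introduced here purely for illustration. Suppose the tracking error $e(t)$ obeys
\[
\dot{e} = u + Y(t)\,\theta^{*},
\]
where $u$ is the control input, $Y(t)$ is a known, measurable regressor, and $\theta^{*}$ is an unknown constant playing the role of the depth-related parameter. Selecting the control and adaptive update law
\[
u = -k e - Y \hat{\theta}, \qquad \dot{\hat{\theta}} = \gamma Y e, \qquad k,\gamma > 0,
\]
together with the Lyapunov function $V = \tfrac{1}{2}e^{2} + \tfrac{1}{2\gamma}\tilde{\theta}^{2}$, where $\tilde{\theta} \triangleq \theta^{*} - \hat{\theta}$, yields
\[
\dot{V} = e\bigl(-k e - Y\hat{\theta} + Y\theta^{*}\bigr) - \tilde{\theta} Y e = -k e^{2} \leq 0,
\]
so $e \in \mathcal{L}_{2} \cap \mathcal{L}_{\infty}$ and the parameter estimate remains bounded even though $\theta^{*}$ is never measured; standard arguments (e.g., Barbalat's Lemma) then show $e(t) \rightarrow 0$ provided $Y(t)$ is bounded. The controllers in Section 3.3 follow this type of Lyapunov-based reasoning, with the unknown depth-related parameter in the translation error system taking the role of $\theta^{*}$.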
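As a concrete, purely illustrative complement to the feature-point assumption made at the start of this chapter, the following Python sketch uses the OpenCV library (an assumption; the chapter itself prescribes no particular software) to select corner-like feature points in one image and track them into a second image with pyramidal Lucas-Kanade optical flow. The file names and numeric parameters are placeholders.

import cv2

# Load two consecutive grayscale images of the scene (placeholder file names).
prev_img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Select up to 200 corner-like feature points in the first image.
prev_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Track the selected points into the second image with pyramidal Lucas-Kanade optical flow.
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, prev_pts, None, winSize=(21, 21), maxLevel=3)

# Keep only the successfully tracked points; these correspondences play the
# role of the "feature points" assumed by the image geometry in this chapter.
tracked = status.flatten() == 1
good_prev = prev_pts[tracked].reshape(-1, 2)
good_next = next_pts[tracked].reshape(-1, 2)
print("Tracked", len(good_next), "feature points between the two images.")

If the tracked points happen to lie on a planar patch of the object, routines such as cv2.findHomography can estimate the homography between the two views from these correspondences, which is the quantity exploited by the homography-based servoing approach discussed above.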