Autonomous operation of small UAVs in cluttered environments rests on three foundations: fast and accurate estimation of the vehicle's position in the world for control; obstacle detection and avoidance for safe flight; and real-time execution of all processing onboard the vehicle. This is challenging for micro air vehicles, since their limited payload demands small, lightweight, and low-power sensors and processing units, favoring vision-based solutions that run on small embedded computers with smartphone-grade processors. In this chapter, we present the JPL autonomous navigation framework for micro air vehicles, which addresses these challenges. Our approach enables power-up-and-go deployment in highly cluttered, GPS-denied environments, using an IMU and a single downward-looking camera for pose estimation, and a forward-looking stereo camera pair for disparity-based obstacle detection and avoidance. As an example of a high-level navigation task built on these autonomous capabilities, we introduce our approach to autonomous landing on elevated flat surfaces, such as rooftops, using only monocular vision inputs from the downward-looking camera.