We present a collection of sensorimotor models that, when paired with a custom mobile manipulation platform, collectively enable the autonomous deconstruction of debris piles to facilitate mobility in cluttered spaces. The models address the problems of visual debris segmentation, object selection, and visual grasp planning, and they provide proprioceptive grasp control and force-controlled object extraction to execute the resulting grasp plan. The object selector segments a debris pile into a set of parts that appear disjoint from one another and runs a rule-based decision program to select the object that appears easiest to extract. A grasp planner then identifies a grasping point and hand preshape at a location where the gripper configuration fits the shape of the object while also satisfying kinematic and collision-avoidance constraints. Our geometric grasp-prototype concept allows the planner to establish grasp suitability by fitting a set of shape-grasp primitives to a 2.5D depth image of the pile. The robot then applies the grasp reactively, pushing against the object until a preset resistance is met. Finally, an admittance controller guides object extraction, prescribing end-effector compliance along task-frame axes to minimize adverse forces and torques on the hand. We show experiments demonstrating the applicability of these models in isolation and in concert, including grasp tests conducted on objects representative of a human-scale urban environment.
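To illustrate the rule-based selection step, the sketch below scores each segmented part with hypothetical, hand-tuned rules (height in the pile, occluder count, graspable width) and picks the highest-scoring part. The rules, names, and weights are illustrative assumptions, not the paper's actual decision program.

```python
from dataclasses import dataclass

# Illustrative rule-based object selector. The rules and weights below are
# assumptions for exposition; the paper's decision program is not shown here.

@dataclass
class PileSegment:
    segment_id: int
    top_height_m: float       # height of the part's top surface within the pile
    occluder_count: int       # number of parts resting on top of this one
    graspable_width_m: float  # narrowest visible extent the gripper could span

def extraction_score(seg: PileSegment, max_gripper_span_m: float = 0.10) -> float:
    """Higher score = easier-looking extraction under these assumed rules."""
    if seg.graspable_width_m > max_gripper_span_m:
        return float("-inf")              # rule: part must fit the gripper at all
    score = 2.0 * seg.top_height_m        # rule: prefer parts near the top of the pile
    score -= 1.0 * seg.occluder_count     # rule: penalize buried parts
    return score

def select_object(segments: list[PileSegment]) -> PileSegment:
    return max(segments, key=extraction_score)

segments = [
    PileSegment(0, top_height_m=0.35, occluder_count=0, graspable_width_m=0.06),
    PileSegment(1, top_height_m=0.20, occluder_count=2, graspable_width_m=0.04),
]
print(select_object(segments).segment_id)  # -> 0
```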
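The reactive grasp application can be pictured as a guarded move: advance the preshaped hand along its approach axis until the measured resistance crosses a preset threshold, then close. The sketch below assumes a hypothetical robot and gripper interface (read_wrist_force, move_along, close) and an assumed threshold; it is a sketch of the idea, not the paper's control stack.

```python
import numpy as np

# Guarded-move sketch of the reactive grasp: push along the approach axis
# until a preset resistance is met, then close the hand. The robot and
# gripper interfaces (read_wrist_force, move_along, close) are hypothetical.

FORCE_THRESHOLD_N = 8.0  # assumed preset resistance signalling firm contact
STEP_M = 0.002           # assumed per-cycle advance along the approach axis

def reactive_grasp(robot, gripper, approach_axis: np.ndarray) -> None:
    approach_axis = approach_axis / np.linalg.norm(approach_axis)
    while True:
        force = robot.read_wrist_force()          # 3-vector in the task frame
        if abs(float(force @ approach_axis)) >= FORCE_THRESHOLD_N:
            break                                 # preset resistance reached
        robot.move_along(approach_axis, STEP_M)   # keep pushing on the object
    gripper.close()                               # close on confirmed contact
```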
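For the extraction phase, a per-axis admittance law of the form M dv/dt + B v = f_ext captures the idea of prescribed task-frame compliance: low damping on compliant axes lets the hand yield to reaction wrenches, while high damping keeps the remaining axes stiff. The sketch below is a minimal decoupled version with assumed gains, not the authors' controller.

```python
import numpy as np

# Minimal decoupled admittance sketch: per task-frame axis, integrate
# M * dv/dt + B * v = f_ext to turn measured wrenches into a commanded
# end-effector twist. All gains below are illustrative assumptions.

class AdmittanceController:
    def __init__(self, inertia, damping, dt: float):
        self.M = np.asarray(inertia, dtype=float)  # virtual inertia, one per axis (6,)
        self.B = np.asarray(damping, dtype=float)  # virtual damping, one per axis (6,)
        self.dt = dt                               # control period [s]
        self.v = np.zeros(6)                       # commanded twist [vx vy vz wx wy wz]

    def step(self, wrench) -> np.ndarray:
        """Map a measured wrench [fx fy fz tx ty tz] to a commanded twist."""
        f = np.asarray(wrench, dtype=float)
        dv = (f - self.B * self.v) / self.M        # explicit Euler step per axis
        self.v = self.v + self.dt * dv
        return self.v

# Compliant along the pull (x) axis and about all rotation axes, stiff in
# y/z so the extraction direction is maintained (assumed gain pattern).
ctrl = AdmittanceController(
    inertia=[2.0, 2.0, 2.0, 0.1, 0.1, 0.1],
    damping=[20.0, 400.0, 400.0, 1.0, 1.0, 1.0],
    dt=0.01,
)
twist = ctrl.step([5.0, 0.5, -0.3, 0.02, 0.0, 0.01])
```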