Research into computer vision techniques has far outpaced the development of interfaces (such as APIs) that make those techniques accessible, especially to developers who are not experts in the field. We present a new interface, specifically for segmentation methods, designed to be friendly to application developers while retaining sufficient power and flexibility to solve a wide variety of problems. The interface presents segmentation at a level above individual algorithms, using a task-based description derived from definitions of low-level segmentation. We show that, through interpretation, this description can be used to invoke an appropriate method that produces the developer's requested result. Our proof-of-concept implementation interprets the task-based description and invokes one of six segmentation methods with automatically derived parameters, which we demonstrate on a range of segmentation tasks. Finally, we discuss how the concepts presented for segmentation may extend to other computer vision problems.
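To make the idea of a task-based, algorithm-agnostic interface concrete, the following is a minimal illustrative sketch. All names here (`SegmentationTask`, `select_method`, and the method labels) are hypothetical and invented for illustration; they are not taken from the paper's actual implementation. The sketch shows how a declarative task description could be interpreted to choose a concrete segmentation method without the developer ever naming an algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SegmentationTask:
    """Hypothetical developer-facing description of a segmentation goal.

    The developer states *what* they want segmented, not *how*;
    the interpreter maps this description onto a concrete method.
    """
    target: str                          # e.g. "foreground" or "regions"
    expected_regions: Optional[int] = None   # known region count, if any
    intensity_homogeneous: bool = True       # are regions uniform in intensity?

def select_method(task: SegmentationTask) -> str:
    """Interpret the task description and pick a method (labels illustrative).

    A real interpreter would also derive the chosen method's parameters
    from the description; here we only return a method name.
    """
    if task.target == "foreground" and task.intensity_homogeneous:
        # A single homogeneous object against a background suits thresholding.
        return "threshold"
    if task.expected_regions is not None:
        # A known region count suits clustering-style segmentation.
        return "clustering"
    if not task.intensity_homogeneous:
        # Textured or non-uniform regions suit graph-based grouping.
        return "graph_based"
    # Default fallback for homogeneous multi-region tasks.
    return "region_growing"

# Example: the developer describes the task; the interface chooses the method.
task = SegmentationTask(target="foreground")
print(select_method(task))  # prints "threshold"
```

The point of the sketch is the separation of concerns: the dataclass captures the task at the level of the problem domain, and `select_method` encapsulates the mapping from description to algorithm, which is exactly the role the interpreted interface plays in the abstract.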