The efficient computation of viewpoints under various system and process constraints is a common challenge faced by any robot vision system executing a vision task. Although fundamental research has provided sound solutions to this problem, a holistic framework that formally describes it, accounts for the heterogeneity of robot vision systems, and offers an integrated solution is still lacking. Hence, this publication formulates viewpoint generation as a geometric problem and introduces a generalized theoretical framework based on Feature-Based Constrained Spaces (C-spaces) as the backbone for solving it. A C-space can be understood as the topological space spanned by a viewpoint constraint, i.e., the region in which a sensor can be positioned to acquire a feature while fulfilling that constraint. The present study demonstrates that many viewpoint constraints can be efficiently formulated as C-spaces, providing geometric, deterministic, and closed solutions. The introduced C-spaces are characterized using generic domain and viewpoint constraint models to ease the transfer of the framework to different applications and robot vision systems. The effectiveness and efficiency of the introduced concepts are verified in a simulation-based scenario and validated on a real robot vision system comprising two different sensors.
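As a minimal illustration of this definition (a sketch under assumptions, not the paper's own notation: the symbols $f$, $v$, and $g_i$, as well as the combination of individual C-spaces by intersection, are introduced here only for clarity), a single C-space and the resulting admissible viewpoint region could be written as
\begin{align}
  % C-space spanned by the i-th viewpoint constraint for a feature f
  \mathcal{C}_i(f) &= \left\{\, v \in SE(3) \;\middle|\; g_i(v, f) \text{ holds} \,\right\}, \\
  % Viewpoints fulfilling all n constraints simultaneously
  \mathcal{C}(f)   &= \bigcap_{i=1}^{n} \mathcal{C}_i(f),
\end{align}
where $f$ denotes the feature to be acquired, $v$ a candidate sensor pose, and $g_i$ the $i$-th viewpoint constraint (e.g., sensor-, robot-, or process-related).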