Compressed sensing is an emerging technique that exploits the sparsity of a signal to sample below the Nyquist rate, and thus has great potential in low-complexity video sampling and compression applications, owing to the significant reduction in both sampling rate and computational complexity. However, most existing work on compressive video sampling (CVS) assumes real-valued, unquantized measurements and is therefore not directly applicable in engineering practice. Moreover, in many circumstances the total number of bits is constrained. How to trade off the number of measurements against the number of bits per measurement so as to maximize visual quality is thus a key challenge for CVS, and it has not yet been addressed in the literature. In this paper, we first present a novel distortion model that reveals the relationship between distortion, the sampling rate, and the quantization bit-depth. Using this model, we then propose a joint optimization algorithm from which the sampling rate and bit-depth can be readily derived. Finally, we present an adaptive, unidirectional CVS framework with rate-distortion (RD) optimized rate allocation, in which video characteristics extracted from partial sampling are used to allocate the required bits to each block, and video sampling and measurement quantization are then performed with the estimated sampling rate and bit-depth, respectively. Simulation results show that our proposal provides a considerable RD performance gain over the conventional method, with a 4.6 dB improvement in the average PSNR.
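As a rough illustration of the measurement/bit-depth trade-off described above, the following sketch enumerates feasible (sampling rate, bit-depth) pairs under a fixed bit budget and picks the pair that minimizes a modeled distortion. The distortion form used here, combining an exponentially decaying reconstruction-error term with the standard 2^(-2B) quantization-error term, and the constants c1, c2, c3 are assumptions for illustration only, not the model derived in this paper.

```python
# Hedged sketch: brute-force joint selection of the number of measurements (m)
# and quantization bit-depth (b) under a total bit budget.
# The distortion model below is a placeholder, NOT the paper's model:
# a CS reconstruction term decaying with m plus a 2^(-2b) quantization term.

import math


def modeled_distortion(m, b, c1=1.0, c2=0.01, c3=1.0):
    """Placeholder distortion D(m, b); c1, c2, c3 are hypothetical constants."""
    return c1 * math.exp(-c2 * m) + c3 * 2.0 ** (-2 * b)


def joint_allocate(total_bits, max_bit_depth=12, m_step=16):
    """Pick (m, b) minimizing the modeled distortion subject to m * b <= total_bits."""
    best = None
    for b in range(1, max_bit_depth + 1):
        # Number of measurements affordable at this bit-depth, rounded to m_step.
        m = (total_bits // b) // m_step * m_step
        if m == 0:
            continue
        d = modeled_distortion(m, b)
        if best is None or d < best[2]:
            best = (m, b, d)
    return best


if __name__ == "__main__":
    m, b, d = joint_allocate(total_bits=4096)
    print(f"measurements={m}, bit-depth={b}, modeled distortion={d:.4f}")
```

The sketch only conveys the structure of the problem: under a fixed budget, a higher bit-depth leaves fewer measurements and vice versa, so the two must be chosen jointly rather than independently.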