In approximate processing of stream data, most existing work focuses on approximating data as it arrives online. However, effective approximation must consider multiple aspects. In practice, customers submit requests with specific quality requirements (e.g., a maximum tolerable error), which raises a critical problem: online quality control is needed to meet the desired quality of service. Because continuously arriving data cannot be stored in full and must be processed immediately, acquiring knowledge about the data online is difficult, which in turn significantly affects the quality of results. To address these problems, we present an online adaptive approximate processing framework that tightly combines data learning, sampling, and quality control. We first design an online learning strategy for stream data. Based on the real-time learning results, we propose a dynamic sampling strategy that switches among sampling methods as the load changes. Finally, we present a double-check error-control strategy that monitors and corrects large errors. The operation modules are connected through online learning and feedback. Experiments on both synthetic and real-world datasets show that the proposed framework not only adapts to different data distributions but also provides customized error control.
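To make the load-dependent switching idea concrete, the following is a minimal sketch (not the paper's actual algorithm; the specific sampling methods, function names, and the load threshold are illustrative assumptions). It switches between Bernoulli sampling when load is light and fixed-size reservoir sampling when load exceeds a threshold, so memory stays bounded under bursts:

```python
import random

def reservoir_sample(items, k, rng):
    """Reservoir sampling: a uniform sample of fixed size k,
    giving bounded memory regardless of how many items arrive."""
    sample = []
    for i, x in enumerate(items):
        if i < k:
            sample.append(x)
        else:
            j = rng.randint(0, i)  # inclusive bounds
            if j < k:
                sample[j] = x
    return sample

def bernoulli_sample(items, p, rng):
    """Bernoulli sampling: keep each item independently with
    probability p (cheap per item; sample size grows with load)."""
    return [x for x in items if rng.random() < p]

def adaptive_sample(window, load_threshold, k, p, rng):
    """Hypothetical dispatcher: choose the sampling method based on
    the observed load (here, simply the size of the current window)."""
    if len(window) > load_threshold:
        return reservoir_sample(window, k, rng)  # heavy load: cap memory
    return bernoulli_sample(window, p, rng)      # light load: cheap pass
```

In this sketch, `load_threshold`, `k`, and `p` would be driven by the online learning results rather than fixed by hand.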