This paper investigates the agile optical satellite scheduling problem, which aims to arrange an observation sequence and determine observation actions for a set of observation tasks. Existing research mainly aims to maximize the number of completed tasks or the total priority of the completed tasks, but ignores the influence of the observation actions on imaging quality. Moreover, conventional exact methods and heuristic methods can hardly obtain a high-quality solution in a short time due to the complicated constraints and the considerable solution space of this problem. Therefore, this paper proposes a two-stage scheduling framework with two-stage deep reinforcement learning to address this problem. First, the scheduling process is decomposed into a task sequencing stage and an observation scheduling stage, and a mathematical model with complex constraints and two-stage optimization objectives is established to describe the problem. Then, a pointer network with a local selection mechanism and a rough pruning mechanism is constructed as the sequencing network to generate an executable task sequence in the task sequencing stage. Next, in the observation scheduling stage, a decomposition strategy splits the executable task sequence into multiple sub-sequences, and the observation scheduling process of these sub-sequences is modeled as a concatenated Markov decision process. A neural network is designed as the observation scheduling network to determine the observation actions for the sequenced tasks, and it is trained with the soft actor-critic algorithm. Finally, extensive experiments show that the proposed method, together with the designed mechanisms and strategy, outperforms the comparison algorithms in terms of solution quality, generalization performance, and computational efficiency.
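
To give a concrete picture of the two-stage flow summarized above (task sequencing, decomposition into sub-sequences, and observation scheduling), the sketch below wires the stages together with simple placeholder logic. All class and function names and the greedy rules are illustrative assumptions only; they stand in for the paper's pointer-network sequencer and SAC-trained observation scheduling network rather than reproducing them.

```python
# Minimal sketch of the two-stage scheduling pipeline (hypothetical names/logic).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Task:
    task_id: int
    priority: float
    window_start: float   # start of visible time window (s)
    window_end: float     # end of visible time window (s)

def sequence_tasks(tasks: List[Task]) -> List[Task]:
    """Stage 1 stand-in: the paper uses a pointer network with local selection
    and rough pruning; here tasks are simply ordered by window start."""
    return sorted(tasks, key=lambda t: t.window_start)

def decompose(sequence: List[Task], size: int) -> List[List[Task]]:
    """Split the executable task sequence into sub-sequences, mirroring the
    decomposition strategy that feeds the concatenated MDP."""
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

def schedule_observations(sub_seq: List[Task]) -> List[Tuple[int, float]]:
    """Stage 2 stand-in: the SAC-trained network would choose observation
    actions (e.g., start times); here each task is observed at its window start."""
    return [(t.task_id, t.window_start) for t in sub_seq]

if __name__ == "__main__":
    tasks = [Task(1, 0.9, 120.0, 180.0), Task(2, 0.5, 30.0, 90.0), Task(3, 0.7, 200.0, 260.0)]
    plan = [action
            for sub in decompose(sequence_tasks(tasks), size=2)
            for action in schedule_observations(sub)]
    print(plan)  # [(2, 30.0), (1, 120.0), (3, 200.0)]
```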