Bus timetable optimization is a key problem for reducing the operational cost of bus companies and improving service quality. Existing methods use exact or heuristic algorithms to optimize the timetable in an offline manner. In practice, however, passenger flow may change significantly over time, and timetables determined offline cannot adjust departure intervals to satisfy the changed passenger flow. To improve the online performance of bus timetables, we propose a Deep Reinforcement Learning-based bus Timetable dynamic Optimization method (DRL-TO). In this method, timetable optimization is formulated as a sequential decision problem. A Deep Q-Network (DQN) serves as the decision model that determines, at each minute of the service period, whether to dispatch a bus service, so departure intervals are set in real time according to passenger demand. We identify several new and useful state features for the DQN, including the load factor, the carrying capacity utilization rate, and the number of stranded passengers. Taking into account the interests of both the bus company and the passengers, we design a reward function that incorporates the full load rate, the empty load rate, passengers' waiting time, and the number of stranded passengers. Building on an existing method for calculating the carrying capacity, we develop a new technique to enhance the matching degree between carrying capacity and passenger demand at each bus station. Experiments demonstrate that, compared with timetables generated by the state-of-the-art memetic-algorithm-based bus timetable optimization approach (BTOA-MA), a Genetic Algorithm (GA), and manual scheduling, DRL-TO dynamically determines departure intervals based on real-time passenger flow, saving 8% of vehicles and reducing passengers' waiting time by 17% on average.
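
Below is a minimal sketch of the per-minute dispatch decision described above. It is illustrative only: the random-weight network stands in for a trained DQN, and all names (`q_values`, `make_state`, the feature ranges, and the simulated inputs) are hypothetical, not the authors' implementation. The state features mirror those named in the abstract; the reward function and training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained DQN: a tiny MLP with random weights.
# In DRL-TO, a network trained with the paper's reward function would be used.
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def q_values(state):
    """Return Q-values for the two actions: 0 = hold, 1 = dispatch a bus."""
    h = np.maximum(state @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

def make_state(minute, load_factor, capacity_util, stranded):
    """State features named in the abstract (scalings are illustrative)."""
    return np.array([minute / 1440.0, load_factor, capacity_util, stranded / 100.0])

timetable = []
for minute in range(360, 1380):  # e.g. a 06:00-23:00 service period
    # In practice these features come from real-time passenger-flow data;
    # here they are simulated placeholders.
    state = make_state(minute,
                       load_factor=rng.uniform(0, 1),
                       capacity_util=rng.uniform(0, 1),
                       stranded=rng.integers(0, 50))
    if int(np.argmax(q_values(state))) == 1:  # greedy action from the DQN
        timetable.append(minute)              # dispatch a bus this minute

print(f"{len(timetable)} departures scheduled")
```

Because the decision is re-evaluated every minute from the current state, the resulting departure intervals contract during peak demand and stretch during off-peak periods, which is the dynamic behavior the offline methods cannot provide.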