Many of our sequential activities require that behaviors be both precisely timed and properly ordered. This paper presents a neuro-computational model, based on the theoretical framework of Dynamic Neural Fields, that supports the rapid learning and flexible adaptation of coupled order-timing representations of sequential events. A key assumption is that elapsed time is encoded in the monotonic buildup of self-stabilized neural population activity representing event memory. A stable activation gradient over subpopulations carries the order and timing information of an entire sequence. With robotics applications in mind, we test the model in simulations of a learning-by-observation paradigm, in which the cognitive agent first memorizes the order and relative timing of observed events and subsequently recalls this information from memory, taking potential speed constraints into account. Model robustness is tested by systematically varying sequence complexity along both the temporal and the ordinal dimension. Furthermore, an adaptation rule is proposed that allows the agent to adjust a learned timing pattern to a changing temporal context in a single trial. The simulation results are discussed with respect to our goal of endowing autonomous robots with the capacity to efficiently learn complex sequences with time constraints, supporting more natural human-robot interactions.
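To make the amplitude-based encoding concrete, the sketch below is a deliberately minimal caricature of the idea, not the paper's field equations: each event drives a population whose activity ramps up monotonically from the event's onset and is then held by (here, idealized) self-stabilizing dynamics. The stored amplitudes form a gradient in which rank encodes serial order and amplitude differences encode inter-onset intervals; dividing the gradient by a speed factor illustrates a one-shot temporal rescaling in the spirit of the adaptation rule. All function names and parameters (`observe`, `recall`, `adapt`, `rate`, `dt`) are illustrative assumptions, not taken from the model itself.

```python
import numpy as np

def observe(onsets, t_end, rate=1.0, dt=0.01):
    """Build the activation gradient while a sequence is observed.

    Population i starts ramping at its event's onset; activity is held
    afterwards (an idealization of self-stabilized memory), so earlier
    events end up with higher amplitude.
    """
    onsets = np.asarray(onsets, dtype=float)
    u = np.zeros(len(onsets))
    for step in range(int(t_end / dt)):
        t = step * dt
        u[onsets <= t] += rate * dt  # monotonic buildup for already-seen events
    return u

def recall(u, rate=1.0):
    """Decode serial order and inter-onset intervals from the gradient."""
    order = np.argsort(-u)                         # highest amplitude = earliest event
    intervals = np.diff(np.sort(u)[::-1]) / -rate  # amplitude gaps -> time gaps
    return order, intervals

def adapt(u, speed_factor):
    """One-shot rescaling of the stored gradient to a new execution speed."""
    return u / speed_factor

u = observe(onsets=[0.0, 1.5, 4.0], t_end=6.0)
print(recall(u))              # order [0 1 2], intervals ~ [1.5, 2.5]
print(recall(adapt(u, 2.0)))  # same order, intervals halved
```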