Many repetitive control problems are characterized by disturbances that have the same effect in each successive execution of the same control task. Such disturbances comprise lumped representations of unmodeled parts of the open-loop system dynamics, systematic model mismatch or, more generally, deterministic yet unknown uncertainty. In such cases, well-known strategies for iterative learning control enhance the system behavior not only by exploiting data gathered during a single execution of the task but also by using information from previous executions. The corresponding dual problem, namely, iterative learning state and disturbance estimation, has not yet received the same amount of attention. However, improved estimates of the states and disturbances that recur in each execution directly improve the estimation accuracy and, in future work, can also be exploited to optimize the control accuracy. In this paper, we present a joint design procedure for observer gains in two independent dimensions: a gain for processing information in the temporal domain during a single execution of the task (also called a trial) and a gain for learning in the iteration domain (i.e., from trial to trial).
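To fix the notation for this two-dimensional structure, consider a minimal sketch under assumed linear discrete-time notation (the matrices $A$, $B$, $C$, $E$ and the gains $L_{\mathrm{t}}$, $L_{\mathrm{i}}$ are illustrative placeholders, not the specific design developed in this paper). With $\hat{x}_k^{(j)}$ and $\hat{d}_k^{(j)}$ denoting the state and disturbance estimates at time step $k$ of trial $j$, a generic two-gain iterative learning observer could take the form
\[
\hat{x}_{k+1}^{(j)} = A\,\hat{x}_k^{(j)} + B\,u_k^{(j)} + E\,\hat{d}_k^{(j)} + L_{\mathrm{t}}\bigl(y_k^{(j)} - C\,\hat{x}_k^{(j)}\bigr),
\qquad
\hat{d}_k^{(j+1)} = \hat{d}_k^{(j)} + L_{\mathrm{i}}\bigl(y_k^{(j)} - C\,\hat{x}_k^{(j)}\bigr),
\]
where $L_{\mathrm{t}}$ processes the output error in the temporal domain within a single trial and $L_{\mathrm{i}}$ updates the estimate of the trial-invariant disturbance in the iteration domain, i.e., from trial to trial.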