Relational reinforcement learning is a promising branch of reinforcement learning research that deals with structured environments. In these environments, states and actions are differentiated by the presence of certain types of objects, the relations between them, and the objects involved in the actions. This makes it ideally suited for tasks that require the manipulation of multiple, interacting objects, such as the tasks a future household robot can be expected to perform, like clearing a dinner table or putting away clean dishes. However, the application of relational reinforcement learning to robotics has been hindered by assumptions such as discrete and atomic state observations. Typical robotic observation systems work in a streaming setup, where objects are discovered and recognized, and their placement within their surroundings is determined, in a quasi-continuous manner rather than a state-based one. The resulting information stream can be viewed as a set of multiple interrelated data streams. In this paper, we propose an adaptive windowing strategy that generates a stream of learning examples and thereby enables relational learning from this kind of data. Our approach is independent of the learning algorithm and is based on a gradient search over the space of parameter values, i.e., window sizes, guided by an estimate of the testing error. The proposed algorithm operates online and is data-driven and flexible. To the best of our knowledge, this is the first work to address this problem. Our ideas are supported by an extensive empirical evaluation in a controlled setup using artificial data.
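To make the windowing idea concrete, the following is a minimal, self-contained sketch of a gradient-style search over window sizes guided by an estimated testing error. It uses a simple windowed-mean predictor on a synthetic drifting stream as stand-ins for the relational learner and the robot's observation stream; all names, parameters, and the error model are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import deque

def estimated_test_error(data, w, n_test=20):
    """Crude estimate of the testing error for window size w: mean squared
    error of a windowed-mean predictor on the most recent observations.
    The windowed mean is an illustrative stand-in for the relational learner."""
    start = max(1, len(data) - n_test)
    errs = []
    for i in range(start, len(data)):
        window = data[max(0, i - w):i]        # "train" on the w items before i
        pred = sum(window) / len(window)      # trivial model: window mean
        errs.append((pred - data[i]) ** 2)    # test on the held-out item
    return sum(errs) / len(errs) if errs else float("inf")

def adapt_window_size(data, w, step=5, w_min=5, w_max=500):
    """One gradient-style update: probe the estimated testing error at
    w - step and w + step, then move toward the smaller error."""
    err_down = estimated_test_error(data, max(w_min, w - step))
    err_up = estimated_test_error(data, min(w_max, w + step))
    return max(w_min, w - step) if err_down < err_up else min(w_max, w + step)

random.seed(0)
buffer, w = deque(maxlen=1000), 50
for t in range(2000):
    drift = 0.0 if t < 1000 else 3.0           # artificial concept drift
    buffer.append(drift + random.gauss(0, 1))  # quasi-continuous observation
    if t % 25 == 0 and len(buffer) > w:
        w = adapt_window_size(list(buffer), w)
print("final window size:", w)
```

In this toy setup, the window size is nudged toward whichever neighboring size yields the lower estimated testing error, so it tends to shrink after the artificial drift at t = 1000 and to grow again once the stream becomes stationary, mirroring the data-driven, online behavior described above.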