Abstract. In the field of recommender systems, it is common practice to use publicly available datasets from different application domains (e.g., MovieLens, Book-Crossing, or EachMovie) to evaluate recommendation algorithms. These datasets serve as benchmarks for developing new recommendation algorithms and for comparing them to existing algorithms in given settings. In this paper, we explore datasets that capture learner interactions with tools and resources. We use these datasets to evaluate and compare the performance of different recommendation algorithms for Technology Enhanced Learning (TEL). We present an experimental comparison of the accuracy of several collaborative filtering algorithms applied to these TEL datasets, and we elaborate on implicit relevance data, such as downloads and tags, that can be used to augment explicit relevance evidence in order to improve the performance of recommendation algorithms.