This paper evaluates a blended learning methodology for Relational Database Systems. Our module offers students a range of interconnected tools and teaching resources. Among them is TestSQL, a query tool that gives students automated feedback on SQL query exercises; however, we do not use it to assess the students. Instead, assessment is through a range of questions that test not only SQL writing skills but also other aspects of the field, including questions on optimisation, physical modelling, and PL/SQL, as well as indirect questions on SQL knowledge, such as processing order. The effectiveness of the approach is investigated through a survey of student attitudes and through assessment data. Our analysis shows, unsurprisingly, that students' use of more resources correlates significantly with better results; but also that success at the different sub-topics tested is poorly correlated, which shows that students can master some topics while remaining weak at others; and finally, that performance on indirect SQL questions is the best predictor of success at each of the other sub-topics. This last result confirms our choice to broaden the testing of SQL skills, and it has implications for the use of automated SQL assessment tools: we recommend that, in automated testing for Database Systems, SQL writing tests be complemented with indirect questions on keyword use, parsing, or error recognition, aimed at revealing the broader abilities of learners.