Web crawlers are used by internet search engines to gather information about the web graph. In this paper we investigate a simple process which models such software by walking around the vertices of a graph. Once initial random vertex weights have been assigned, the robot crawler traverses the graph deterministically, following a greedy algorithm: it always visits the neighbour of least weight and then updates that weight to be the highest overall. We consider the maximum, minimum and average number of steps taken by the crawler to visit every vertex of, firstly, complete k-partite graphs and, secondly, sparse Erdős–Rényi random graphs. Our work follows on from a paper of Bonato et al., who introduced the model.
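
For concreteness, the following is a minimal Python sketch of the crawler process as described above, under the assumptions that the crawler starts at the vertex of least initial weight, that updated weights exceed all weights assigned so far, and that "steps" counts the moves made after the starting vertex; the function name robot_crawler and the adjacency-list representation are illustrative and not taken from the paper.

```python
import random

def robot_crawler(adj, seed=None):
    """Simulate the greedy robot crawler on a connected graph.

    adj: dict mapping each vertex to a list of its neighbours.
    Returns the number of steps taken to visit every vertex.
    """
    rng = random.Random(seed)
    vertices = list(adj)

    # Assign initial weights via a uniformly random permutation of the vertices.
    perm = vertices[:]
    rng.shuffle(perm)
    weight = {v: i for i, v in enumerate(perm)}

    # Start at the vertex of least initial weight.
    current = min(vertices, key=weight.get)
    visited = {current}
    max_weight = max(weight.values()) + 1
    weight[current] = max_weight
    steps = 0

    while len(visited) < len(vertices):
        # Greedy rule: move to the neighbour of least weight ...
        current = min(adj[current], key=weight.get)
        visited.add(current)
        # ... then update its weight to be the highest overall.
        max_weight += 1
        weight[current] = max_weight
        steps += 1
    return steps

# Example: the complete bipartite graph K_{2,3}.
K23 = {0: [2, 3, 4], 1: [2, 3, 4], 2: [0, 1], 3: [0, 1], 4: [0, 1]}
print(robot_crawler(K23, seed=1))
```

Averaging the returned step count over many random seeds gives an empirical estimate of the average-case quantity studied in the paper, while the maximum and minimum over all initial weight assignments correspond to the worst and best cases.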