Online context-based domains such as recommendation systems strive to promptly suggest appropriate items to users according to information about the items and the users. However, such contextual information may not be available in practice, where the only information we can utilize is users' interaction data. Furthermore, the lack of click records, especially for new users, degrades the performance of the system. To address these issues, we combine similarity measurement, one of the key techniques in collaborative filtering, with the online contextual multi-armed bandit mechanism. The similarity between the context of a previously selected item and each candidate item is calculated and weighted, and an adaptive method is proposed for adjusting the weights according to the time elapsed since the click. The weighted similarity is then multiplied by the action value to determine which action is optimal and which is the poorest. Additionally, we propose an exploration probability equation, built on the number of times the poorest action has been selected and the variance of the action values, to balance exploration and exploitation. A regret analysis is given and an upper bound on the regret is proved. Empirical studies on three benchmarks, a synthetic random dataset, Yahoo! R6A, and MovieLens, demonstrate the effectiveness of the proposed method.
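The abstract describes the selection rule only at a high level. The sketch below illustrates one plausible reading of it: a time-decayed similarity weight multiplied by each arm's action value, with an exploration probability driven by the variance of the action values and the selection count of the poorest arm. The decay function `time_decay_weight`, the parameter `tau`, the use of cosine similarity, and the exact shape of `eps` are illustrative assumptions, not the paper's actual formulas.

```python
import numpy as np

def time_decay_weight(elapsed, tau=100.0):
    # Assumed decay form: influence shrinks as time since the click grows.
    return np.exp(-elapsed / tau)

def weighted_similarity(candidate, history, now, tau=100.0):
    # history: list of (item_vector, click_time) for previously clicked items.
    # Cosine similarity to each clicked item, weighted by time since the click.
    if not history:
        return 1.0  # no click records yet (e.g. a new user): neutral weight
    sims = []
    for vec, t in history:
        denom = np.linalg.norm(vec) * np.linalg.norm(candidate) + 1e-12
        sims.append(time_decay_weight(now - t, tau) * (vec @ candidate) / denom)
    return float(np.mean(sims))

def select_action(action_values, contexts, history, counts, now, rng):
    # Score each arm by its action value times its weighted similarity.
    scores = np.array([q * weighted_similarity(x, history, now)
                       for q, x in zip(action_values, contexts)])
    best, worst = int(np.argmax(scores)), int(np.argmin(scores))
    # Exploration probability (one plausible form): grows with the spread
    # of action values, shrinks as the poorest arm accumulates selections.
    eps = min(1.0, np.var(action_values) / (1.0 + counts[worst]))
    if rng.random() < eps:
        return int(rng.integers(len(action_values)))  # explore
    return best  # exploit
```

Under this reading, exploration is throttled per-arm: once the apparently poorest action has been tried often enough to trust its estimate, the variance-driven exploration probability decays and the policy shifts toward exploiting the highest weighted score.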