Current recommender systems consider various aspects of items when making recommendations. Different users attach different importance to these aspects, which can be thought of as a preference/attention weight vector. Most existing recommender systems assume that, for an individual, this vector is the same for all items. However, this assumption is often invalid, especially for a user's interactions with items of diverse characteristics. To tackle this problem, in this paper we develop a novel aspect-aware recommender model named A$^3$NCF, which can capture the varying attention that a user pays to the aspects of different items. Specifically, we design a new topic model to extract user preferences and item characteristics from review texts. These are then used to 1) guide the representation learning of users and items, and 2) capture a user's attention to each aspect of the target item via an attention network. Through extensive experiments on several large-scale datasets, we demonstrate that our model outperforms state-of-the-art review-aware recommender systems on the rating prediction task.
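A minimal sketch of the item-specific aspect-attention idea, assuming the topic model yields a K-dimensional aspect preference vector per user and a K-dimensional aspect characteristic vector per item; the one-layer scoring network and the names (`aspect_attention`, `W`, `b`, `h`) are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def aspect_attention(user_pref, item_char, W, b, h):
    """Item-specific attention over K aspects (softmax of per-aspect scores)."""
    K = user_pref.shape[0]
    scores = np.empty(K)
    for k in range(K):
        # Score aspect k from the (user preference, item characteristic) pair.
        x = np.array([user_pref[k], item_char[k]])
        scores[k] = h @ np.tanh(W @ x + b)
    e = np.exp(scores - scores.max())  # numerically stable softmax
    return e / e.sum()                 # weights vary per (user, item) pair

rng = np.random.default_rng(0)
K, d = 5, 8
weights = aspect_attention(rng.random(K), rng.random(K),
                           rng.normal(size=(d, 2)), np.zeros(d),
                           rng.normal(size=d))
print(weights)  # sums to 1; differs for items with different characteristics
```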
Personalized hashtag recommendation methods aim to suggest hashtags that users can adopt to annotate, categorize, and describe their posts. The hashtags that a user assigns to a post (e.g., a micro-video) are those that, in her mind, best describe the post content she is interested in. This means that we should consider both users' preferences for post content and their personal understanding of the hashtags. Most existing methods model either the interactions between hashtags and posts or the interactions between users and hashtags, and thus have not fully explored the complicated interactions among users, hashtags, and micro-videos. In this paper, toward personalized micro-video hashtag recommendation, we propose a Graph Convolution Network based Personalized Hashtag Recommendation (GCN-PHR) model, which leverages recently advanced GCN techniques to model the complicated interactions among users, hashtags, and micro-videos.
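To make the graph-convolution machinery concrete, here is a hedged sketch of one GCN layer propagating information over a tiny user-hashtag-video graph; the toy graph, embedding sizes, and the dot-product scoring step are assumptions for illustration, not the GCN-PHR architecture:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degree normalization
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy tripartite graph: nodes 0-1 are users, 2-3 hashtags, 4-5 micro-videos.
A = np.zeros((6, 6))
for i, j in [(0, 4), (1, 5), (0, 2), (1, 3), (2, 4), (3, 5)]:
    A[i, j] = A[j, i] = 1.0  # user-video, user-hashtag, hashtag-video edges

rng = np.random.default_rng(0)
H = gcn_layer(A, rng.normal(size=(6, 8)), rng.normal(size=(8, 8)))
score = H[0] @ H[3]  # e.g., user 0's affinity for hashtag node 3
print(score)
```

After one layer, each node's embedding already mixes information from its user, hashtag, and video neighbors, which is why message passing suits this tripartite interaction structure.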
Rapid advances in mobile devices and cloud-based music services now allow consumers to enjoy music anytime and anywhere. Consequently, there has been increasing demand for intelligent techniques that facilitate context-aware music recommendation. However, one important context that is generally overlooked is the user's venue, which often includes the surrounding atmosphere, correlates with activities, and greatly influences the user's music preferences. In this article, we present a novel venue-aware music recommender system called VenueMusic that effectively identifies suitable songs for various types of popular venues in our daily lives. Toward this goal, a Location-aware Topic Model (LTM) is proposed to (i) mine the common features of songs that are suitable for a venue type in a latent semantic space and (ii) represent songs and venue types in the shared latent space, in which songs and venue types can be directly matched. To discover meaningful latent topics with the LTM, a Music Concept Sequence Generation (MCSG) scheme is designed to extract effective semantic representations for songs. An extensive experimental study on two large music test collections demonstrates the effectiveness of the proposed topic model and MCSG scheme. Comparisons with state-of-the-art music recommender systems show that VenueMusic achieves superior recommendation accuracy by associating venues and music content in a latent semantic space. This work is a pioneering study on the development of venue-aware music recommender systems, and the results show the importance of considering venue types in the development of context-aware music recommender systems.
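As a hedged illustration of matching songs and venue types in the shared latent space, the sketch below ranks songs for a venue by cosine similarity between topic vectors; the choice of cosine similarity and the random `venue`/`songs` vectors are assumptions (in practice the vectors would come from the LTM):

```python
import numpy as np

def rank_songs_for_venue(venue_topic, song_topics):
    """Rank songs by cosine similarity to a venue type in the shared topic space."""
    v = venue_topic / np.linalg.norm(venue_topic)
    S = song_topics / np.linalg.norm(song_topics, axis=1, keepdims=True)
    sims = S @ v                      # cosine similarity per song
    return np.argsort(-sims), sims

rng = np.random.default_rng(0)
venue = rng.random(10)                # a venue type's topic vector (assumed)
songs = rng.random((100, 10))         # topic vectors for 100 songs (assumed)
order, sims = rank_songs_for_venue(venue, songs)
print(order[:5], sims[order[:5]])     # top-5 candidate songs for this venue
```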
Unsupervised video object segmentation aims to automatically segment moving objects in an unconstrained video without any user annotation. So far, only a few unsupervised online methods have been reported in the literature, and their performance remains far from satisfactory, because complementary information from future frames cannot be exploited in the online setting. To address this challenging problem, in this paper we propose a novel Unsupervised Online Video Object Segmentation (UOVOS) framework that construes the motion property of segmented regions as moving in concurrence with a generic object. By incorporating salient motion detection and object proposals, a pixel-wise fusion strategy is developed to effectively remove detection noise such as dynamic backgrounds and stationary objects. Furthermore, by leveraging the segmentation obtained from the immediately preceding frames, a forward propagation algorithm is employed to handle unreliable motion detections and object proposals. Experimental results on several benchmark datasets demonstrate the efficacy of the proposed method: compared to state-of-the-art unsupervised online segmentation algorithms, it achieves an absolute gain of 6.2%, and it outperforms the best unsupervised offline algorithm on the DAVIS-2016 benchmark dataset. Our code is available on the project website: https://github.com/visiontao/uovos.
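A minimal sketch of the pixel-wise fusion idea under simple assumptions: thresholded motion-saliency and objectness maps combined by a logical AND (the actual UOVOS fusion strategy may be more sophisticated; `tau` and the map names are hypothetical):

```python
import numpy as np

def fuse_masks(motion_saliency, objectness, tau=0.5):
    """Keep pixels that are both salient in motion AND object-like.

    Moving-but-not-object pixels (dynamic background) and
    object-but-not-moving pixels (stationary objects) are suppressed.
    """
    return (motion_saliency > tau) & (objectness > tau)

rng = np.random.default_rng(0)
h, w = 4, 6
motion = rng.random((h, w))       # salient-motion map (assumed given)
proposals = rng.random((h, w))    # aggregated object-proposal map (assumed)
print(fuse_masks(motion, proposals).astype(int))  # 1 = moving generic object
```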