In visual search systems, a central challenge is how to leverage rich contextual information in a visual computational model to build more robust systems that better satisfy users' needs and intentions. In this paper, we introduce a ranking model that captures the complex relations between product visual and textual information in visual search systems. To model these relations, we adopt graph-based paradigms over product images, product category labels, and product names and descriptions. We develop a unified probabilistic hypergraph ranking algorithm that models the correlations between product visual features and textual features, thereby substantially enriching the description of each image. We evaluate the proposed ranking algorithm on a dataset collected from a real e-commerce website. The comparison results demonstrate that our algorithm substantially improves retrieval performance over visual-distance-based ranking.
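The core of hypergraph ranking can be sketched as follows. This is a minimal, generic implementation of the standard hypergraph regularization scheme (vertices ranked by iterative propagation over weighted hyperedges), not the paper's full probabilistic model; the incidence matrix, weights, and parameter values here are illustrative assumptions.

```python
import numpy as np

def hypergraph_rank(H, y, w=None, alpha=0.9, iters=100):
    """Rank vertices on a hypergraph by iterative label propagation.

    H     : (n_vertices, n_edges) incidence matrix (1 if vertex in hyperedge)
    y     : (n_vertices,) query indicator vector
    w     : optional hyperedge weights (defaults to uniform)
    alpha : propagation strength (0 < alpha < 1 keeps the iteration contractive)
    """
    n, m = H.shape
    w = np.ones(m) if w is None else w
    dv = H @ w                       # weighted vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees (cardinalities)
    Dv = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    inv_De = np.diag(1.0 / np.maximum(de, 1e-12))
    # Normalized vertex-to-vertex affinity through shared hyperedges
    Theta = Dv @ H @ np.diag(w) @ inv_De @ H.T @ Dv
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (Theta @ f) + (1 - alpha) * y
    return f
```

In a product-search setting, one hyperedge might group all images sharing a category label and another group visually similar images, so a query image's score propagates to products related by either modality.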
Recommendation has become a mainstream feature in today's e-commerce because of its significant contribution to revenue and customer satisfaction. Given hundreds of millions of user activity logs and product items, accurate and efficient recommendation is a challenging computational task. This paper introduces a new soft hierarchical clustering algorithm, Fuzzy Hierarchical Co-clustering (FHCC), and applies it to detect joint user-product groups from users' behavior data for collaborative filtering recommendation. Via FHCC, complex relations among different data sources can be analyzed and understood comprehensively. Moreover, FHCC can adapt to different types of applications, depending on which data sources are available, by carefully adjusting the weights assigned to each source. Experimental evaluations are performed on a benchmark rating dataset to extract user-product co-clusters. The results show that our proposed approach provides more meaningful recommendations and outperforms existing item-based and user-based collaborative filtering in terms of accuracy and ranked position.
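To illustrate the idea of soft user-product co-clustering on a rating matrix, here is a minimal sketch using nonnegative matrix factorization with row-normalized factors as fuzzy memberships. This is a generic stand-in for intuition only, not the paper's FHCC algorithm (it is neither hierarchical nor multi-source); the rating matrix and cluster count are illustrative assumptions.

```python
import numpy as np

def soft_cocluster(R, k, iters=200, seed=0):
    """Soft user-product co-clustering of a rating matrix R (n_users, n_items).

    Factorizes R ~= U @ V.T with multiplicative NMF updates, then normalizes
    rows of U and V into fuzzy cluster memberships over k co-clusters.
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.random((n, k)) + 0.1     # positive init for users
    V = rng.random((m, k)) + 0.1     # positive init for items
    for _ in range(iters):
        U *= (R @ V) / np.maximum(U @ (V.T @ V), 1e-12)
        V *= (R.T @ U) / np.maximum(V @ (U.T @ U), 1e-12)
    # Row-normalize so each row is a soft membership distribution over clusters
    Um = U / U.sum(axis=1, keepdims=True)
    Vm = V / V.sum(axis=1, keepdims=True)
    return Um, Vm
```

Because users and items share the same latent components, each component defines a joint user-product group; a recommender can then suggest items whose membership profile matches the user's.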
Feature extraction is an essential step in many image processing and computer vision tasks, such as object recognition, image retrieval, 3D reconstruction, and virtual reality. The design of the feature extraction method is probably the single most important factor in achieving high performance on such tasks, and different applications pose different challenges and requirements for the design of visual features. In this paper, we investigate the effectiveness of different combinations of promising local feature detectors and descriptors for non-rigid 3D objects. We enumerate configurations of visual feature detectors and descriptors and evaluate each configuration by image matching accuracy. The results indicate that the scale-invariant feature transform (SIFT) detector and descriptor achieve the best overall performance in describing local features of non-rigid 3D objects.
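The evaluation criterion above, image matching accuracy, can be sketched as follows: match descriptors between two views with nearest-neighbor search plus Lowe's ratio test, then score matches against ground-truth correspondences. This is a pure-numpy illustration, not the paper's pipeline; in practice the descriptors would come from a detector/descriptor implementation such as OpenCV's `cv2.SIFT_create()`, and the toy descriptor arrays below are made up.

```python
import numpy as np

def ratio_test_match(des1, des2, ratio=0.75):
    """Nearest-neighbor descriptor matching with Lowe's ratio test.

    des1, des2 : (n, d) descriptor arrays from two images.
    Returns a list of (i, j) index pairs accepted as matches.
    """
    matches = []
    for i, d in enumerate(des1):
        dists = np.linalg.norm(des2 - d, axis=1)
        order = np.argsort(dists)
        # Accept only if the best match is clearly better than the second best
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

def matching_accuracy(matches, ground_truth):
    """Fraction of accepted matches that agree with known correspondences."""
    if not matches:
        return 0.0
    return sum(m in ground_truth for m in matches) / len(matches)
```

Running every detector/descriptor configuration through this matcher on the same image pairs gives a directly comparable accuracy score per configuration.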