Network representation learning, an approach to learning low-dimensional representations of vertices, has attracted considerable research attention recently and has proven extremely useful in many machine learning tasks over large graphs. Most existing methods focus on learning structural representations of vertices in a static network, but cannot guarantee an accurate and efficient embedding in a dynamic network scenario. The fundamental problem of continuously and efficiently capturing the dynamic properties of a network remains unsolved. To address this issue, we present an efficient incremental skip-gram algorithm with negative sampling for dynamic network embedding, and provide a set of theoretical analyses to characterize its performance guarantee. Specifically, we first partition a dynamic network over time into the updated part, covering addition/deletion of links and vertices, and the retained part. Then we factorize the objective function of network embedding into the added, vanished, and retained parts of the network. Next, we provide a new stochastic gradient-based method, guided by the partitions of the network, to update the node and parameter vectors. The proposed algorithm is proven to yield an objective function value with a bounded difference from that of the original objective function: the first-order moment of the objective difference converges in order O(1/n^2), and the second-order moment of the objective difference is stabilized in order O(1). Experimental results show that our proposal significantly reduces training time while preserving comparable performance, and they confirm the correctness of the theoretical analysis and the practical usefulness of the dynamic network embedding. We perform extensive experiments on multiple real-world large network datasets over multi-label classification and link prediction tasks to evaluate the effectiveness and efficiency of the proposed framework, and achieve a speedup of up to 22 times.
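To make the incremental update step concrete, below is a minimal Python sketch of how an incremental skip-gram-with-negative-sampling (SGNS) update over the added and vanished edge partitions might look. This is our illustration under stated assumptions, not the paper's implementation: all names (EMB_DIM, LR, K_NEG, sgns_step, update_incremental) and the treatment of vanished edges as negative signals are assumptions made for the example.

```python
# Minimal sketch (not the authors' exact algorithm) of incremental SGNS
# for a dynamic network: only vertices touched by added/vanished edges
# are updated; the retained part of the network is left untouched.
import numpy as np

EMB_DIM, LR, K_NEG = 64, 0.025, 5  # illustrative hyperparameters
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(emb, ctx, u, v, neg_nodes):
    """One SGD step on the SGNS objective for the positive pair (u, v)."""
    grad_u = np.zeros(EMB_DIM)
    # Positive sample: pull emb[u] and ctx[v] together.
    g = 1.0 - sigmoid(emb[u] @ ctx[v])
    grad_u += g * ctx[v]
    ctx[v] += LR * g * emb[u]
    # Negative samples: push emb[u] away from each ctx[n].
    for n in neg_nodes:
        g = -sigmoid(emb[u] @ ctx[n])
        grad_u += g * ctx[n]
        ctx[n] += LR * g * emb[u]
    emb[u] += LR * grad_u

def update_incremental(emb, ctx, added, vanished, noise_dist):
    """Apply SGD updates only for the added/vanished edge partitions."""
    nodes = list(noise_dist.keys())
    probs = np.array([noise_dist[n] for n in nodes], dtype=float)
    probs /= probs.sum()
    for (u, v) in added:
        for a, b in ((u, v), (v, u)):  # undirected: update both directions
            neg = rng.choice(nodes, size=K_NEG, p=probs)
            sgns_step(emb, ctx, a, b, neg)
    for (u, v) in vanished:
        # Assumption for this sketch: a removed edge acts as a negative
        # signal, pushing the endpoint vectors apart.
        for a, b in ((u, v), (v, u)):
            g = -sigmoid(emb[a] @ ctx[b])
            delta_a = LR * g * ctx[b]
            ctx[b] += LR * g * emb[a]
            emb[a] += delta_a

# Toy usage: three nodes, one edge added, one edge removed.
nodes = [0, 1, 2]
emb = {n: rng.normal(scale=0.1, size=EMB_DIM) for n in nodes}
ctx = {n: np.zeros(EMB_DIM) for n in nodes}
noise = {n: 1.0 for n in nodes}  # uniform noise distribution for the demo
update_incremental(emb, ctx, added=[(0, 1)], vanished=[(1, 2)], noise_dist=noise)
```

The point of the sketch is the design choice the abstract describes: only vertices incident to updated edges are touched, while embeddings in the retained part of the network stay fixed, which is where the training-time savings over full retraining come from.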
Uncertainty before purchase often gives rise to postpurchase emotions that consumers may anticipate when making purchase decisions. Our study investigates how consumers' anticipated postpurchase regret affects their optimal search behavior and, in turn, firms' price and assortment competition. The key tension is that consumers trade off saving product-evaluation costs by searching less against alleviating potential postpurchase regret by searching more. We use a classical sequential search framework to examine this tension. Our results show that anticipated regret encourages more intense search across competitive alternatives, leading to intensified price competition when search depth is exogenous (searching a fixed number of attributes) or when search depth is endogenous but full-depth search (inspecting all attributes) emerges (with high regret intensity). However, when search depth is endogenous but partial-depth search (inspecting a subset of attributes) emerges (with low regret intensity), anticipated regret begins to soften firms' price competition. In addition, anticipated regret can achieve a "win-win-win" outcome for consumers, firms, and the social planner. Moreover, multiproduct firms deploy different competitive devices at different levels of regret intensity: when regret intensity is low (high), firms focus on assortment (price) competition to retain consumers. The relevant managerial implications are discussed.
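To illustrate the key tension, one minimal way to formalize anticipated regret in a sequential search setting, in the spirit of classical regret theory, is sketched below. The regret-intensity parameter gamma, the per-attribute inspection cost c, and the linear-regret functional form are our assumptions for illustration, not necessarily the study's exact specification.

```latex
% Illustrative sketch only: gamma (regret intensity), c (inspection cost),
% and the linear-regret form are assumptions made for this example.
% Regret-adjusted utility from buying product i out of the considered set:
\[
  U_i \;=\; u_i \;-\; \gamma \max\!\Big(0,\; \max_{j \neq i} u_j \;-\; u_i\Big),
  \qquad \gamma \ge 0 .
\]
% A consumer inspects one more attribute only if the expected gain,
% including the reduction in anticipated regret, covers the cost:
\[
  \mathbb{E}\big[\Delta U \,\big|\, \text{inspect one more attribute}\big] \;\ge\; c .
\]
% A larger gamma raises the value of additional inspection, so anticipated
% regret deepens search, which is the channel the abstract describes.
```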