Business ecosystems consist of a heterogeneous and continuously evolving set of entities that are interconnected through a complex, global network of relationships. However, there is no well-established methodology to study the dynamics of this network. Traditional approaches have primarily utilized a single source of data on relatively established firms; however, these approaches ignore the vast number of relevant activities that often occur at the individual and entrepreneurial levels. We argue that a data-driven visualization approach, using both institutionally and socially curated datasets, can provide important complementary, triangulated explanatory insights into the dynamics of interorganizational networks in general and business ecosystems in particular. We develop novel visualization layouts to help decision makers systemically identify and compare ecosystems. Using traditionally disconnected data sources on deals and alliance relationships (DARs), executive and funding relationships (EFRs), and public opinion and discourse (POD), we empirically illustrate our data-driven method of data triangulation and visualization techniques through three cases in the mobile industry: Google’s acquisition of Motorola Mobility, the coopetitive relation between Apple and Samsung, and the strategic partnership between Nokia and Microsoft. The article concludes with implications and future research opportunities.
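The triangulation idea above can be sketched in a few lines: each curated source contributes an edge list of firm relationships, and pairs attested by more than one source form the triangulated core of the ecosystem network. The edge lists below are hypothetical illustrations, not data from the study.

```python
# Minimal sketch of triangulating ecosystem relationships across sources.
# The relationship edges shown here are illustrative, not the study's data.
from collections import defaultdict

# Each source type contributes (firm_a, firm_b) relationship edges.
sources = {
    "DAR": [("Google", "Motorola Mobility"), ("Nokia", "Microsoft")],
    "EFR": [("Nokia", "Microsoft"), ("Apple", "Samsung")],
    "POD": [("Apple", "Samsung"), ("Google", "Motorola Mobility")],
}

def triangulate(sources):
    """Map each undirected firm pair to the set of sources attesting it."""
    evidence = defaultdict(set)
    for name, edges in sources.items():
        for a, b in edges:
            evidence[frozenset((a, b))].add(name)
    return evidence

evidence = triangulate(sources)
# Pairs attested by more than one source are triangulated relationships.
triangulated = {pair for pair, srcs in evidence.items() if len(srcs) > 1}
```

A visualization layer would then render `triangulated` pairs with stronger edges than single-source ones; the sketch only covers the data-merging step.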
The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor-quality data during training, that is, garbage in, garbage out. Active learning aims to remedy this problem by focusing on obtaining better-quality data that more aptly reflects a user's preferences. However, traditional evaluation of active learning strategies has two major flaws, with significant negative ramifications for accurately evaluating the system's performance (prediction error, precision, and quantity of elicited ratings). (1) Performance has been evaluated for each user independently (ignoring system-wide improvements). (2) Active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition).
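To make the notion of a rating elicitation strategy concrete, the sketch below implements one well-known baseline, a highest-variance strategy: ask the target user to rate the items whose ratings vary most across other users, on the grounds that such ratings are most informative. The rating matrix is a hypothetical toy example, and this strategy is a generic illustration, not necessarily one of the strategies evaluated in the article.

```python
# Minimal sketch of a variance-based rating elicitation strategy.
# The rating data is a hypothetical toy example.
from statistics import pvariance

ratings = {  # item -> ratings from other users
    "item_a": [5, 5, 5, 5],
    "item_b": [1, 5, 2, 5],
    "item_c": [3, 3, 4, 3],
}

def highest_variance_items(ratings, k):
    """Return the k items with the largest rating variance,
    i.e. the items whose elicited ratings are most informative."""
    return sorted(ratings, key=lambda i: pvariance(ratings[i]), reverse=True)[:k]

to_elicit = highest_variance_items(ratings, k=2)  # ["item_b", "item_c"]
```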
In this article we show that an elicited rating has effects across the system, so a typical user-centric evaluation, which ignores any changes in the rating predictions for other users, also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system-centric). We propose a new evaluation methodology and use it to evaluate some novel and state-of-the-art rating elicitation strategies. We found that the system-wide effectiveness of a rating elicitation strategy depends on the stage of the rating elicitation process, and on the evaluation measures (MAE, NDCG, and Precision). In particular, we show that using some common user-centric strategies may actually degrade the overall performance of a system. Finally, we show that the performance of many common active learning strategies changes significantly when evaluated concurrently with the natural acquisition of ratings in recommender systems.
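The user-centric versus system-centric distinction can be illustrated with MAE, one of the measures named above. A user-centric evaluation averages each user's error separately, while a system-centric one pools every test rating, so cross-user effects of an elicited rating and users with many ratings weigh in proportionally. The predictions and true ratings below are hypothetical toy values, and this is only a simplified illustration of the contrast, not the article's full methodology.

```python
# Minimal sketch contrasting user-centric and system-centric MAE.
# All ratings below are hypothetical toy values.

def mae(pairs):
    """Mean absolute error over (predicted, actual) rating pairs."""
    return sum(abs(p - a) for p, a in pairs) / len(pairs)

# user -> list of (predicted, actual) test ratings
test = {
    "u1": [(4.0, 5), (3.0, 3)],
    "u2": [(2.0, 4), (5.0, 1), (3.5, 3), (1.0, 2)],
}

# User-centric: average of per-user MAEs, weighting every user equally.
user_centric = sum(mae(pairs) for pairs in test.values()) / len(test)

# System-centric: pool all test ratings across users.
all_pairs = [pair for pairs in test.values() for pair in pairs]
system_centric = mae(all_pairs)
```

The two numbers diverge whenever users have unequal error or unequal numbers of test ratings, which is why a strategy that looks good per user can still degrade the system-wide figure.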