Digital trees, also known as tries, are a general purpose flexible data structure that implements dictionaries built on sets of words. An analysis is given of three major representations of tries: array-tries, list tries, and bst-tries (ternary search tries). The size and the search costs of the corresponding representations are analysed precisely in the average case, while a complete distributional analysis of the height of tries is given. The unifying data model used is that of dynamical sources; it encompasses classical models like those of memoryless sources with independent symbols, of finite Markov chains, and of nonuniform densities. The probabilistic behaviour of the main parameters, namely size, path length, or height, appears to be determined by two intrinsic characteristics of the source: the entropy and the probability of letter coincidence. These characteristics are themselves related in a natural way to spectral properties of specific transfer operators of the Ruelle type.
Key-words: Information theory, dynamical sources, analysis of algorithms, digital trees, tries, ternary search tries, transfer operator, continued fractions.
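For concreteness, the bst-trie (ternary search trie) variant mentioned above can be sketched in a few lines: each node stores one symbol, left/right links resolve comparisons with that symbol, and a middle link advances to the next symbol of the key. The sketch below is illustrative only and is not taken from the paper; the names TSTNode, tst_insert, and tst_contains are ours, and keys are assumed to be non-empty strings.

```python
class TSTNode:
    """Node of a ternary search trie: one character, three links."""
    def __init__(self, ch):
        self.ch = ch          # branching character stored at this node
        self.lo = None        # subtree for characters < ch
        self.eq = None        # subtree for the next character of the key
        self.hi = None        # subtree for characters > ch
        self.is_key = False   # True if a stored word ends here


def tst_insert(node, word, i=0):
    """Insert word[i:] below node; returns the (possibly new) subtree root."""
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, word, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, word, i)
    elif i + 1 < len(word):
        node.eq = tst_insert(node.eq, word, i + 1)
    else:
        node.is_key = True
    return node


def tst_contains(node, word, i=0):
    """Search examines one symbol per node visited."""
    if node is None:
        return False
    ch = word[i]
    if ch < node.ch:
        return tst_contains(node.lo, word, i)
    if ch > node.ch:
        return tst_contains(node.hi, word, i)
    if i + 1 < len(word):
        return tst_contains(node.eq, word, i + 1)
    return node.is_key


root = None
for w in ["trie", "tree", "ternary", "search"]:
    root = tst_insert(root, w)
print(tst_contains(root, "ternary"), tst_contains(root, "binary"))  # True False
```

An array-trie would replace the three links with a full array indexed by the alphabet, and a list trie with a linked list of children; the ternary variant trades some search steps for much smaller nodes.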
Although much is known about how brokerage positions in social networks help individuals improve their own performance, we know little about the impact of brokers on those around them. Our study investigates brokerage as a public good. We focus on the positive and negative externalities of specific kinds of brokers: "hubs," who act as the main interfaces between members of their own network community ("network neighbors") and members of other communities. Because hubs access diverse knowledge and perspectives, they create positive externalities by providing novel ideas to their network neighbors. But hubs also generate negative externalities: extensive cross-community activity puts heavy demands on their attention and time, so that hubs may not provide strong commitment to their neighbors' projects. Because of this, network neighbors experience different externalities from hubs depending on their own formal role in projects. We use insights from our fieldwork in the French television game show industry to illustrate the mechanisms at play, and we test our theory with archival data on this industry from 1995 to 2012. Results suggest that the positive externalities of hubs help their neighbors contribute to the success of projects when these neighbors hold creativity-focused roles; yet the negative externalities of hubs hinder their neighbors' contributions when they hold efficiency-focused roles.
We revisit the classical QuickSort and QuickSelect algorithms under a complexity model that fully takes into account the elementary comparisons between symbols composing the records to be processed. Our probabilistic models belong to a broad category of information sources that encompasses memoryless (i.e., independent-symbols) and Markov sources, as well as many unbounded-correlation sources. We establish that, under our conditions, the average-case complexity of QuickSort is O(n log² n) [rather than O(n log n), classically], whereas that of QuickSelect remains O(n). Explicit expressions for the implied constants are provided by our combinatorial-analytic methods.

Introduction. Every student of a basic algorithms course is taught that, on average, the complexity of QuickSort is O(n log n), that of binary search is O(log n), and that of radix-exchange sort is O(n log n); see for instance [13,16]. Such statements are based on specific assumptions (that the comparison of data items, for the first two, and the comparison of symbols, for the third one, have unit cost), and they have the obvious merit of offering an easy-to-grasp picture of the complexity landscape. However, as noted by Sedgewick, these simplifying assumptions suffer from limitations: they do not make possible a precise assessment of the relative merits of algorithms and data structures that resort to different methods (e.g., comparison-based versus radix-based sorting) in a way that would satisfy the requirements of either information theory or algorithms engineering. Indeed, computation is not reduced to its simplest terms, namely, the manipulation of totally elementary symbols, such as bits, bytes, characters. Furthermore, such simplified analyses say little about a great many application contexts, in databases or natural language processing, for instance, where information is highly "non-atomic", in the sense that it does not plainly reduce to a single machine word.

First, we observe that, for commonly used data models, the mean costs S_n and K_n of any algorithm under the symbol-comparison and the key-comparison model, respectively, are connected by the universal relation S_n = K_n · O(log n). (This results from the fact that at most O(log n) symbols suffice, with high probability, to distinguish n keys; cf. the analysis of the height of digital trees,
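To make the distinction between the two cost models concrete, here is a small instrumented experiment (ours, not taken from the paper): a lexicographic comparison that charges one unit per symbol examined, plugged into a plain QuickSort on keys drawn from a memoryless binary source. The counters key_cmps and symbol_cmps then approximate the K_n and S_n costs for one run; function names and parameters are illustrative assumptions.

```python
import random

symbol_cmps = 0  # running symbol-comparison cost (the S_n model)
key_cmps = 0     # running key-comparison cost (the classical K_n model)


def compare(u, v):
    """Lexicographic comparison charging one unit per symbol examined."""
    global symbol_cmps, key_cmps
    key_cmps += 1
    for a, b in zip(u, v):
        symbol_cmps += 1
        if a != b:
            return -1 if a < b else 1
    return (len(u) > len(v)) - (len(u) < len(v))


def quicksort(keys):
    """Plain QuickSort on string keys, instrumented via compare()."""
    if len(keys) <= 1:
        return keys
    pivot, rest = keys[0], keys[1:]
    less, more = [], []
    for k in rest:
        (less if compare(k, pivot) < 0 else more).append(k)
    return quicksort(less) + [pivot] + quicksort(more)


# Keys emitted by a memoryless (independent-symbols) source over {a, b}.
random.seed(1)
keys = ["".join(random.choice("ab") for _ in range(20)) for _ in range(1000)]
quicksort(keys)
print(f"key comparisons K_n = {key_cmps}, symbol comparisons S_n = {symbol_cmps}")
```

Running such an experiment for growing n makes the gap between the two models visible: key comparisons grow roughly like n log n, while the symbol count grows noticeably faster, consistent with the O(n log² n) bound stated above.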
Is top-down organization design worth attempting at all, or should organizations simply let their members learn which patterns of interaction are valuable by themselves, through a bottom-up process? Our analysis of an agent-based computational model shows that weak enforcement of even a randomly selected formal structure in a top-down manner can usefully guide the bottom-up emergence of networks of intraorganizational interactions between agents. In the absence of formal structure, interactions are prone to decline within organizations, because maintaining interactions requires coordination but breaking them does not. Formal structure regenerates the network of interactions between agents, who can then learn which interactions to keep or discard. This “network regeneration effect” of formal structure offers a rationale for the importance of top-down organization design, even if the design is limited in accuracy and enforcement. The online appendix is available at https://doi.org/10.1287/mnsc.2017.2807. This paper was accepted by Sendil Ethiraj, organizations.
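As a loose illustration of the mechanism described above (not the authors' model), the toy agent-based sketch below encodes the asymmetry that forming or maintaining a tie requires both agents to agree while breaking it is unilateral, together with weak top-down re-imposition of a randomly chosen formal structure. All names and parameters (N, ENFORCE, the 0.5 agreement probabilities) are arbitrary assumptions.

```python
import random

random.seed(0)
N = 30           # number of agents
STEPS = 2000     # simulation steps
ENFORCE = 0.2    # chance per step that the formal structure re-imposes a prescribed tie

# A sparse, randomly selected formal structure (an assumption for illustration).
formal = {frozenset((i, j)) for i in range(N) for j in range(i + 1, N)
          if random.random() < 0.1}
ties = set(formal)  # start from the formally prescribed network


def step(ties):
    i, j = random.sample(range(N), 2)
    e = frozenset((i, j))
    if e in ties:
        # Breaking is unilateral: one dissatisfied agent suffices.
        if random.random() < 0.5:
            ties.discard(e)
    else:
        # Forming/maintaining requires coordination: both agents must agree.
        if random.random() < 0.5 * 0.5:
            ties.add(e)
    # Weak top-down enforcement occasionally regenerates a prescribed tie.
    if formal and random.random() < ENFORCE:
        ties.add(random.choice(list(formal)))


for _ in range(STEPS):
    step(ties)
print(f"prescribed ties: {len(formal)}, surviving ties after {STEPS} steps: {len(ties)}")
```

With ENFORCE set to 0 the network shrinks toward nothing, since breaking is easier than forming; even weak enforcement keeps a core of interactions alive, which is the regeneration effect the abstract describes.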